id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2306.11072 | Causal Effect Regularization: Automated Detection and Removal of
Spurious Attributes | In many classification datasets, the task labels are spuriously correlated
with some input attributes. Classifiers trained on such datasets often rely on
these attributes for prediction, especially when the spurious correlation is
high, and thus fail to generalize whenever there is a shift in the attributes'
correlation at deployment. If we assume that the spurious attributes are known
a priori, several methods have been proposed to learn a classifier that is
invariant to the specified attributes. However, in real-world data, information
about spurious attributes is typically unavailable. Therefore, we propose a
method to automatically identify spurious attributes by estimating their causal
effect on the label and then use a regularization objective to mitigate the
classifier's reliance on them. Compared to a recent method for identifying
spurious attributes, we find that our method is more accurate in removing the
attribute from the learned model, especially when spurious correlation is high.
Specifically, across synthetic, semi-synthetic, and real-world datasets, our
method shows significant improvement in a metric used to quantify the
dependence of a classifier on spurious attributes ($\Delta$Prob), while
obtaining better or similar accuracy. In addition, our method mitigates the
reliance on spurious attributes even under noisy estimation of causal effects.
To explain the empirical robustness of our method, we create a simple linear
classification task with two sets of attributes: causal and spurious. We prove
that our method only requires that the ranking of estimated causal effects is
correct across attributes to select the correct classifier. | Abhinav Kumar, Amit Deshpande, Amit Sharma | 2023-06-19T17:17:42Z | http://arxiv.org/abs/2306.11072v2 | # Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes
###### Abstract
In many classification datasets, the task labels are _spuriously_ correlated with some input attributes. Classifiers trained on such datasets often rely on these attributes for prediction, especially when the spurious correlation is high, and thus fail to generalize whenever there is a shift in the attributes' correlation at deployment. If we assume that the spurious attributes are known a priori, several methods have been proposed to learn a classifier that is invariant to the specified attributes. However, in real-world data, information about spurious attributes is typically unavailable. Therefore, we propose a method to automatically identify spurious attributes by estimating their causal effect on the label and then use a regularization objective to mitigate the classifier's reliance on them. Compared to a recent method for identifying spurious attributes, we find that our method is more accurate in removing the attribute from the learned model, especially when spurious correlation is high. Specifically, across synthetic, semi-synthetic, and real-world datasets, our method shows significant improvement in a metric used to quantify the dependence of a classifier on spurious attributes (\(\Delta\)Prob), while obtaining better or similar accuracy. In addition, our method mitigates the reliance on spurious attributes even under noisy estimation of causal effects. To explain the empirical robustness of our method, we create a simple linear classification task with two sets of attributes: causal and spurious. We prove that our method only requires that the _ranking_ of estimated causal effects is correct across attributes to select the correct classifier.
## 1 Introduction
When trained on datasets where the task label is spuriously correlated with some input attributes, machine learning classifiers have been shown to rely on these attributes (henceforth known as _spurious_ attributes) for prediction [8; 20; 9]. For example, in a sentiment classification dataset that we evaluate (Twitter-AAE [3]), a demographic attribute like race could be spuriously correlated with the sentiment of a sentence [7]. Classifiers trained on such datasets are at risk of failure during deployment, when the correlation between the task label and the spurious attribute changes [1; 19; 27; 8].
Assuming that a set of auxiliary attributes is available at training time (but not at test time), several methods have been proposed to mitigate the classifier's reliance on the spurious attributes. The first category of methods assumes that the spurious attributes are known a priori. They develop regularization [32; 13; 19], optimization [27] or data-augmentation [15; 14; 12] strategies to train a classifier invariant to the specified spurious attributes. The second category of methods relaxes the assumption by automatically identifying the spurious attributes and regularizing the classifier to be invariant to them. To identify spurious attributes, these methods impose assumptions on the type of spurious correlation they deal with. For example, they may assume that attributes are modified in input via symmetry transformation [22] or a group transformation [21], or the data-generating process follows a specific graph structure such as anti-causal [35]. However, all these methods consider only two possibilities--an attribute is spurious or not--which makes them susceptible to imposing incorrect invariance whenever there is a mistake in identifying the spurious attribute.
In this paper, we propose a method that regularizes the effect of attributes on a classifier proportional to their _average causal effect_ on the task label, instead of binning them as spurious (or not) using an arbitrary threshold. We propose a two-step method that (**1**) uses an effect estimation algorithm to find the causal effect of a given attribute; (**2**) regularizes the classifier proportional to the causal effect estimated for each attribute in the first step. If the estimated causal effects are correct, our method fully removes a classifier's reliance on spurious attributes (i.e., attributes with causal effect close to zero), while retaining the classifier's accuracy. But in practice it is difficult to obtain good estimates of the causal effect due to issues like non-identifiability, noise in the labels [36], or finite sample estimation
error [2]. Our method is resilient to such errors since it does not group the attributes as spurious or causal but instead regularizes the classifier proportional to the estimated effect.
We analyze our method both theoretically and empirically. First, we provide the conditions under which the causal effect is identified. An implication of our analysis is that the causal effect is identified whenever the relationship between the attribute and label is fully mediated through the observed input (e.g., text or image). This is often the case in real-world datasets where human labellers use only the input to generate the label. Even if the causal effect is not identified, we show that our method is robust to errors in effect estimation. Theoretically, on a simple classification setup with two sets of attributes--causal and spurious--we prove that one only needs the correct ranking of the causal effect estimates to learn a classifier invariant to spurious attributes. For the general case with multiple disentangled high-dimensional causal and spurious attributes, under the same condition on correct causal effect rankings, we prove that the desired classifier (that does not use spurious attributes) is preferred by our method over the baseline ERM classifier.
To confirm our result empirically, we use three datasets that introduce different levels of difficulty in estimating the causal effect of attributes: (**1**) Syn-Text: a synthetic dataset that includes an unobserved confounding variable and violates the identifiability conditions of causal effect estimators; (**2**) MNIST34: a semi-synthetic dataset from [21], where we introduce noise in the image labels; (**3**) Twitter-AAE: a real-world text dataset. To evaluate the robustness of methods to spurious correlation, we create multiple versions of each dataset with varying levels of correlation with the spurious attribute, thereby increasing the difficulty of estimating the causal effect. Even with a noisy estimate of the causal effect of attributes, our method shows significant improvement over previous algorithms in reducing the dependence of a classifier on spurious attributes, especially in the high correlation regime. Our contributions include:
* Causal effect identifiability guarantee for two realistic causal data-generating processes, which is the first step to automatically distinguish between causal and spurious attributes.
* A method, CausalReg, to train a classifier that obeys the causal effect of attributes on the label. Under a stylized setting, we show that it requires only the correct ranking of attributes.
* Evaluation on three datasets showing that CausalReg is effective at reducing spurious correlation even under noisy, high correlation, or confounded settings.
## 2 OOD Generalization under Spurious Correlation: Problem Statement
For a classification task, let \((\mathbf{x}^{i},y^{i},\mathbf{a}^{i})_{i=1}^{n}\sim\mathcal{P}\) be the set of examples sampled from the data distribution \(\mathcal{P}\), where \(\mathbf{x}^{i}\in\mathbf{X}\) are the input features, \(y^{i}\in Y\) are the task labels and \(\mathbf{a}^{i}=(a_{1}^{i},\ldots,a_{k}^{i})\), \(a_{j}^{i}\in A_{j}\) are the auxiliary attributes, henceforth known as _attributes_, for the example \(\mathbf{x}^{i}\). We use \(\mathbf{x}_{a_{j}}\) to denote an example \(\mathbf{x}\) that has attribute \(a_{j}\in A_{j}\). These attributes are observed only at training time. The goal of the classification task, referred to as the _task_ henceforth, is to predict the label \(y^{i}\) from the given input \(\mathbf{x}^{i}\). The task classifier can be written as \(c(h(\mathbf{x}))\) where \(h:\mathbf{X}\to\mathbf{Z}\) is an encoder mapping the input \(\mathbf{x}\) to a latent representation \(\mathbf{z}:=h(\mathbf{x})\) and \(c:\mathbf{Z}\to Y\) is the classifier on top of the representation \(\mathbf{Z}\).
Figure 1: To reduce dependence on spurious attributes in a classifier, our method CausalReg has two stages. Stage 1 (middle panel) estimates the causal effect of attributes on the task label. Stage 2 ensures that the causal effect of each attribute on the classifier’s prediction matches its estimated causal effect on the task label. CausalReg works well even under high spurious correlation (bottom panel) where prior methods fail, as shown by the Spuriousness score (\(\Delta\)Prob).
**Generalization to shifts in spurious correlation.** We are interested in the setting where the data-generating process is _causal_ [28], i.e., there is a certain set of _causal_ attributes that affect the task label \(y\). Upon changing these causal attributes, the task label changes. Apart from these attributes, there could be other attributes defining the overall data-generating process (see Fig. 2 for examples). An attribute \(c\in\mathcal{C}\) is called a spurious attribute if it is correlated with the task label in the training dataset and thus could be used as a _shortcut_ [35, 19] by a classifier trained to predict the task label. But this correlation could change at the time of deployment, affecting the classifier's accuracy.
Using the attributes available at training time, our goal is to train a classifier \(c(h(x))\) that is robust to shifts in the correlation between the task label and the spurious attributes. We use the fact that changing spurious attributes will not lead to a change in the task label, i.e., they have zero causal effect on the task label \(y\). Hence, we use the _estimated causal effect_ of an attribute to automatically identify its degree of spuriousness. For a spurious attribute, the true causal effect on the label is zero and hence the goal is to ensure that its causal effect on the classifier's prediction is also zero. More generally, we would like to regularize the effect of each attribute on the classifier's prediction to match its causal effect on the label. In other words, unlike existing methods that aim to discover a subset of attributes that are spurious [21, 22, 35], we aim to estimate the causal effect of each attribute and match it. Since our method avoids a hard categorization of attributes that downstream regularizers need to follow, we show that this is a more robust way of handling estimation errors when the spurious correlation may be high.
## 3 Causal Effect Regularization: Minimizing reliance on spurious correlation
We now describe our method to train a classifier that generalizes to shifts in the attribute-label correlation by automatically identifying and imposing invariance w.r.t. spurious attributes. In §3.1, we provide sufficient conditions for identifying the causal effect, a crucial first step to detect the spurious attributes. Next, in §3.2 and §3.3, we present the CausalReg method and its theoretical analysis.
### Causal Effect Identification
The identifiability of the causal effect of an attribute depends on the data-generating process (DGP). Thus, given a particular DGP, one needs to find the conditions under which the causal effect of any attribute can be identified. Below we give sufficient conditions for two DGPs (DGP-1 and DGP-2 from Fig. 2). _DGP-1_ is common in many real-world datasets where the task labels are annotated based on the observed input \(X\), either automatically using some deterministic function or using human annotators [15, 26]. Thus _DGP-1_ is applicable in all settings where the input \(X\) has all the information sufficient for creating the label. Mouli and Ribeiro [22] consider a DGP where the nodes are _transformations_ (like rotation or vertical flips of an image) that generate both the input \(X\) and the task label \(Y\). We adapt their graph to our setting, where we have observed attributes as nodes in the graph and there is a hidden confounder. DGP-2 and DGP-3 represent two such adaptations of their work. In our empirical study (§4.1), we use three datasets, each of them associated with one of the above data-generating processes. Generally, for identifying the causal effect, one assumes access only to observational data (\(\mathcal{P}\)). But here we also assume access to the interventional distribution for the input, \(P(X|do(A))\), where the attribute \(A\) is set to a particular value, as in [21]. This is commonly available in vision datasets via data augmentation strategies over attributes like rotation or brightness, and also in text datasets using generative language models [4, 34]. Having access to the interventional distribution \(P(X|do(A))\) can help us identify the causal effect in certain cases where observational data (\(\mathcal{P}\)) alone cannot, as we see below.
**Proposition 3.1**.: _Let DGP-1 and DGP-2 in Fig. 2 be the causal graphs of two data-generating processes. Let \(A,C,S\) be different attributes, \(X\) be the observed input, \(Y\) be the observed task label and \(U\) be the unobserved confounding variable. In DGP-2, \(x\) is the (unobserved) core input feature that affects the label \(Y\). Then:_
1. _DGP-1 Causal Effect Identifiability:_ _Given the interventional distribution_ \(P(X|do(A))\)_, the causal effect of the attribute_ \(A\) _on the task label_ \(Y\) _is identifiable using the observed data distribution._
2. _DGP-2 Causal Effect Identifiability:_ _Let_ \(C\) _be a set of observed attributes that causally affect the task label_ \(Y\) _(unknown to us),_ \(S\) _be the set of observed attributes spuriously correlated with the task label_ \(Y\) _(again unknown to us), and let_ \(\mathcal{V}=C\cup S\) _be the given set of all the attributes. Then, if all the causal attributes are observed, the causal effect of all the attributes in_ \(\mathcal{V}\) _can be identified using the observational data distribution alone._
_Proof Sketch._ **(1)** We show that we can identify the interventional distribution \(P(Y|do(A))\) which is needed to estimate the causal effect of \(A\) on \(Y\) using the given observational distribution \(P(Y|X)\) and interventional distribution
\(P(X|do(A))\). **(2)** For both the causal attribute \(C\) and the spurious attribute \(S\), we show that we can identify the interventional distribution \(P(Y|do(S))\) or \(P(Y|do(C))\) from purely observational data using the same identity, without needing to know a priori whether the variable is causal or spurious. See §B for the proof.
In DGP-3, the causal effect of A on Y is not identified. However, as we will see empirically in Section 4, our method still works to remove the spurious correlation. In comparison, prior methods like Mouli and Ribeiro [22] provably fail under DGP-3 (see §C for the proof that their method would fail to detect the spurious attribute).
### CausalReg: Causal Effect Regularization for predictive models
Our proposed method proceeds in two stages. In the first stage, it uses a causal effect estimator to identify the causal effect of an attribute on the task label. Then in the second stage, it regularizes the classifier proportional to the causal effect of every attribute.
**Stage 1: Causal Effect Estimation.** Given a set of attributes \(\mathcal{A}=\{a_{1},\ldots,a_{k}\}\), the goal of this step is to estimate the causal effect (\(TE_{a_{i}}\)) of every attribute \(a_{i}\in\mathcal{A}\). If the data-generating process of the task is one of DGP-1 or DGP-2, our Prop 3.1 gives the sufficient conditions needed to identify the causal effect. Then one can use appropriate causal effect estimators that work under those conditions, or build one's own estimators using the closed-form causal effect estimand given in the proof of Prop 3.1. The causal effect of an attribute on the label \(Y\) is defined as the expected change in the label \(Y\) when we change the attribute (see §A for a formal definition). There is a rich literature on estimating causal effects for high dimensional data [10, 30, 5]. We use the deep learning-based estimator from Chernozhukov et al. [5] to estimate the causal effect (henceforth called _Riesz_). Given the treatment along with the rest of the covariates, it learns a common representation that approximates backdoor adjustment to estimate the causal effect (see §E for details). Even if the causal effect in the relevant dataset is identifiable, we might get a noisy estimate of the effect due to finite sample error or noise in the labels. Later, in §3.3 and §4, we will show both theoretically and empirically that our method is robust to errors in the causal effect estimates of attributes. Finally, as a baseline effect estimator, we use the direct effect estimator (henceforth called _Direct_), defined as \(\mathbb{E}_{X}(\mathbb{E}(Y|X,a=1)-\mathbb{E}(Y|X,a=0))\) for attribute \(a\in\mathcal{A}\), which has limited identifiability guarantees (see §E for details).
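To make the estimators concrete, below is a minimal sketch of the _Direct_ estimator, assuming a binary attribute and label and using a logistic-regression outcome model as a stand-in for whatever outcome model is actually fit; the function name and model choice are illustrative, not the paper's implementation.

```python
# Minimal sketch of the "Direct" estimator
#   E_X[ E(Y|X, a=1) - E(Y|X, a=0) ]
# Assumptions: binary attribute a, binary label y, feature matrix X;
# the logistic-regression outcome model is an illustrative choice.
import numpy as np
from sklearn.linear_model import LogisticRegression

def direct_effect(X, a, y):
    # Fit an outcome model mu(x, a) ~ P(y=1 | x, a).
    mu = LogisticRegression(max_iter=1000).fit(np.column_stack([X, a]), y)
    # Evaluate with the attribute forced to 1 and to 0, then average the
    # difference over the empirical distribution of X.
    X1 = np.column_stack([X, np.ones_like(a)])
    X0 = np.column_stack([X, np.zeros_like(a)])
    return np.mean(mu.predict_proba(X1)[:, 1] - mu.predict_proba(X0)[:, 1])
```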
**Stage 2: Regularization.** Here our method regularizes the model's predictions using the estimated causal effects \(\{TE_{a_{1}},\ldots,TE_{a_{k}}\}\) of the attributes \(\mathcal{A}=\{a_{1},\ldots,a_{k}\}\). The loss objective is,
\[\mathcal{L}_{CausalReg}\coloneqq\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}} \Big{[}\mathcal{L}_{task}\Big{(}c(h(\mathbf{x})),y\Big{)}+R\cdot\mathcal{L}_{Reg }\Big{(}\mathbf{x},y\Big{)}\Big{]} \tag{1}\]
where \(R\) is the regularization strength hyperparameter and \(\mathcal{P}\) is the training data distribution (§2). The first term \(\mathcal{L}_{task}(c(h(\mathbf{x})),y)\) can be any training objective, e.g., cross-entropy or max-margin loss, for training the encoder \(h\) and task classifier \(c\) jointly to predict the task label \(y\) given input \(\mathbf{x}\). Our regularization loss term \(\mathcal{L}_{Reg}\) aims to regularize the model such that the causal effect of an attribute on the classifier's output matches the estimated causal effect of the attribute \(a_{i}\) on the label. Formally,
\[\mathcal{L}_{Reg}\coloneqq\sum_{i\in\{1,2,\ldots,|\mathcal{A}|\}}\mathbb{E}_{\mathbf{x}_{a_{i}^{\prime}}\sim\mathcal{Q}(\mathbf{x}_{a_{i}})}\Big{[}\Big{(}c(h(\mathbf{x}_{a_{i}}))-c(h(\mathbf{x}_{a_{i}^{\prime}}))\Big{)}-TE_{a_{i}}\Big{]}^{2} \tag{2}\]
where \(\mathbf{x}_{a_{i}^{\prime}}\sim\mathcal{Q}(\mathbf{x}_{a_{i}})\) is a sample from the counterfactual distribution \(\mathcal{Q}\coloneqq\mathbb{P}(\mathbf{x}_{a_{i}^{\prime}}|\mathbf{x}_{a_{i}})\) and \(\mathbf{x}_{a_{i}^{\prime}}\) is the input had the attribute in the input \(\mathbf{x}_{a_{i}}\) been \(a_{i}^{\prime}\).
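For concreteness, here is a minimal PyTorch-style sketch of the objective in Eqs. (1)-(2) for a single attribute, assuming counterfactual inputs (samples from \(\mathcal{Q}\)) are available for each example and reading \(c(h(\cdot))\) as the predicted probability of the positive class; module and variable names are illustrative, and the multi-attribute case sums Eq. (2) over attributes.

```python
import torch
import torch.nn.functional as F

def causal_reg_loss(encoder, classifier, x, y, x_cf, te_hat, R=1.0):
    # x     : batch of inputs
    # y     : task labels
    # x_cf  : counterfactual inputs with the attribute a_i changed (from Q)
    # te_hat: estimated causal effect TE_{a_i} of the attribute on the label
    # R     : regularization strength hyperparameter of Eq. (1)
    logits = classifier(encoder(x))
    task_loss = F.cross_entropy(logits, y)

    # Eq. (2): the attribute's effect on the classifier's output should
    # match its estimated causal effect on the label.
    p = torch.softmax(logits, dim=-1)[:, 1]
    p_cf = torch.softmax(classifier(encoder(x_cf)), dim=-1)[:, 1]
    reg_loss = ((p - p_cf - te_hat) ** 2).mean()

    return task_loss + R * reg_loss
```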
### Robustness of CausalReg with noise in the causal effect estimates
Our proposed regularization method relies primarily on the estimates of the causal effect of each attribute \(a_{i}\) to regularize the model. Thus, it becomes important to study the efficacy of our method under error or noise in the causal effect estimation.
Figure 2: Causal graphs showing different data-generating processes. Shaded nodes denote observed variables; red arrow denotes an unknown target relationship. In the first two graphs, the causal effect of attributes (A and C,S respectively) on Y is identified.
We consider a simple setup to theoretically analyze the conditions under which our method will train a better classifier than the standard ERM max-margin objective (following previous work [17, 11, 23]) in terms of generalization under spurious correlations. Let \(\mathcal{A}=\{ca_{1},\ldots,ca_{K},sp_{1},\ldots,sp_{J}\}\) be the set of available attributes where the \(ca_{k}\) are causal attributes and the \(sp_{j}\) are spurious attributes. For simplicity, we assume that the representation encoder mapping \(\mathbf{X}\) to \(\mathbf{Z}\), i.e., \(h:\mathbf{X}\to\mathbf{Z}\), is frozen and the final _task_ classifier (\(c\)) is a linear operation over the representation. Following Kumar et al. [17], we also assume that \(\mathbf{z}\) is a disentangled representation w.r.t. the causal and spurious attributes, i.e., the representation vector \(\mathbf{z}\) can be divided into two subsets of features, corresponding to the causal and spurious attributes respectively. Thus the task classifier takes the form \(c(\mathbf{z})=\sum_{k=1}^{K}\mathbf{w}_{ca_{k}}\cdot\mathbf{z}_{ca_{k}}+\sum_{j=1}^{J}\mathbf{w}_{sp_{j}}\cdot\mathbf{z}_{sp_{j}}\).
Let \(\mathcal{L}_{task}(\theta;(\mathbf{x},y_{m}))\) be the max-margin objective (see §A for details) used to train the task classifier \(c\) to predict the task label \(y_{m}\) given the _frozen_ latent space representation \(\mathbf{z}\). Let the task label \(y\) and each attribute \(a\in\mathcal{A}\) be binary, taking values in \(\{0,1\}\). The causal effect of an attribute \(a\in\mathcal{A}\) on the task label \(Y\) is given by \(TE_{a}=\mathbb{E}(Y|do(a)=1)-\mathbb{E}(Y|do(a)=0)=P(y=1|do(a)=1)-P(y=1|do(a)=0)\) (see §A for the definition). Thus the value of the causal effect is bounded s.t. \(TE_{a}\in[-1,1]\), where the ground-truth causal effect of a spuriously correlated attribute \(sp\) is \(TE_{sp}=0\) and that of a causal attribute \(ca\) satisfies \(|TE_{ca}|>0\). Given that we assume a linear model, we instantiate a simpler form of the regularization term \(\mathcal{L}_{Reg}\) for the training objective given in Eq. 1:
\[\mathcal{L}_{Reg}\coloneqq\sum_{k=\{1,2,\ldots,K\}}\lambda_{ca_{k}}\|\mathbf{w}_{ ca_{k}}\|_{p}+\sum_{j=\{1,2,\ldots,J\}}\lambda_{sp_{j}}\|\mathbf{w}_{sp_{j}}\|_{p}\]
where \(\lambda_{ca_{k}}\coloneqq 1/|TE_{ca_{k}}|\) and \(\lambda_{sp_{j}}\coloneqq 1/|TE_{sp_{j}}|\) are the regularization strengths for the causal and spurious features \(\mathbf{z}_{ca_{k}}\) and \(\mathbf{z}_{sp_{j}}\) respectively, \(|\cdot|\) is the absolute value operator and \(\|\mathbf{w}\|_{p}\) is the \(L_{p}\) norm of the vector \(\mathbf{w}\). Since \(|TE_{(\cdot)}|\in[0,1]\), we have \(\lambda_{(\cdot)}=1/|TE_{(\cdot)}|\geq 1\). In practice, we only have access to the empirical estimate of the causal effect \(TE_{(\cdot)}\), denoted \(\hat{TE}_{(\cdot)}\), and the regularization coefficient becomes \(\lambda_{(\cdot)}=1/|\hat{TE}_{(\cdot)}|\). Now we are ready to state the main theoretical result, which shows that our regularization objective will learn the correct classifier that uses only the causal attributes for its prediction, given that the ranking of the estimated treatment effects is correct up to some constant factor. Let \([S]\) denote the set \(\{1,\ldots,S\}\).
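As a sketch, this linear-case regularizer can be computed directly from the disentangled parameter blocks and the estimated effects; the small \(\epsilon\) floor guarding against division by an estimated effect of exactly zero is an implementation choice not specified in the text.

```python
import torch

def linear_causal_reg(weights, te_hats, p=2, eps=1e-6):
    # weights : dict mapping attribute name -> parameter block w_a acting
    #           on the disentangled features z_a
    # te_hats : dict mapping attribute name -> estimated effect TE_a
    # eps     : floor to avoid division by zero (illustrative choice)
    loss = 0.0
    for name, w in weights.items():
        lam = 1.0 / max(abs(te_hats[name]), eps)   # lambda_a = 1/|TE_a|
        loss = loss + lam * torch.linalg.vector_norm(w, ord=p)
    return loss
```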
**Theorem 3.1**.: _Let the latent space be frozen and disentangled such that \(\mathbf{z}=[\mathbf{z}_{ca_{1}},\ldots,\mathbf{z}_{ca_{K}},\mathbf{z}_{sp_{1}},\ldots,\mathbf{z}_{sp_{J}}]\) (Assm D.1). Let the desired classifier \(c^{des}(\mathbf{z})=\sum_{k=1}^{K}\mathbf{w}_{ca_{k}}^{des}\cdot\mathbf{z}_{ca_{k}}\) be the max-margin classifier among all linear classifiers that use only the causal features \(\mathbf{z}_{ca_{k}}\)'s for prediction. Let \(c^{mm}(\mathbf{z})=\sum_{k=1}^{K}\mathbf{w}_{ca_{k}}^{mm}\cdot\mathbf{z}_{ca_{k}}+\sum_{j=1}^{J}\mathbf{w}_{sp_{j}}^{mm}\cdot\mathbf{z}_{sp_{j}}\) be the max-margin classifier that uses both the causal and the spurious features, and assume \(\mathbf{w}_{sp_{j}}^{mm}\neq\mathbf{0}\), \(\forall j\in[J]\); this is without loss of generality because otherwise we can restrict our attention only to those \(j\in[J]\) that have \(\mathbf{w}_{sp_{j}}^{mm}\neq\mathbf{0}\). Let the norm of the parameters of both classifiers be set to 1, i.e., \(\sum_{k=1}^{K}\|\mathbf{w}_{ca_{k}}^{mm}\|_{p=2}^{2}+\sum_{j=1}^{J}\|\mathbf{w}_{sp_{j}}^{mm}\|_{p=2}^{2}=\sum_{k=1}^{K}\|\mathbf{w}_{ca_{k}}^{des}\|_{p=2}^{2}=1\). Then, if the regularization coefficients are related s.t. \(\mathrm{mean}\left(\left\{\frac{\lambda_{ca_{k}}}{\lambda_{sp_{j}}}\cdot\eta_{k,j}\right\}_{k\in[K],j\in[J]}\right)<\frac{J}{K}\) where \(\eta_{k,j}=\frac{\|\mathbf{w}_{ca_{k}}^{des}\|_{p}-\|\mathbf{w}_{ca_{k}}^{mm}\|_{p}}{\|\mathbf{w}_{sp_{j}}^{mm}\|_{p}}\), then_
1. _Preference: \(\mathcal{L}_{CausalReg}(c^{des})<\mathcal{L}_{CausalReg}(c^{mm})\) for a suitably large regularization strength \(R\). Thus, our causal effect regularization objective (Def 3.3) will choose the_ \(c^{des}(\mathbf{z})\) _classifier over the max-margin classifier_ \(c^{mm}(\mathbf{z})\) _which uses the spuriously correlated features._
2. _Global Optimum:_ _The desired classifier_ \(c^{des}(\mathbf{z})\) _is the global optimum of our loss function_ \(\mathcal{L}_{CausalReg}\) _when_ \(J=1\)_,_ \(K=1\)_,_ \(p=2\)_, the regularization strengths are related s.t._ \(\lambda_{ca_{1}}<\lambda_{sp_{1}}\implies|\hat{TE}_{ca_{1}}|>|\hat{TE}_{sp_{1}}|\)_, and the search space of linear classifiers_ \(c(\mathbf{z})\) _is restricted to those with parameter norm equal to 1._
**Remark**.: _The result of Theorem 3.1 holds under a more intuitive but stricter constraint on the regularization coefficient \(\lambda\), which states that \(\lambda_{sp_{j}}>\left(\frac{K}{J}\eta_{k,j}\right)\lambda_{ca_{k}}\implies|\hat{TE}_{sp_{j}}|<\left(\frac{K}{J}\eta_{k,j}\right)^{-1}|\hat{TE}_{ca_{k}}|\) \(\forall k\in[K]\) and \(j\in[J]\). The above constraint states that if the treatment effect of the causal feature is larger than that of the spurious feature by a constant factor, then the claims in Theorem 3.1 hold._
Proof Sketch.: **(1)** We compare the two classifiers \(c^{des}(\mathbf{z})\) and \(c^{mm}(\mathbf{z})\) under our overall training objective \(\mathcal{L}_{CausalReg}\) (Eq. 1). Given that the relation between the regularization strengths stated in the theorem is satisfied, we show that one can always choose a regularization strength \(R\) greater than a constant value s.t. the desired classifier \(c^{des}(\mathbf{z})\) has a lower loss than \(c^{mm}(\mathbf{z})\) in terms of our training objective \(\mathcal{L}_{CausalReg}\). **(2)** We use the result from the first claim to show that \(c^{des}(\mathbf{z})\) has a lower loss than any other classifier that uses the spurious feature \(\mathbf{z}_{sp_{1}}\). Then, among the classifiers that only use the causal feature \(\mathbf{z}_{ca_{1}}\), we show that again \(c^{des}(\mathbf{z})\) has the lowest loss w.r.t. \(\mathcal{L}_{CausalReg}\). Thus the desired classifier has a lower loss than all other classifiers with parameter norm \(1\) w.r.t. \(\mathcal{L}_{CausalReg}\) and is hence the global optimum. Refer to §D for the proof.
## 4 Empirical Results
### Datasets
Theorem 3.1 showed that our method can find the desired classifier in a simple linear setup. We now evaluate the method on synthetic, semi-synthetic and real-world datasets. Details are in §E.
**Syn-Text.** We introduce a synthetic dataset where the ground-truth causal graph is known and thus the ground-truth spuriously correlated feature is known a priori (but unknown to all the methods). The dataset contains two random variables (_causal_ and _confound_) that cause the binary main task label \(y_{m}\), and a variable _spurious_ that is spuriously correlated with the _confound_ variable. Given the values of the spurious and causal features, we create a sentence as input. We define Syn-Text-Obs-Conf, a version of the Syn-Text dataset where all three variables/attributes are observed. Next, to increase difficulty, we create a version of this dataset -- Syn-Text-Unobs-Conf -- where the _confound_ attribute is not observed in the training dataset, corresponding to DGP-3 from Fig. 2.
**MNIST34.** We use MNIST [18] to compare our method on a semi-synthetic dataset similar to the one used in Mouli and Ribeiro [22]. We define a new task over this dataset but use the same attributes, color, rotation, and digit, whose associated transformations satisfy the _lumpability_ assumption in their work. We define the digit attribute (\(3\) or \(4\)) and the color attribute (red or green digit) as the causal attributes, which create the binary main task label via the XOR operation. Then we add rotation (\(0^{\circ}\) or \(90^{\circ}\)) to introduce a spurious correlation with the main task label. This dataset corresponds to DGP-2, where \(C\) comprises the color and digit attributes, \(S\) is the rotation attribute and \(x\) is empty.
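A sketch of the MNIST34 attribute and label construction described above (attribute values only; coloring and rotating the actual images is omitted), with the rotation attribute tied to the label at a chosen predictive correlation \(\kappa\); variable names are illustrative.

```python
import numpy as np

def make_mnist34_attributes(n, kappa, seed=0):
    rng = np.random.default_rng(seed)
    digit = rng.integers(0, 2, n)          # 0 -> digit "3", 1 -> digit "4"
    color = rng.integers(0, 2, n)          # 0 -> red, 1 -> green
    y = digit ^ color                      # binary task label via XOR
    # Make rotation spuriously correlated with y: P(rotation == y) = kappa.
    agree = rng.random(n) < kappa
    rotation = np.where(agree, y, 1 - y)   # 0 -> 0 degrees, 1 -> 90 degrees
    return digit, color, rotation, y
```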
**Twitter-AAE [3].** This is a real-world dataset where the main task is to predict a binary sentiment label from a tweet's text. The tweets are associated with the _race_ of the author, which is spuriously correlated with the main task label. Since this is a real-world dataset where we have not artificially introduced the spurious attribute, we don't have a ground-truth causal effect of _race_ on the sentiment label. But we expect it to be zero, since changing the _race_ of a person should not change the sentiment of the tweet. We use GPT3 [4] to create the counterfactual examples by prompting it to change the race-specific information in a given sentence (see §F for examples). This dataset corresponds to DGP-1, where the node \(A\) is the spurious attribute race.
**Varying spurious correlation in the dataset.** Since the goal is to train a model that doesn't rely on the spuriously correlated attribute, we create multiple settings for every dataset with different levels of correlation between the main task label \(y_{m}\) and the spurious attribute \(y_{c_{sp}}\). Following [17], we use a _predictive correlation_ metric to define the label-attribute correlation that we vary in our experiments. The predictive correlation \((\kappa)\) measures how informative one label or attribute (\(s\)) is for predicting the other (\(t\)): \(\kappa\coloneqq Pr(s=t)=\sum_{i=1}^{N}\mathbf{1}[s^{i}=t^{i}]/N\), where \(N\) is the size of the dataset and \(\mathbf{1}[\cdot]\) is the indicator function, which is \(1\) if the argument is true and \(0\) otherwise. Without loss of generality, assuming \(s=1\) is correlated with \(t=1\) and similarly \(s=0\) with \(t=0\), the predictive correlation lies in \(\kappa\in[0.5,1]\), where \(\kappa=0.5\) indicates no correlation and \(\kappa=1\) indicates that the attributes are fully correlated. For the Syn-Text dataset, we vary the predictive correlation between the confound attribute and the spurious attribute; for MNIST34, between the combined causal attribute (digit and color) and the spurious attribute (rotation); and for Twitter-AAE, between the task label and the spurious attribute (race). See §E for details.
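Computationally, the predictive correlation is just the empirical agreement rate between the two binary sequences; a one-line sketch:

```python
import numpy as np

def predictive_correlation(s, t):
    # kappa := Pr(s = t), the fraction of samples where s and t agree.
    return np.mean(np.asarray(s) == np.asarray(t))

# e.g. predictive_correlation([1, 1, 0, 0], [1, 0, 0, 0]) -> 0.75
```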
### Baselines and evaluation metrics
**Baselines.** We call the first baseline _ERM_, corresponding to training the main task classifier using cross-entropy loss without any additional regularization. For the second baseline, we consider the method proposed in [22] (henceforth referred to as _Mouli+CAD_) to automatically detect the spurious attributes and train a model invariant to those attributes. Given a set of attributes, this method computes a score for every subset of attributes, selects the subset with the minimum score as the spurious subset, and finally enforces invariance with respect to that subset using counterfactual data augmentation (CAD) [29, 14]. Empirically, we observe that CAD does not correctly impose invariance for a given attribute (see §F for a discussion). Thus we add a variant of Mouli+CAD's method which, instead of using CAD, uses our regularization objective (Eq. 2) to impose invariance by setting the causal effect \(TE_{a}=0\) for an attribute \(a\). Henceforth we will call this method _Mouli+CausalReg_.
**Metrics.** We use two metrics for evaluation. Since all datasets have binary task labels and attributes, we define a group-based metric (_average group accuracy_) to measure generalization under distribution shift. Specifically, given the binary task label \(y\in\{0,1\}\) and spurious attribute \(a\in\{0,1\}\), following Kumar et al. [17] we define \(2\times 2\) groups, one for each combination of \((y,a)\). The subsets of the dataset with \((y=1,a=1)\) and \((y=0,a=0)\) form the majority group \(S_{maj}\), while the groups \((y=1,a=0)\) and \((y=0,a=1)\) make up the minority group \(S_{min}\). We expect the main classifier to exploit this correlation and hence perform better on \(S_{maj}\) but badly on \(S_{min}\), where the correlation breaks.

Thus we want a method that performs well on the average accuracy over both groups, i.e., \(\frac{Acc(S_{min})+Acc(S_{maj})}{2}\), where \(Acc(S_{maj})\) and \(Acc(S_{min})\) are the accuracies on the majority and minority group respectively. The second metric (\(\Delta\)Prob) measures the reliance of a classifier on the spurious feature. For every given input \(\mathbf{x}_{a}\) we have access to the counterfactual distribution \((\mathbf{x}_{a^{\prime}})\sim\mathcal{Q}(\mathbf{x}_{a})\) (§2) where the attribute \(A=a\) is changed to \(A=a^{\prime}\). \(\Delta\)Prob is defined as the change in the prediction probability of the model on changing the spurious attribute \(a\) in the input, thus directly measuring the reliance of the model on the spurious attribute. For background on the baselines, refer to §A; for a detailed description of our experimental setup, refer to §E.
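\(\Delta\)Prob can be computed directly from paired factual/counterfactual batches. The sketch below takes it as the average absolute change in the positive-class probability, which is one natural reading of the definition above.

```python
import numpy as np

def delta_prob(predict_prob, x, x_cf):
    # predict_prob : callable returning P(y=1 | input) for a batch
    # x, x_cf      : inputs and their counterfactuals with the spurious
    #                attribute flipped (samples from Q)
    return np.mean(np.abs(predict_prob(x) - predict_prob(x_cf)))
```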
### Evaluating Stage 1: Automatic Detection of Spurious Attributes
**Failure of Mouli+CAD in detecting the spurious attributes at high correlation.** In Fig. 3, we test the effectiveness of Mouli+CAD in detecting the subset of attributes which are spurious on different datasets with varying levels of spurious correlation (\(\kappa\)). On the Syn-Text dataset, at low correlations (\(\kappa<0.8\)), Mouli+CAD correctly detects the _spurious_ attribute (the orange line is lower than the blue). As the correlation increases, their method incorrectly states that there is no spurious attribute (the blue line is lower than the orange). On the MNIST34 dataset, Mouli+CAD does not detect any attribute as spurious (shown by the blue line for all \(\kappa\)). For the Twitter-AAE dataset, Mouli+CAD correctly detects the spuriously correlated attribute (_race_) for all values of the predictive correlation, perhaps because the spurious correlation is weak compared to the causal relationship from the causal features to the task label.
**CausalReg is robust to errors in the estimation of spurious attributes.** Unlike Mouli+CAD, which makes a hard categorization, CausalReg estimates the causal effect of every attribute on the task label as a fine-grained measure of whether an attribute is spurious or not. Table 1 summarizes the estimated treatment effect of the spurious attribute in every dataset for different levels of predictive correlation (\(\kappa\)). We use two different causal effect estimators, named _Direct_ and _Riesz_, with the best estimate selected using validation loss (see §E). Overall, the Riesz estimator gives a better or comparable estimate of the causal effect of the spurious attribute than Direct, except on the Syn-Text-Unobs-Conf dataset where the causal effect is not identified. At high predictive correlation (\(\geq 0.9\)), as expected, the causal effect estimates are incorrect. But as we will show next, since CausalReg uses a continuous effect value to detect the spurious attribute, errors in this first (detection) step do not severely affect the later steps.
### Evaluating Stage 2: Evaluation of CausalReg and other baselines
Fig. 4 compares the efficacy of our method with other baselines in removing the model's reliance on spurious attributes. On the **Syn-Text-Unobs-Conf** dataset, our method (CausalReg) performs better than all the other baselines for all levels of predictive correlation (\(\kappa\)) on _average group accuracy_ (first row in Fig. 4(a)). In addition, \(\Delta\)Prob (the sensitivity of the model to changing the spurious attribute in the input; see §4.2) for our method is the lowest and close to the correct value \(0\), compared to other baselines (see bottom row of Fig. 4(a)). For \(\kappa\geq 0.8\), Mouli+CAD is the same as ERM since it fails to detect the spurious attribute and thus doesn't impose invariance w.r.t. the spurious attribute (see Fig. 3(a) for details). On the **MNIST34** dataset, the average group accuracy of all methods is comparable, but CausalReg has a substantially lower \(\Delta\)Prob than the baselines for all values of \(\kappa\). Again, the main reason why Mouli+CAD fails is that it is not able to detect the spurious attribute for any \(\kappa\) and thus doesn't impose invariance w.r.t. it (see Fig. 3(b) and §4.3). On the **Twitter-AAE** dataset, Mouli+CAD correctly detects the _race_ attribute as spurious and performs better in terms of average group accuracy than CausalReg and ERM. But if we look at \(\Delta\)Prob, the gain in accuracy is not because of better invariance:
Figure 3: **Detecting the spurious attribute using Mouli+CAD: the x-axis is the predictive correlation and the y-axis shows the score defined by Mouli+CAD. The attribute with the lowest score is selected as spurious. The orange curve shows the score for the true spurious attribute. (a) and (b): Mouli+CAD fails to detect the spurious attribute at high \(\kappa\). (c): Mouli+CAD correctly identifies the race attribute as spurious.**
in fact, the reliance on the spurious attribute (race) is worse than for ERM. In contrast, CausalReg has a significantly lower \(\Delta\)Prob while obtaining accuracy comparable to ERM. To summarize, across all datasets, CausalReg ensures higher or comparable accuracy to ERM while yielding the lowest \(\Delta\)Prob. For details on how we select the best models and additional empirical results, see §F.
## 5 Related Work
**Known spurious attributes.** When the spuriously correlated attributes are known a priori, three major types of methods have been proposed, based on worst-group optimization, conditional independence constraints, or data augmentation. Methods like GroupDRO [27] create multiple groups in the training dataset based on the spurious attribute and optimize for worst-group accuracy. Other methods assume knowledge of the causal graph and impose conditional independence constraints for removing spurious attributes [32; 13; 19]. Methods based on data augmentation add counterfactual training data where only the spurious attribute is changed (and the label remains the same) [14; 12; 15].
| Dataset | True | Method | \(\kappa=0.5\) | 0.6 | 0.7 | 0.8 | 0.9 | 0.99 |
|---|---|---|---|---|---|---|---|---|
| Syn-Text-Obs-Conf | 0 | Direct | **0.00** | **0.00** | **0.01** | 0.08 | 0.16 | 0.22 |
| | | Riesz | **0.00** | 0.03 | 0.03 | **0.03** | **0.05** | **0.15** |
| Syn-Text-Unobs-Conf | 0 | Direct | **-0.01** | **0.13** | **0.24** | **0.37** | **0.49** | **0.67** |
| | | Riesz | 0.06 | 0.19 | 0.31 | 0.42 | 0.58 | 0.70 |
| MNIST34 | 0 | Direct | 0.02 | **0.03** | **0.05** | 0.06 | **0.15** | **0.27** |
| | | Riesz | **-0.01** | 0.06 | **0.05** | **0.04** | 0.2 | 0.31 |
| Twitter-AAE | - | Direct | **0.01** | **0.07** | **0.12** | 0.20 | **0.27** | **0.31** |
| | | Riesz | 0.02 | **0.07** | **0.12** | **0.17** | **0.27** | 0.37 |

Table 1: **Causal effect estimates of the spurious attribute.** _Direct_ and _Riesz_ are two different causal effect estimators as described in §3.2. Columns give the estimate at each level of predictive correlation (\(\kappa\)). Overall we see that _Riesz_ performs better than or comparably to the baseline _Direct_ and is closer to the ground-truth value \(0\). Since Twitter-AAE is a real dataset we don’t have a ground-truth causal effect of the _race_ attribute, but we expect it to be zero (see §4.1).
Figure 4: **Average group accuracy (top row) and \(\Delta\)Prob (bottom row) for different methods. The x-axis denotes the predictive correlation (\(\kappa\)) in the dataset. Compared to other baselines, (a) and (b) show that CausalReg performs better or comparably in average group accuracy and trains a classifier significantly less reliant on the spurious attribute as measured by \(\Delta\)Prob. Mouli+CAD performs better in average group accuracy in (c) but relies heavily on the spurious attribute, as shown by its high \(\Delta\)Prob, whereas CausalReg performs equivalently to ERM on average group accuracy with the lowest \(\Delta\)Prob among all methods.**
**Automatically discovering spurious attributes.** The problem becomes harder if spurious attributes are not known. Mouli and Ribeiro's work [21, 22] provides a method assuming specific kinds of transformations on the spurious attributes, either transformations that form finite linear automorphism groups [21] or symmetry transformations over equivalence classes [22]. Any attribute changed via the corresponding transformation that does not hurt the training accuracy is considered spurious. However, they do not consider settings with correlation between the transformed attributes and the task labels. Our work considers a more realistic setup where we don't impose any constraint on the transformation or the attribute values, and allows attributes to be correlated with the task label (at different strengths). Using conditional independencies derived from a causal graph, Zheng and Makar [35] propose a method to automatically discover the spurious attributes under the anti-causal classification setting. Our work, considering the _causal_ classification setting (features cause label), complements theirs and allows soft regularization proportional to the causal effect.
## 6 Limitations and Conclusion
We presented a method for automatically detecting and removing spurious correlations while training a classifier. While we focused on spurious attributes, estimation of causal effects can be used to regularize the effect of non-spurious features too. That said, our work has limitations: we can guarantee identification of the causal effect only in certain DGPs. In future work, it will be useful to characterize the datasets on which our method is likely to work and where it fails.
|
2303.05801 | Introducing the Random Phase Approximation Theory | Random Phase Approximation (RPA) is the theory most commonly used to describe
the excitations of many-body systems. In this article, the secular equations of
the theory are obtained by using three different approaches: the equation of
motion method, the Green's function perturbation theory and the time-dependent
Hartree--Fock theory. Each approach emphasizes specific aspects of the theory
overlooked by the other methods. Extensions of the RPA secular equations to
treat the continuum part of the excitation spectrum and also the pairing
between the particles composing the system are presented. Theoretical
approaches which overcome the intrinsic approximations of RPA are outlined. | Giampaolo Co' | 2023-03-10T09:16:49Z | http://arxiv.org/abs/2303.05801v1 | # Introducing the Random Phase Approximation Theory
###### Abstract
Random Phase Approximation (RPA) is the theory most commonly used to describe the excitations of many-body systems. In this article, the secular equations of the theory are obtained by using three different approaches: the equation of motion method, the Green function perturbation theory and the time-dependent Hartree-Fock theory. Each approach emphasizes specific aspects of the theory overlooked by the other methods. Extensions of the RPA secular equations to treat the continuum part of the excitation spectrum and also the pairing between the particles composing the system are presented. Theoretical approaches which overcome the intrinsic approximations of RPA are outlined.
## 1 Introduction
The aim of the Random Phase Approximation (RPA) theory is the description of harmonic excitations of quantum many-body systems. This theory was formulated by David Bohm and David Pines in the early 1950s at the end of a set of articles dedicated to the description of collective oscillations of electron gas [1, 2, 3]. The approximation is well defined in the first of these articles [1], where it is used to eliminate the random movement of single electrons out of phase with respect to the oscillations of the external probe exciting the system. The theory is presented only in the third of these articles [3] and does not contain any random phase to be approximated. However, the authors used the term _Random Phase Approximation_ to identify the theory and it is by this name that it is nowadays commonly known.
The applications of RPA in the 1950s and 1960s were focused on the description of infinite, homogeneous and translationally invariant systems, such as the electron gas. A detailed historical overview of the works of these early years is given in Ref. [4]. Advances in computing technologies allowed the application of RPA also to finite systems such as atoms and, especially, nuclei. During the 1970s and 1980s, RPA was the main theoretical tool used to investigate nuclear excitations of various types (see, for example, Refs. [5, 6] for a review). More recently, RPA has been applied to atomic and molecular systems [7]. Nowadays, RPA calculations are rather standard and relatively simple to carry out, so that they are, improperly, classified as mean-field calculations.
RPA belongs to the category of effective theories. These theories use particle-particle interactions which do not have a strongly repulsive core at small inter-particle distances, a feature characterizing instead the microscopic interactions which are tailored to describe two-particle data. Hartree-Fock (HF) and Density Functional Theory (DFT) are also effective theories. They are conceived to describe the ground state of many-body systems, while RPA starts from the assumption that the ground state is known and considers the problem of describing the excitation modes.
The validity of RPA is restricted to situations where the excitation energies are relatively small compared to the global binding energy of the system. This means that RPA is not suitable for describing situations where the system undergoes deep modifications of its structure, such as fission in nuclei or phase transitions in fluids.
In the energy regime adequate to be described by RPA, it is plausible to separate the role of the external probe, which excites the system, from its response. Each probe, photon, electron, neutrino, hadron, electric and magnetic
field, sound wave, etc., is described by a specific set of operators depending on the type of interaction with the system. The response of the system depends only on the interactions between its fundamental components. For this reason, the many-body response is universal, independent of the specific probe that induces it. RPA evaluates this universal response.
Regarding the theoretical aspects of the theory, I like to quote what David Pines and Philippe Nozieres write in Chapter 5.2 of their book on quantum liquids [8]:
_"The development, frequent independent rediscovery and gradual appreciation of the Random Phase Approximation offers a useful lesson to theoretical physicist. First, it illustrates the splendid variety of ways that can be developed for saying the same thing. Second, it suggests the usefulness of learning different languages of theoretical physics and of attempting the reconciliation of seemingly different, but obviously related results."_
Despite this clear statement, RPA is commonly presented in the context of specific theoretical frameworks in order to attack some well identified problem. In this article, I want to focus attention on the theory in itself and I present three different ways of obtaining the secular RPA equations. In my opinion, this allows a richer comprehension of the theory, since each method emphasizes aspects overlooked by the other ones. The present article is not a review of the recent advances in the use of RPA theory, but it aims to be a guide to understand it by pointing out its underlying assumptions, its merits and its faults and by indicating how to improve it.
The starting point of every many-body theory is the Independent Particle Model (IPM) and in Section 2, I recall some aspects of this model which are important for the RPA theory. RPA secular equations are derived in Sections 3-5 by using, respectively, the method of the equations of motion, the perturbation calculation of the two-body Green function and the harmonic approximation of the time-dependent evolution of the HF equations.
The following two sections are dedicated to specific aspects which can be considered by RPA. In Section 6, I present how to describe the fact that one particle can be emitted from the system, and in Section 7 how to treat pairing effects between the particles. Some issues related to the pragmatic application of RPA in actual calculations are presented in Section 8.
Approaches that extend the usual RPA formulations are outlined in Section 9, and the formulation of an RPA-like theory able to handle microscopic interactions is presented in Section 10.
Despite my good intentions, I used numerous acronyms and to facilitate the reading I list them in Tab. 1.
## 2 Independent Particle Models
The starting point of all the many-body theories is the Independent Particle Model (IPM). In this model, each particle moves independently of the presence of the other particles. This allows the definition of single-particle (s.p.) energies and wave functions identified by a set of quantum numbers. This is the basic language necessary to build any theory where the particles interact with each other.
### Mean-Field model
A very general expression of the hamiltonian describing the many-body system is
\[\hat{H}=\sum_{i=1}^{A}\left(-\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2}+\hat{V}_{0 }(i)\right)+\frac{1}{2}\sum_{i,j=1}^{A}\hat{V}(i,j)+\cdots\;\;, \tag{1}\]
where \(A\) is the number of particles, each of them with mass \(m_{i}\). In the expression (1), the term containing the Laplace operator \(\nabla_{i}^{2}\) represents the kinetic energy, \(\hat{V}_{0}(i)\) is a generic potential acting on each particle and \(\hat{V}(i,j)\) is the interaction between two particles. The dots indicate the, eventual, presence of more complex terms of the interaction, such as three-body forces. Henceforth, we shall not consider these latter terms.
By adding to and subtracting from the expression (1) an average potential \(\hat{U}(i)\) acting on one particle at a time, we obtain:
\[\hat{H}=\underbrace{\sum_{i}^{A}\left(-\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2 }+\hat{V}_{0}(i)+\hat{U}(i)\right)}_{\hat{H}_{0}}+\underbrace{\frac{1}{2}\sum _{i,j}^{A}\hat{V}(i,j)-\sum_{i}^{A}\hat{U}(i)}_{\hat{H}_{1}}\;. \tag{2}\]
The part indicated by \(\hat{H}_{0}\) is a sum of terms acting on one particle, the \(i\)-th particle, at a time. We can define each term of this sum as the s.p. hamiltonian \(\hat{h}(i)\),
\[\hat{H}_{0}=\sum_{i}\hat{h}(i)=\sum_{i}^{A}\left(-\frac{\hbar^{2}}{2m_{i}}\nabla _{i}^{2}+\hat{V}_{0}(i)+\hat{U}(i)\right)\ . \tag{3}\]
The basic approximation of the Mean-Field (MF) model consists in neglecting, in the expression (2), the term \(\hat{H}_{1}\) called _residual interaction_. In this way, the many-body problem is transformed into a sum of many, independent, one-body problems, which can be solved one at a time. The MF model is an IPM since the particles described by \(\hat{H}_{0}\) do not interact among them.
The fact that the hamiltonian \(\hat{H}_{0}\) is a sum of independent terms implies that its eigenstates can be built as a product of the eigenstates of \(\hat{h}(i)\)
\[\hat{h}(i)|\phi_{i}\rangle=\epsilon_{i}|\phi_{i}\rangle\ \, \tag{4}\]
therefore
\[\hat{H}_{0}|\Phi\rangle=\left(\sum_{i}\hat{h}(i)\right)|\Phi\rangle=\xi|\Phi \rangle\ \, \tag{5}\]
where
\[|\Phi\rangle=|\phi_{1}\rangle|\phi_{2}\rangle\cdots|\phi_{A}\rangle\ . \tag{6}\]
For fermions, the antisymmetry of the global wave function under the exchange of two particles implies that the wave function \(|\Phi\rangle\) has to be described as the sum of antisymmetrized products of one-particle wave functions. This solution is known in the literature as Slater determinant [9]
\[|\Phi\rangle=\frac{1}{\sqrt{A!}}\,\det\{|\phi_{i}\rangle\}\ . \tag{7}\]
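As a concrete illustration of Equation (7), the sketch below evaluates a Slater determinant numerically for a given set of s.p. wave functions; swapping any two coordinates flips the sign of the result, exhibiting the antisymmetry.

```python
import numpy as np
from math import factorial

def slater_determinant(phis, coords):
    # phis   : list of A single-particle wave functions phi_i(r) -> scalar
    # coords : sequence of A particle coordinates r_1 ... r_A
    # Returns Phi(r_1,...,r_A) = (1/sqrt(A!)) det[ phi_i(r_j) ], Eq. (7).
    A = len(phis)
    M = np.array([[phi(r) for r in coords] for phi in phis])
    return np.linalg.det(M) / np.sqrt(factorial(A))
```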
Systems with global dimensions comparable to the average distance between two interacting particles are conveniently described by exploiting the spherical symmetry. We are talking about nuclei, atoms and small molecules. After choosing the center of the coordinate system, it is convenient to use polar spherical coordinates.
The single-particle wave function can be expressed as the product of a radial part, depending only on the distance \(r\equiv|{\bf r}|\) from the coordinate center, with a term dependent on the angular coordinates \(\theta\) and \(\phi\) and, possibly, the spin of the particle. The angular part has a well known analytic expression. For example, in the case of an MF potential containing a spin-orbit term, the s.p. wave functions are conveniently expressed as:
\[\phi_{nljm}({\bf r})=R_{nlj}(r)\sum_{\mu\sigma}\langle l\,\mu\,\frac{1}{2}\, \sigma|j\,m\rangle Y_{l\mu}(\theta,\phi)\chi_{\sigma}=R_{nlj}(r)y_{ljm}( \theta,\phi)\ \, \tag{8}\]
where the spherical harmonics \(Y_{l\mu}\) and the Pauli spinors \(\chi_{\sigma}\) are connected by the Clebsch-Gordan coefficients and form the so-called spin spherical harmonics [10].
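A numerical sketch of the spin spherical harmonic \(y_{ljm}\) of Equation (8), assuming SymPy for the Clebsch-Gordan coefficients and SciPy for the spherical harmonics; note that SciPy's `sph_harm` takes the azimuthal angle before the polar one, as flagged in the comments.

```python
import numpy as np
from scipy.special import sph_harm
from sympy import S
from sympy.physics.quantum.cg import CG

def spin_spherical_harmonic(l, j, m, theta, phi):
    # y_{ljm}(theta, phi) as a 2-spinor; j and m are half-integers and
    # should be passed as SymPy rationals, e.g. j = S(3)/2, m = S(1)/2.
    out = np.zeros(2, dtype=complex)       # components sigma = +1/2, -1/2
    for idx, sigma in enumerate((S(1)/2, -S(1)/2)):
        mu = m - sigma                     # integer orbital projection
        if abs(mu) > l:
            continue
        cg = float(CG(l, mu, S(1)/2, sigma, j, m).doit())  # <l mu 1/2 s|j m>
        # SciPy convention: sph_harm(order mu, degree l, azimuth, polar)
        out[idx] += cg * sph_harm(int(mu), l, phi, theta)
    return out

# e.g. spin_spherical_harmonic(1, S(3)/2, S(1)/2, 0.3, 0.0)
```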
Systems with dimensions much larger than the average distance between two interacting particles are conveniently described by exploiting the translational invariance. In condensed matter conglomerates, the translational symmetry dominates. A basic structure of the system is periodically repeated in the three Cartesian directions and it is not possible to find a central point.
The basic MF model for this type of system considers the potential \(\hat{U}\) to be constant. This fermionic system is commonly called the _Fermi gas_. It is a toy model, homogeneous, with infinite volume, composed of an infinite number of fermions which do not interact with each other. Since the energy scale is arbitrary, it is possible to select \(\hat{U}=0\) without losing generality. In this case, the one-body Schrodinger equation is
\[-\frac{\hbar^{2}}{2m_{j}}\nabla_{j}^{2}\phi_{j}({\bf r})=\epsilon_{j}\phi_{j}( {\bf r})\ . \tag{9}\]
By defining
\[\epsilon_{j}=\frac{\hbar^{2}{\bf k}_{j}^{2}}{2m_{j}}, \tag{10}\]
the eigenfunction of Equation (9) can be written as
\[\phi_{j}({\bf r})=\frac{1}{\sqrt{V}}\,e^{i{\bf k}_{j}\cdot{\bf r}}\chi_{\sigma} \chi_{\tau}\;\;, \tag{11}\]
where \(V\) is the volume of the system and the \(\chi\) are the Pauli spinors related to the spin of the fermion and, possibly, to its isospin. The third components of spin and isospin are indicated as \(\sigma\) and \(\tau\), respectively. The physical quantities of interest are those independent of \(V\), whose value is taken to be infinite at the end of the calculations.
The solution of the Fermi gas model provides a continuum set of single-particle energies. Each energy is characterized by \(k\equiv|{\bf k}|\), as indicated by Equation (10). In the ground state of the system, all the s.p. states with \(k\) smaller than a value \(k_{\rm F}\), called the Fermi momentum, are fully occupied and those with \(k>k_{\rm F}\) are empty. Each state has a degeneracy of 2 in the case of the electron gas and of 4 for nuclear matter, where each nucleon is also characterized by the third component of the isospin.
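The filling of the Fermi sphere can be made concrete with a small sketch: assuming periodic boundary conditions, the allowed momenta of a cubic box of side \(L\) are \(\mathbf{k}=(2\pi/L)\,\mathbf{n}\) with integer \(\mathbf{n}\); occupying the lowest-energy states (Equation (10)) with \(N\) fermions yields an estimate of \(k_{\rm F}\). The box side, particle number and momentum cutoff below are illustrative.

```python
import numpy as np

def fermi_momentum(N, L, degeneracy=2, nmax=10):
    # Enumerate momenta k = (2*pi/L) * n; each state hosts `degeneracy`
    # fermions (2 for electrons, 4 for nucleons). nmax must be large
    # enough that the lowest N/degeneracy states are all enumerated.
    ns = np.array([(nx, ny, nz)
                   for nx in range(-nmax, nmax + 1)
                   for ny in range(-nmax, nmax + 1)
                   for nz in range(-nmax, nmax + 1)])
    k = 2 * np.pi / L * np.linalg.norm(ns, axis=1)
    k.sort()                               # fill in order of energy, Eq. (10)
    n_states = int(np.ceil(N / degeneracy))
    return k[n_states - 1]                 # last occupied |k| ~ k_F
```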
### Hartree-Fock Theory
The theoretical foundation of the MF model is provided by the Hartree-Fock (HF) theory, which is based on the application of the variational principle, one of the most used methods to solve the Schrodinger equation in an approximate manner. The basic idea is that the wave function which minimizes the energy, considered as a functional of the many-body wave function, is the correct eigenfunction of the hamiltonian. This statement is correct when the search for the minimum is carried out by considering the full Hilbert space. In reality, the problem is simplified by assuming a specific expression of the wave function, and the search for the minimum is carried out in the subspace spanned by all the wave functions which have the chosen expression. The energy value obtained in this manner is an upper bound of the correct energy eigenvalue of the hamiltonian. The formal properties of the variational principle are discussed in quantum mechanics textbooks.
For a fermion system, the HF equations are obtained by considering trial many-body wave functions which are expressed as a single Slater determinant. This implies the existence of an orthonormal basis of s.p. wave functions. The requirement that the s.p. wave functions are orthonormalized is a condition inserted in the variational equations in terms of Lagrange multipliers.
We continue this discussion by using Occupation Number Representation (ONR) formalism, which describes the operators acting on the Hilbert space in terms of creation \(\hat{a}^{+}_{\nu}\) and destruction \(\hat{a}_{\nu}\) operators. Concise presentations of this formalism are given in various textbooks, for example, in Appendix 2A of [11], in Appendix C of [12], in Appendix C of [13], in Chapter 4 of [14] and in Chapter 1 of [15].
In Appendix A we show that the hamiltonian of the many-body system, if only two-body interactions are considered, can be written as
\[\hat{H}=\sum_{\nu}\epsilon_{\nu}\hat{a}^{+}_{\nu}\hat{a}_{\nu}-\frac{1}{2} \sum_{ij}\overline{V}_{ijij}+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime} }\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{ \nu}\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}]=\hat{H }_{0}+\hat{V}_{\rm res}\;\;, \tag{12}\]
where \(\hat{H}_{0}\) is the sum of the first two terms, while \(\hat{V}_{\rm res}\) is the last term. We use the common convention of indicating with the latin letters \(h,i,j,k,l\) s.p. states below the Fermi surface (hole states) and with the \(m,n,p,q,r\) letters the s.p. states above the Fermi energy (particle states). Greek letters indicate generic indices; therefore, in the above equation, their sums run over the full set of s.p. states. In Equation (12), \(\epsilon_{\nu}\) is the energy of the s.p. state characterized by the \(\nu\) quantum numbers and \(\overline{V}\) is the antisymmetrized matrix element of the interaction defined as
\[\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\equiv\langle\nu\mu|\hat{V}|\nu^ {\prime}\mu^{\prime}\rangle-\langle\nu\mu|\hat{V}|\mu^{\prime}\nu^{\prime} \rangle\;\;. \tag{13}\]
With the symbol \(\hat{\mathbb{N}}\), we indicate the normal order operator which, by definition, arranges the set of creation and destruction operators in the brackets such that their expectation value on the ground state is zero. By considering this property of \(\hat{\mathbb{N}}\), the expectation value of the hamiltonian between two Slater determinants assumes the expression
\[\begin{split}\langle\Phi_{0}|\hat{H}|\Phi_{0}\rangle&=\langle\Phi_{0}|\hat{H}_{0}|\Phi_{0}\rangle+\langle\Phi_{0}|\hat{V}_{\rm res}|\Phi_{0}\rangle\\ &=\sum_{\nu}\epsilon_{\nu}\langle\Phi_{0}|\hat{a}^{+}_{\nu}\hat{a}_{\nu}|\Phi_{0}\rangle-\frac{1}{2}\sum_{ij}\overline{V}_{ijij}\langle\Phi_{0}|\Phi_{0}\rangle+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\langle\Phi_{0}|\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}]|\Phi_{0}\rangle\\ &=\sum_{i}\epsilon_{i}-\frac{1}{2}\sum_{ij}\overline{V}_{ijij}\equiv\mathcal{E}_{0}[\Phi_{0}]\;\;,\end{split}\tag{14}\]
which clearly indicates that the contribution of the residual interaction is zero and the only part of the interaction which is considered is the one-body term \(\hat{H}_{0}\). This is a consequence of considering a single Slater determinant to describe the system ground state.
In Equation (14), we expressed the energy \(\mathcal{E}_{0}\) as a functional of the Slater determinant \(\Phi_{0}\). The search for the minimum of the energy functional is carried out in the Hilbert subspace spanned by Slater determinants. The quantities to be varied are the s.p. wave functions forming these determinants. These s.p. wave functions must be orthonormalized and this is an additional condition which has to be imposed in doing the variations. Therefore, the problem to be solved is the search for a constrained minimum and it is tackled by using the Lagrange multipliers technique.
The calculation is well known in the literature (see, for example, chapter XVIII-9 of [16] or Chapter 8.4 of [17]). The final result is a set of non-linear integro-differential equations providing the s.p. wave functions \(\phi_{k}\) and the values of the Lagrange multipliers \(\epsilon_{k}\). In coordinate space, these equations can be expressed as
\[\hat{h}\phi_{k}({\bf r})=-\frac{\hbar^{2}\nabla^{2}}{2m}\phi_{k}({\bf r})+\underbrace{\hat{U}({\bf r})\phi_{k}({\bf r})}_{\rm Hartree}-\underbrace{\int d^{3}r^{\prime}\hat{W}({\bf r},{\bf r}^{\prime})\phi_{k}({\bf r}^{\prime})}_{\rm Fock-Dirac}=\epsilon_{k}\phi_{k}({\bf r})\;\;, \tag{15}\]
where the Hartree average potential is defined as
\[\hat{U}({\bf r})\equiv\sum_{j}\int d^{3}r^{\prime}\phi_{j}^{*}({\bf r}^{ \prime})\hat{V}({\bf r},{\bf r}^{\prime})\phi_{j}({\bf r}^{\prime})\;\;, \tag{16}\]
and the non-local Fock-Dirac term is
\[\hat{W}({\bf r},{\bf r}^{\prime})\equiv\sum_{j}\phi_{j}^{*}({\bf r}^{\prime}) \hat{V}({\bf r},{\bf r}^{\prime})\phi_{j}({\bf r})\;\;. \tag{17}\]
At this stage, the \(\epsilon_{k}\) are the values of the Lagrange multipliers. A theorem due to Koopmans [18] shows that these quantities are the differences between the energies of systems with \(A+1\) and \(A\) particles; therefore, they are identified as s.p. energies.
By neglecting the Fock-Dirac term, we obtain a differential equation of MF type. The Fock-Dirac term, also called the exchange term, changes the bare mean-field equation by inserting the effect of the Pauli exclusion principle.
The differential Equation (15) is solved numerically by using an iterative procedure. One starts with a set of trial wave functions \(\phi_{k}^{(1)}\) built with MF methods. With these trial wave functions, the Hartree (16) and Fock-Dirac (17) terms are calculated and included in Equation (15) which is solved with standard numerical methods. In this way, a new set of s.p. wave functions \(\phi_{k}^{(2)}\) is obtained and it is used to calculate new \(\hat{U}\) and \(\hat{W}\) potentials. The process continues up to convergence.
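As an illustration of this iterative scheme, the following sketch solves Equations (15)-(17) for a toy model: two spinless fermions in a one-dimensional harmonic trap interacting through a Gaussian potential, with \(\hbar=m=1\). All parameters are arbitrary illustrative choices, not a realistic effective interaction, and the plain iteration without mixing is adequate only because the toy interaction is weak.

```python
import numpy as np

# Toy 1D Hartree-Fock (hbar = m = 1): A = 2 spinless fermions in a
# harmonic trap, interacting through a Gaussian two-body potential.
N, A, g, s = 201, 2, 1.0, 0.5
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]

# Kinetic energy -1/2 d^2/dx^2 with three-point finite differences
T = (np.eye(N) - 0.5 * np.eye(N, k=1) - 0.5 * np.eye(N, k=-1)) / dx**2
V_ext = 0.5 * x**2
V2 = g * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * s**2))   # v(x, x')

# Starting point: the orbitals of the external potential alone
phi = np.linalg.eigh(T + np.diag(V_ext))[1][:, :A] / np.sqrt(dx)

for iteration in range(200):
    rho = (phi**2).sum(axis=1)          # one-body density
    U_H = V2 @ rho * dx                 # Hartree potential, Equation (16)
    K = V2 * (phi @ phi.T) * dx         # Fock-Dirac kernel, Equation (17)
    h = T + np.diag(V_ext + U_H) - K    # single-particle hamiltonian, Eq. (15)
    eps, vec = np.linalg.eigh(h)
    phi_new = vec[:, :A] / np.sqrt(dx)
    if np.max(np.abs(np.abs(phi_new) - np.abs(phi))) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

print("converged s.p. energies:", eps[:A])
```

The loop reproduces exactly the procedure described above: the Hartree and Fock-Dirac terms are rebuilt from the current orbitals, the one-body problem is rediagonalized, and the cycle repeats up to convergence.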
As already pointed out in the introduction, the interactions used in the HF calculations are not the microscopic interactions built to reproduce the experimental data of the two-particle systems. These microscopic interactions contain a strongly repulsive core and, if inserted in the integrals of Equations (15) and (16), they would produce terms much larger than \(\epsilon_{k}\). This would amount to calculating a relatively small number by summing and subtracting relatively large numbers. HF calculations require interactions which have already tamed the strongly repulsive core (an early discussion of this problem can be found in Chapter 13 of [12]).
### Density Functional Theory
The HF theory is widely utilized in nuclear and atomic physics, but there are two problems concerning its use. The first is related to the formal development of the theory, and it shows up mainly in the nuclear physics framework, where the commonly used effective interactions have a phenomenological input containing terms explicitly dependent on the density of the system. Without these terms, the HF calculations do not reproduce binding energies and densities of nuclei. The addition of these terms allows the construction of interactions able to produce high quality results all through the nuclide table. The physics simulated by these density dependent terms is still a matter of study. Formally, the variational principle used to derive the HF equation is not valid when the interaction depends explicitly on the density.
The second problem is of pragmatic type and it is related to the difficulty in evaluating the Fock-Dirac term of Equation (15) for complicated systems which do not show a well defined symmetry, for example, complex molecules.
The Density Functional Theory (DFT) solves both problems. This theory is based on a theorem of Hohenberg and Kohn [19], formulated in the 1960s.
Let us express the hamiltonian of a system of \(A\) fermions of mass \(m\) as:
\[\hat{H}=\hat{T}+\hat{U}_{ext}+\hat{V}\, \tag{18}\]
with
\[\hat{T}=\sum_{i=1}^{A}-\hbar^{2}\frac{\bigtriangledown_{i}^{2}}{2m}\quad,\quad \hat{U}_{ext}=\sum_{i=1}^{A}\hat{u}_{ext}(i)\quad,\quad\hat{V}=\frac{1}{2}\sum _{i,j=1}^{A}\hat{v}(i,j)\, \tag{19}\]
The kinetic energy term \(\hat{T}\) and the external potential \(\hat{U}_{ext}\) are one-body operators, while the interaction term \(\hat{V}\) is a two-body potential. The kinetic energy term and \(\hat{V}\) are characteristic of the many-fermion system, while \(\hat{U}_{ext}\) depends on external conditions and, therefore, in principle, can be modified.
The Hohenberg-Kohn theorem states that there is a bijective correspondence between the external potential \(\hat{U}_{ext}\), the ground state \(|\Psi_{0}\rangle\) and the number density
\[\rho_{0}(\mathbf{r})=\langle\Psi_{0}|\sum_{i=1}^{A}\delta(\mathbf{r}-\mathbf{ r}_{i})|\Psi_{0}\rangle, \tag{20}\]
of the system.
The theorem has the following implications.
1. Because of the bijective mapping \[\hat{U}_{ext}\Longleftrightarrow|\Psi_{0}\rangle\Longleftrightarrow\rho_{0}\.\] (21) we can consider the states \(|\Psi_{0}\rangle\) as functionals of the density \(\rho_{0}\).
2. Because of (a), every observable is also a functional of \(\rho_{0}\). Specifically, this is true for the energy of the system \[E[\rho_{0}]=\langle\Psi[\rho_{0}]|\hat{H}|\Psi[\rho_{0}]\rangle=F[\rho_{0}]+ \int d^{3}r\,\hat{U}_{ext}(\mathbf{r})\,\rho_{0}(\mathbf{r}),\] (22) where the universal part, the part independent of the external potential, is defined as \[F[\rho_{0}]\equiv\langle\Psi[\rho_{0}]|\left(\hat{T}+\hat{V}\right)|\Psi[\rho_ {0}]\rangle.\] (23)
3. The variational principle implies that for each \(\rho\neq\rho_{0}\) the following relation holds: \[E_{0}\equiv E[\rho_{0}]<E[\rho].\] (24)
The focus of the theory has moved from the many-body wave function \(|\Psi_{0}\rangle\) to the much simpler one-body density \(\rho_{0}\). The idea of Kohn and Sham [20] is to reproduce the ground state density \(\rho_{0}\) of a system of interacting fermions by using a fictitious system of non-interacting fermions. This is done by changing the external part of the hamiltonian. In this view, the density (20) is expressed as a sum of orthonormalized s.p. wave functions
\[\rho_{0}(\mathbf{r})=\sum_{\epsilon_{i}<\epsilon_{\mathrm{F}}}|\phi_{i}^{\mathrm{KS}}(\mathbf{r})|^{2}, \tag{25}\]
where \(\epsilon_{\mathrm{F}}\) is the Fermi energy and KS indicates Kohn and Sham. The density (25) is generated by a one-body hamiltonian whose eigenstate is a Slater determinant \(|\Phi^{\mathrm{KS}}\rangle\). The energy functional built in the Kohn and Sham approach is usually expressed as:
\[E[\rho_{0}]=T^{\mathrm{KS}}[\rho_{0}]+E_{\mathrm{H}}^{\mathrm{KS}}[\rho_{0}]+E _{\mathrm{ext}}^{\mathrm{KS}}[\rho_{0}]+E_{\mathrm{xc}}^{\mathrm{KS}}[\rho_{0}], \tag{26}\]
where there is a kinetic energy term,
\[T^{\rm KS}[\rho_{0}]=\langle\Phi^{\rm KS}|\hat{T}|\Phi^{\rm KS}\rangle=\int\,d^{3}r\,\sum_{i}\phi_{i}^{\rm KS\,*}({\bf r})\left(-\frac{\hbar^{2}\nabla^{2}}{2m}\right)\phi_{i}^{\rm KS}({\bf r}), \tag{27}\]
a Hartree term,
\[E_{\rm H}^{\rm KS}[\rho_{0}]=\frac{1}{2}\int\,d^{3}r_{i}\int\,d^{3}r_{j}\,\rho_{0}({\bf r}_{i})\hat{v}({\bf r}_{i},{\bf r}_{j})\rho_{0}({\bf r}_{j}), \tag{28}\]
and an external mean-field term
\[E_{\rm ext}^{\rm KS}[\rho_{0}]=\int\,d^{3}r\,\rho_{0}({\bf r})\hat{U}_{\rm ext}^{\rm KS}({\bf r}). \tag{29}\]
The additional term, \(E_{\rm xc}^{\rm KS}\), is the so-called exchange-correlation term.
The variational principle is applied to the energy functional (26) and the final result is, again, a set of non-linear integro-differential equations, which allows the evaluation of the Kohn and Sham s.p. wave functions
\[\left\{-\frac{\hbar^{2}\nabla^{2}}{2m}+\int d^{3}r_{j}\hat{v}({\bf r},{\bf r} _{j})\rho_{0}({\bf r}_{j})+\hat{U}_{\rm ext}^{\rm KS}({\bf r})+\hat{U}_{\rm xc }^{\rm KS}({\bf r})\right\}\phi_{i}^{\rm KS}({\bf r})=\epsilon_{i}\phi_{i}^{ \rm KS}({\bf r}). \tag{30}\]
This set of equations is solved numerically with iterative techniques analogous to those used in the HF case. In Equation (30), only local terms appear, contrary to the HF equations which contain the non-local Fock-Dirac term. This makes the numerical solution of the KS equations much simpler than that of the HF equations and allows an application of the theory to systems difficult to treat with HF.
While the only input of the HF theory is the effective interaction \(\hat{V}\), in the DFT one has, in addition, to define the exchange and correlation term \(\hat{U}_{\rm xc}^{\rm KS}\). The strategy for choosing this term is an open problem of investigation in the field.
Formally speaking, the s.p. wave functions \(\phi^{\rm KS}\) and the Lagrange multipliers \(\epsilon_{k}\) of Equation (30) do not have a well defined physical interpretation. From the pragmatic point of view, the values of these latter quantities are very close to the s.p. energies of the HF theory defined by Koopmans' theorem.
### Excited States in the Independent Particle Model
The IPM is quite successful in describing the ground state properties of the fermion systems. This is also due to the fact that effective interactions are tailored to make this work. A good example of this is provided by the AMEDEE compilation of Hartree-Fock-Bogoliubov results concerning the ground states of nuclear isotope chains from \(Z=6\) up to \(Z=130\)[21]. Experimental values of binding energies and charge density radii are described with excellent accuracy by using a unique and universal effective nucleon-nucleon interaction. The situation changes immediately as soon as one tries to apply the same theoretical scheme to describe excited states.
The basic ansatz of the IPM is that a many fermion system can be described by a single Slater determinant \(|\Phi\rangle\). The Slater determinant describing the ground state, \(|\Phi_{0}\rangle\), has all the s.p. states below the Fermi energy (hole states) fully occupied, while those above it (particle states) are completely empty. In this picture, excited states are obtained by promoting particles from states below the Fermi surface to states above it. By using the ONR, this procedure can be formally described as
\[|\Phi_{N}\rangle=\hat{a}_{p_{1}}^{+}\cdots\hat{a}_{p_{N}}^{+}\hat{a}_{h_{1}} \cdots\hat{a}_{h_{N}}\,|\Phi_{0}\rangle\,, \tag{31}\]
where the \(p\)'s indicate particle states and the \(h\)'s the hole states. The number \(N\) of creation or destruction operators is obviously smaller than \(A\), the number of fermions. The state \(|\Phi_{N}\rangle\) is a Slater determinant where \(N\) hole states have been changed with \(N\) particle states and it is the eigenstate of the IPM hamiltonian
\[\hat{H}_{0}\,|\Phi_{N}\rangle=\mathcal{E}_{N}\,|\Phi_{N}\rangle\,. \tag{32}\]
The excitation energy of this system is given by the difference between the s.p. energies of the particle states and that of the hole states
\[\omega_{N}^{\rm IPM}\equiv\mathcal{E}_{N}-\mathcal{E}_{0}=\epsilon_{p_{1}}+ \epsilon_{p_{2}}+\cdots+\epsilon_{p_{N}}-(\epsilon_{h_{1}}+\epsilon_{h_{2}}+ \cdots+\epsilon_{h_{N}}). \tag{33}\]
A good example of the failure of this approach in describing the excitations of a many-body system is provided by the case of the \({}^{208}\)Pb nucleus. We show in Figure 1 the scheme of the s.p. levels around the Fermi energy of this nucleus. The energies of these levels have been obtained by exploiting Koopmans' theorem, i.e., by subtracting the experimental binding energies of the nuclei with one nucleon more, or one nucleon less, than \({}^{208}\)Pb. These nuclei are \({}^{207}\)Tl, \({}^{209}\)Bi and the two lead isotopes \({}^{207}\)Pb and \({}^{209}\)Pb. From the experimental values of the angular momenta of these odd-even nuclei, we identified the quantum numbers of the s.p. levels.
The first excited state in the IPM framework is obtained by promoting the nucleon lying on the s.p. state just below the Fermi surface to the state just above it. In the present case, this one-particle one-hole (\(1p-1h\)) excitation for the protons is produced by the transition from the \(3s_{1/2}\) state to the \(1h_{9/2}\) state. The excitation energy of this transition is 4.209 MeV, the parity is negative and the total angular momentum is 4 or 5. The analogous transition for the neutrons also implies a negative parity and an excitation energy of 3.431 MeV, and also in this case the angular momentum of the excited state can be 4 or 5. Measurements indicate that the first excited state of \({}^{208}\)Pb has an excitation energy of 2.614 MeV with angular momentum 3 and negative parity. Evidently, the IPM is unable to predict the presence of this state. The part of the hamiltonian disregarded by the IPM, the residual interaction, plays an important role. RPA considers the presence of the residual interaction in the description of the excitations of a many-body system.
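The quantum numbers quoted above can be verified explicitly. A \(1p-1h\) excitation built on the \(3s_{1/2}\rightarrow 1h_{9/2}\) transition carries the parity and angular momenta
\[\pi=(-1)^{l_{h}}\,(-1)^{l_{p}}=(-1)^{0}\,(-1)^{5}=-1\ ,\qquad J=|j_{p}-j_{h}|,\ldots,j_{p}+j_{h}=4,5\ ,\]
since the \(s\) and \(h\) s.p. states have orbital angular momenta \(l=0\) and \(l=5\), respectively.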
## 3 RPA with the Equation of Motion Method
The first approach I present in order to obtain the RPA secular equations is the Equation of Motion (EOM) method inspired by the Heisenberg picture of quantum mechanics.
Let us define an operator, \(\hat{Q}_{\nu}^{+}\), whose action on the ground state of the system defines its excited states
\[\hat{Q}_{\nu}^{+}\left|\Psi_{0}\right\rangle=\left|\Psi_{\nu}\right\rangle, \tag{34}\]
which satisfy the eigenvalue equation
\[\hat{H}\left|\Psi_{\nu}\right\rangle=E_{\nu}\left|\Psi_{\nu}\right\rangle. \tag{35}\]
In the above equations, the index \(\nu\) indicates all the quantum numbers characterizing the excited state. For example, in a finite fermion system, they are the excitation energy, the total angular momentum and the parity. The choice of \(\hat{Q}_{\nu}^{+}\) defines completely the problem to be solved, and also the ground state of the system through the equation
\[\hat{Q}_{\nu}\left|\Psi_{0}\right\rangle=0. \tag{36}\]
Figure 1: Sketch of the s.p. levels around the Fermi surface in \({}^{208}\)Pb. The numbers indicate, in MeV units, the s.p. energies obtained as differences between the experimental binding energies of the nuclei with one nucleon more or less than \({}^{208}\)Pb.
It is worth remarking that the states \(\left|\Psi_{\nu}\right\rangle\) are not eigenstates of the full hamiltonian \(\hat{H}\) but, depending on the choice of \(\hat{Q}_{\nu}^{+}\), they are eigenstates of only a part of the hamiltonian. For example, if \(\hat{Q}_{\nu}^{+}=\hat{a}_{p}^{+}\hat{a}_{h}\), ground and excited states are Slater determinants of the IPM described in Section 2.4. As has been already pointed out by discussing Equation (14), this choice does not consider the contribution of the residual interaction.
Let us calculate the commutator of the \(\hat{Q}_{\nu}^{+}\) operator with the hamiltonian
\[\left[\hat{H},\hat{Q}_{\nu}^{+}\right]\left|\Psi_{0}\right\rangle = \left(\hat{H}\hat{Q}_{\nu}^{+}-\hat{Q}_{\nu}^{+}\hat{H}\right) \left|\Psi_{0}\right\rangle=\hat{H}\left|\Psi_{\nu}\right\rangle-\hat{Q}_{\nu }^{+}E_{0}\left|\Psi_{0}\right\rangle \tag{37}\] \[= E_{\nu}\left|\Psi_{\nu}\right\rangle-\hat{Q}_{\nu}^{+}E_{0} \left|\Psi_{0}\right\rangle=\left(E_{\nu}-E_{0}\right)\hat{Q}_{\nu}^{+}\left| \Psi_{0}\right\rangle,\]
and for the operator \(\hat{Q}_{\nu}\), we obtain
\[\left[\hat{H},\hat{Q}_{\nu}\right]\left|\Psi_{0}\right\rangle=\left(\hat{H} \hat{Q}_{\nu}-\hat{Q}_{\nu}\hat{H}\right)\left|\Psi_{0}\right\rangle=\hat{H} \hat{Q}_{\nu}\left|\Psi_{0}\right\rangle-E_{0}\hat{Q}_{\nu}\left|\Psi_{0} \right\rangle=0, \tag{38}\]
because of Equation (36).
We multiply Equation (37) by a generic operator \(\hat{\Theta}\) and by \(\langle\Psi_{0}|\), and we subtract the complex conjugate. By using Equations (37) and (38), we obtain
\[\langle\Psi_{0}|\left[\hat{\Theta},\left[\hat{H},\hat{Q}_{\nu}^{+}\right]\right]|\Psi_{0}\rangle=\left(E_{\nu}-E_{0}\right)\langle\Psi_{0}|\hat{\Theta}\hat{Q}_{\nu}^{+}|\Psi_{0}\rangle=\left(E_{\nu}-E_{0}\right)\langle\Psi_{0}|\left[\hat{\Theta},\hat{Q}_{\nu}^{+}\right]|\Psi_{0}\rangle, \tag{39}\]
since \(\left\langle\Psi_{0}\right|\hat{Q}_{\nu}^{+}=0\).
This result is independent of the expression of the operator \(\hat{\Theta}\). In the construction of the various theories describing the system excited states, the \(\hat{\Theta}\) operator is substituted by the \(\delta\hat{Q}_{\nu}\) operator representing an infinitesimal variation of the excitation operator defined by Equation (34).
### Tamm-Dancoff Approximation
A first choice of the \(\hat{Q}_{\nu}^{+}\) consists in considering the excited state as a linear combination of particle-hole excitations. This means that the excited state is no longer a single Slater determinant as in the IPM, but it is described by a sum of them. This choice of \(\hat{Q}_{\nu}^{+}\), leading to the so-called Tamm-Dancoff approximation (TDA), is
\[\hat{Q}_{\nu}^{+}\equiv\sum_{m\,i}X_{mi}^{\nu}\hat{a}_{m}^{+}\hat{a}_{i}, \tag{40}\]
where \(X_{mi}^{\nu}\) is a real number and the usual convention of indicating the hole states with the letters \(h,i,j,k,l\) and the particle states with \(m,n,p,q,r\) has been adopted.
The definition (40) of the \(\hat{Q}_{\nu}^{+}\) operator implies that the ground state \(\left|\Psi_{0}\right\rangle\) satisfying Equations (37) and (38) is the IPM ground state \(\left|\Phi_{0}\right\rangle\). Indeed,
\[\hat{Q}_{\nu}\left|\Phi_{0}\right\rangle=\sum_{m\,i}X_{mi}^{\nu}\hat{a}_{i}^{+ }\hat{a}_{m}\left|\Phi_{0}\right\rangle=0, \tag{41}\]
since it is not possible to remove particles above the Fermi surface or to put particles below it.
An infinitesimal variation of the \(\hat{Q}_{\nu}\) operator can be expressed as
\[\delta\hat{Q}_{\nu}=\sum_{m\,i}\hat{a}_{i}^{+}\hat{a}_{m}\delta X_{mi}^{*\nu}, \tag{42}\]
since only the amplitudes \(X_{mi}^{*\nu}\) can change. By substituting \(\hat{\Theta}\) with \(\delta\hat{Q}_{\nu}\) in Equation (39), we obtain
\[\left\langle\Phi_{0}\right|\biggl{[}\sum_{m\,i}\hat{a}_{i}^{+} \hat{a}_{m}\delta X_{mi}^{*\nu},\biggl{[}\hat{H},\sum_{n\,j}X_{nj}^{\nu}\hat{a} _{n}^{+}\hat{a}_{j}\biggr{]}\biggr{]}\left|\Phi_{0}\right\rangle \tag{43}\] \[= \left(E_{\nu}-E_{0}\right)\left\langle\Phi_{0}\right|\biggl{[} \sum_{m\,i}\hat{a}_{i}^{+}\hat{a}_{m}\delta X_{mi}^{*\nu},\sum_{n\,j}X_{nj}^{ \nu}\hat{a}_{n}^{+}\hat{a}_{j}\biggr{]}\left|\Phi_{0}\right\rangle.\]
Every variation \(\delta X_{ph}^{*\nu}\) is independent of the other ones. For this reason, the above equation is a sum of terms independent of each other, and it is satisfied if all the terms related to the same variation of \(X_{ph}^{\nu}\) satisfy the relation separately. We can formally express this concept by considering a single term of the sum and by dividing it by \(\delta X_{ph}^{*\nu}\) which is, by our choice, different from zero
\[\langle\Phi_{0}|\biggl{[}\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\sum_{n\,j}X_{nj}^{ \nu}\hat{a}_{n}^{+}\hat{a}_{j}]\biggr{]}|\Phi_{0}\rangle=(E_{\nu}-E_{0})\sum_{n \,j}X_{nj}^{\nu}\,\langle\Phi_{0}|\left[a_{i}^{+}a_{m},a_{n}^{+}a_{j}\right]| \Phi_{0}\rangle\,. \tag{44}\]
Let us calculate the right hand side of Equation (44):
\[\langle\Phi_{0}|\left[\hat{a}_{i}^{+}\hat{a}_{m},\hat{a}_{n}^{+}\hat{a}_{j}\right]|\Phi_{0}\rangle=\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\,. \tag{45}\]
We apply Wick's theorem (see, for example, Ref. [22]) to the first term
\[\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle=\delta_{mn}\delta_{ij}, \tag{46}\]
where the only non-vanishing contractions are \(\hat{a}_{m}\hat{a}_{n}^{+}\rightarrow\delta_{mn}\) and \(\hat{a}_{i}^{+}\hat{a}_{j}\rightarrow\delta_{ij}\).
The second term of Equation (45) is zero since \(\hat{a}_{m}\,|\Phi_{0}\rangle=0\;.\) By using this result in Equation (44), we obtain
\[\langle\Phi_{0}|\biggl{[}\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\sum_{n\,j}X_{nj }^{\nu}\hat{a}_{n}^{+}\hat{a}_{j}]\biggr{]}|\Phi_{0}\rangle=(E_{\nu}-E_{0})\,X _{mi}^{\nu}. \tag{47}\]
The evaluation of the double commutator of the left hand side of Equation (47) is explicitly presented in Appendix B. We insert the results of Equations (305) and (310) into Equation (47) and we consider the symmetry properties of the antisymmetrized matrix element of the interaction \(\overline{V}_{\alpha,\beta,\alpha^{\prime},\beta^{\prime}}\), Equation (13). Finally, we obtain the TDA equations:
\[\sum_{nj}X_{nj}^{\nu}\left[(\epsilon_{n}-\epsilon_{j})\delta_{mn}\delta_{ij}+ \overline{V}_{mjin}\right]=(E_{\nu}-E_{0})X_{mi}^{\nu}. \tag{48}\]
The expression (48) represents a homogeneous system of linear equations whose unknowns are the \(X_{mi}^{\nu}\). The number of unknowns, and therefore of solutions, is given by the number of particle-hole pairs used to truncate the sum.
The normalization condition of the excited state induces a relation between the \(X_{mi}^{\nu}\) amplitudes:
\[\begin{split}1&=\langle\Psi_{\nu}|\Psi_{\nu}\rangle=\langle\Phi_{0}|\hat{Q}_{\nu}\hat{Q}_{\nu}^{+}|\Phi_{0}\rangle=\langle\Phi_{0}|\sum_{p\,h}\hat{a}_{h}^{+}\hat{a}_{p}X_{ph}^{*\nu}\sum_{p^{\prime}h^{\prime}}X_{p^{\prime}h^{\prime}}^{\nu}\hat{a}_{p^{\prime}}^{+}\hat{a}_{h^{\prime}}|\Phi_{0}\rangle\\ &=\sum_{p\,h}\sum_{p^{\prime}\,h^{\prime}}X_{ph}^{*\nu}X_{p^{\prime}h^{\prime}}^{\nu}\langle\Phi_{0}|\hat{a}_{h}^{+}\hat{a}_{p}\hat{a}_{p^{\prime}}^{+}\hat{a}_{h^{\prime}}|\Phi_{0}\rangle=\sum_{p\,h}|X_{ph}^{\nu}|^{2},\end{split}\tag{49}\]
which defines without ambiguity the values of the \(X_{ph}^{\nu}\) and suggests their probabilistic interpretation.
The TDA theory describes not only the energy spectrum of the system, but also for each excited state it provides the many-body wave function written in terms of single-particle states. This allows the calculation of the transition probability from the ground state to an excited state.
Let us assume that the action of the external field which excites the system is described by a one-body operator
\[\hat{F}=\sum_{\mu\mu^{\prime}}\,\langle\mu|\hat{f}|\mu^{\prime}\rangle\,\hat{a }_{\mu}^{+}\hat{a}_{\mu^{\prime}}\equiv\sum_{\mu\mu^{\prime}}f_{\mu\mu^{ \prime}}\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}. \tag{50}\]
The transition probability from the ground state to a TDA excited state is
\[\begin{split}\langle\Psi_{\nu}|\hat{F}|\Psi_{0}\rangle&=\langle\Phi_{0}|\hat{Q}_{\nu}\hat{F}|\Phi_{0}\rangle\\ &=\langle\Phi_{0}|\sum_{mi}X_{mi}^{*\nu}\hat{a}_{i}^{+}\hat{a}_{m}\sum_{\mu\mu^{\prime}}f_{\mu\mu^{\prime}}\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}|\Phi_{0}\rangle\\ &=\sum_{mi}X_{mi}^{*\nu}\sum_{\mu\mu^{\prime}}f_{\mu\mu^{\prime}}\,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}|\Phi_{0}\rangle\\ &=\sum_{mi}X_{mi}^{*\nu}\sum_{\mu\mu^{\prime}}f_{\mu\mu^{\prime}}\,\delta_{i\mu^{\prime}}\delta_{m\mu}=\sum_{mi}X_{mi}^{*\nu}f_{mi},\end{split}\tag{51}\]
where we used Wick's theorem as in Equation (46). The many-body transition probabilities are described in terms of single-particle transition probabilities.
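A minimal numerical sketch may clarify how Equations (48), (49) and (51) work together; all entries below (ph energies, interaction matrix elements, s.p. transition amplitudes) are illustrative numbers, not the output of a real calculation.

```python
import numpy as np

# Toy TDA problem, Equation (48): two ph pairs with unperturbed
# energies epsilon_n - epsilon_j and a schematic residual interaction.
e_ph = np.array([3.0, 4.0])
Vbar = 0.3 * np.ones((2, 2))        # matrix elements V_{mjin} (toy values)
A_tda = np.diag(e_ph) + Vbar

omega, X = np.linalg.eigh(A_tda)    # omega = E_nu - E_0; columns of X = X^nu_mi
print("TDA energies       :", omega)
# eigh returns orthonormal eigenvectors, so normalization (49) is automatic:
print("sum_mi |X^nu_mi|^2 :", (X**2).sum(axis=0))

f_mi = np.array([1.0, 0.8])         # toy s.p. matrix elements f_{mi}
for nu in range(len(omega)):
    print(f"nu={nu}: <nu|F|0> = {X[:, nu] @ f_mi:+.3f}")   # Equation (51)
```

The diagonalization mixes the two ph configurations, and the transition amplitude of each eigenstate is the corresponding linear combination of the s.p. amplitudes, exactly as Equation (51) states.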
### Random Phase Approximation
#### 3.2.1 Limits of the TDA
The comparison between the TDA results and the experimental data is not satisfactory, especially in nuclear physics. For this reason, since the second half of the 1960s, the assumptions underlying the TDA theory have been carefully analyzed. These assumptions are related to the choice of the expression (40) of the \(\hat{Q}_{\nu}^{+}\) operator. From these studies, it appeared clear that this choice is inconsistent with the equations of motion (39).
This inconsistency can be seen in the following manner. The equations of motion (39) were obtained without making any assumption about the operator \(\hat{\Theta}\). For the operator \(\hat{\Theta}=\hat{a}_{m}^{+}\hat{a}_{i}\), the equations of motion are:
\[\left\langle\Psi_{0}\right|\left[\hat{a}_{m}^{+}\hat{a}_{i},[\hat{H},\hat{Q}_{ \nu}^{+}]\right]\left|\Psi_{0}\right\rangle=\left(E_{\nu}-E_{0}\right)\left\langle \Psi_{0}|\hat{a}_{m}^{+}\hat{a}_{i}\hat{Q}_{\nu}^{+}|\Psi_{0}\right\rangle= \left(E_{\nu}-E_{0}\right)\left\langle\Psi_{0}\right|\left[\hat{a}_{m}^{+} \hat{a}_{i},\hat{Q}_{\nu}^{+}\right]|\Psi_{0}\rangle\,. \tag{52}\]
By inserting the expression (40) of the TDA operator in the right hand side of the above equation, we obtain
\[\sum_{nj}X_{nj}^{\nu}\left\langle\Phi_{0}\right|\left[\hat{a}_{m}^{+}\hat{a}_{i},\hat{a}_{n}^{+}\hat{a}_{j}\right]|\Phi_{0}\rangle=\sum_{nj}X_{nj}^{\nu}\left\{\langle\Phi_{0}|\hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{m}^{+}\hat{a}_{i}|\Phi_{0}\rangle\right\}=0. \tag{53}\]
This result requires that also the left hand side of Equation (52) be zero. The one-body term of the hamiltonian has a double commutator equal to zero
\[\sum_{\alpha\beta}h_{\alpha,\beta}\left\langle\Phi_{0}\right|\left[\hat{a}_{m }^{+}\hat{a}_{i},(\hat{a}_{\alpha}^{+}\hat{a}_{j}\delta_{n\beta}-\hat{a}_{n}^{ +}\hat{a}_{\beta}\delta_{j\alpha})\right]|\Phi_{0}\rangle=0,\]
but the double commutator of the interaction term is not equal to zero.
\[\sum_{\alpha,\beta,\alpha^{\prime},\beta^{\prime}}\overline{V}_{\alpha,\beta, \alpha^{\prime},\beta^{\prime}}\left\langle\Phi_{0}\right|\biggl{[}\hat{a}_{ m}^{+}\hat{a}_{i},\Bigl{[}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{ \beta}^{+}\hat{a}_{\beta^{\prime}}\hat{a}_{\alpha^{\prime}}],\hat{a}_{n}^{+} \hat{a}_{j}]\biggr{]}\biggl{|}\Phi_{0}\rangle\neq 0.\]
#### 3.2.2 RPA Equations
The most straightforward way of extending the TDA is to consider the RPA excitation operator, defined as
\[\hat{Q}_{\nu}^{+}\equiv\sum_{p\,h}X_{ph}^{\nu}\hat{a}_{p}^{+}\hat{a}_{h}-\sum _{p\,h}Y_{ph}^{\nu}\hat{a}_{h}^{+}\hat{a}_{p}, \tag{54}\]
where both \(X_{ph}^{\nu}\) and \(Y_{ph}^{\nu}\) are numbers.
The RPA ground state is defined by the equation \(\hat{Q}_{\nu}\left|\nu_{0}\right\rangle=0\). Evidently, \(\left|\nu_{0}\right\rangle\) is not an IPM ground state, i.e., a single Slater determinant. In that case, we would have
\[\hat{Q}_{\nu}\left|\Phi_{0}\right\rangle=\sum_{p\,h}X_{ph}^{\ast\nu}\hat{a}_{h} ^{+}\hat{a}_{p}\left|\Phi_{0}\right\rangle-\sum_{p\,h}Y_{ph}^{\ast\nu}\hat{a}_ {p}^{+}\hat{a}_{h}\left|\Phi_{0}\right\rangle\neq 0.\]
The first term is certainly zero, while the second one is not. The RPA ground state \(\left|\nu_{0}\right\rangle\) is more complex than the IPM ground state and it contains effects beyond it. These effects, generically called correlations, are here described in terms of hole-particle excitations, as we shall discuss in Section 3.2.6.
From the definition (54) of RPA amplitudes, we obtain \(\delta\hat{Q}_{\nu}\) and by inserting it as \(\hat{\Theta}=\delta\hat{Q}_{\nu}\) in the equations of motion (39) we obtain
\[\begin{split}&\sum_{mi}\delta X_{mi}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle-\sum_{mi}\delta Y_{mi}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{m}^{+}\hat{a}_{i},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle\\ &=\left(E_{\nu}-E_{0}\right)\left\{\sum_{mi}\delta X_{mi}^{\nu}\,\langle\nu_{0}|\big{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle-\sum_{mi}\delta Y_{mi}^{\nu}\,\langle\nu_{0}|\big{[}\hat{a}_{m}^{+}\hat{a}_{i},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle\right\}.\end{split}\tag{55}\]
As in the TDA case, the above equation represents a sum of independent terms, since each variation is independent of the other ones. By equating the terms related to the same variation, we obtain the following relations
\[\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle=\left(E_{\nu}-E_{0}\right)\langle\nu_{0}|\big{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle \tag{56}\]
\[\langle\nu_{0}|\Big{[}\hat{a}_{m}^{+}\hat{a}_{i},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle=\left(E_{\nu}-E_{0}\right)\langle\nu_{0}|\big{[}\hat{a}_{m}^{+}\hat{a}_{i},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle. \tag{57}\]
Let us consider the left hand side of Equation (56)
\[\begin{split}\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle&=\sum_{nj}X_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}\big{]}\Big{]}|\nu_{0}\rangle-\sum_{nj}Y_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}\big{]}\Big{]}|\nu_{0}\rangle\\ &\equiv\sum_{nj}X_{nj}^{\nu}A_{minj}+\sum_{nj}Y_{nj}^{\nu}B_{minj}.\end{split}\tag{58}\]
These equations define the elements of the \(A\) and \(B\) matrices.
We calculate the right hand side of Equation (56) by using an approximation known in the literature as _Quasi-Boson-Approximation_ (QBA), consisting in assuming that the expectation value of a commutator between RPA ground states has the same value as the commutator between IPM states \(|\Phi_{0}\rangle\). In the specific case under study, we have that
\[\langle\nu_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q}_{\nu}^{+}\bigg{]}| \nu_{0}\rangle\simeq\langle\Phi_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q }_{\nu}^{+}\bigg{]}|\Phi_{0}\rangle\,. \tag{59}\]
It is worth remarking that the QBA can be applied only for expectation values of commutators. The idea is that pairs of creation and destruction operators follow the rule
\[[\hat{a}_{i}^{+}\hat{a}_{m},\hat{a}_{n}^{+}\hat{a}_{j}]\simeq\delta_{mn}\delta _{ij},\]
which means that the operators \(\hat{\Theta}_{im}\equiv\hat{a}_{i}^{+}\hat{a}_{m}\) and \(\hat{\Theta}_{jn}^{+}\equiv\hat{a}_{n}^{+}\hat{a}_{j}\) behave as boson operators.
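The quasi-boson rule can be tested numerically on a small model space. The sketch below (assuming a toy space with two hole and two particle states, and using a standard Jordan-Wigner matrix construction of the fermion operators) verifies that the expectation value of the commutator on \(|\Phi_{0}\rangle\) reproduces \(\delta_{mn}\delta_{ij}\), even though the pair operators are not exact bosons.

```python
import numpy as np

# Toy space: holes are modes 0,1; particles are modes 2,3.
c = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode destruction operator
Z = np.diag([1.0, -1.0])                 # Jordan-Wigner sign string
I2 = np.eye(2)

def a(k, n_modes=4):
    """Destruction operator for mode k with fermionic anticommutation."""
    factors = [Z] * k + [c] + [I2] * (n_modes - k - 1)
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

vac = np.zeros(2**4)
vac[0] = 1.0
phi0 = a(0).T @ a(1).T @ vac             # |Phi_0>: both hole states occupied

i, j, m, n = 0, 1, 2, 3
comm = a(i).T @ a(m) @ a(n).T @ a(j) - a(n).T @ a(j) @ a(i).T @ a(m)
print(phi0 @ comm @ phi0)                # 0 = delta_mn delta_ij, since m != n

B = a(i).T @ a(m)                        # pair operator with n = m, j = i
print(phi0 @ (B @ B.T - B.T @ B) @ phi0) # 1 = delta_mm delta_ii
```

The first expectation value vanishes and the second equals one, as the boson-like commutation rule predicts.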
By using the QBA, we can write
\[\begin{split}\langle\nu_{0}|\big{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle&\simeq\sum_{nj}X_{nj}^{\nu}\,\langle\Phi_{0}|[\hat{a}_{i}^{+}\hat{a}_{m},\hat{a}_{n}^{+}\hat{a}_{j}]|\Phi_{0}\rangle-\sum_{nj}Y_{nj}^{\nu}\,\langle\Phi_{0}|[\hat{a}_{i}^{+}\hat{a}_{m},\hat{a}_{j}^{+}\hat{a}_{n}]|\Phi_{0}\rangle\\ &=\sum_{nj}X_{nj}^{\nu}\Big{\{}\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\Big{\}}\\ &\quad-\sum_{nj}Y_{nj}^{\nu}\Big{\{}\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{j}^{+}\hat{a}_{n}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\Big{\}}\\ &=\sum_{nj}X_{nj}^{\nu}\,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle=\sum_{nj}X_{nj}^{\nu}\,\delta_{mn}\delta_{ij}=X_{mi}^{\nu},\end{split}\tag{60}\]
where we have taken into account that the terms multiplying \(Y_{nj}^{\nu}\) have vanishing expectation values on \(|\Phi_{0}\rangle\), since they involve the destruction of particles above the Fermi surface on a state where none are present, and, furthermore, that \(\hat{a}_{m}\,|\Phi_{0}\rangle=0\). Equation (56) becomes
\[\sum_{nj}X_{nj}^{\nu}A_{minj}+\sum_{nj}Y_{nj}^{\nu}B_{minj}=(E_{\nu}-E_{0})X_{ mi}^{\nu}. \tag{61}\]
For the calculation of the left hand side of Equation (57), we consider that:
\[[\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}]^{+}=(\hat{H}\hat{a}_{n}^{+}\hat{a}_{j}- \hat{a}_{n}^{+}\hat{a}_{j}\hat{H})^{+}=\hat{a}_{j}^{+}\hat{a}_{n}\hat{H}-\hat {H}\hat{a}_{j}^{+}\hat{a}_{n}=-[\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}], \tag{62}\]
since \(\hat{H}=\hat{H}^{+}\) and then
\[\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}]\bigg{]} ^{+}=-\bigg{[}\hat{a}_{m}^{+}\hat{a}_{i},-[\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}] \bigg{]}=\bigg{[}\hat{a}_{m}^{+}\hat{a}_{i},[\hat{H},\hat{a}_{n}^{+}\hat{a}_{j} ]\bigg{]}. \tag{63}\]
The double commutator becomes
\[\begin{split}\langle\nu_{0}|\Big{[}\hat{a}_{m}^{+}\hat{a}_{i},\big{[}\hat{H},\hat{Q}_{\nu}^{+}\big{]}\Big{]}|\nu_{0}\rangle&=\sum_{nj}X_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{m}^{+}\hat{a}_{i},\big{[}\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}\big{]}\Big{]}|\nu_{0}\rangle-\sum_{nj}Y_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{m}^{+}\hat{a}_{i},\big{[}\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}\big{]}\Big{]}|\nu_{0}\rangle\\ &=\sum_{nj}X_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}\big{]}\Big{]}^{+}|\nu_{0}\rangle-\sum_{nj}Y_{nj}^{\nu}\,\langle\nu_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}\big{]}\Big{]}^{+}|\nu_{0}\rangle\\ &=\sum_{nj}X_{nj}^{\nu}(-B_{minj}^{*})+\sum_{nj}Y_{nj}^{\nu}(-A_{minj}^{*}),\end{split}\tag{64}\]
where we considered the definitions of the matrix elements \(A\) and \(B\) in Equation (58).
For the calculation of the right hand side of Equation (57) by using the QBA, we have
\[\langle\nu_{0}|\big{[}\hat{a}_{m}^{+}\hat{a}_{i},\hat{Q}_{\nu}^{+}\big{]}|\nu_{0}\rangle\rightarrow\ (\mbox{QBA})\rightarrow-\sum_{nj}Y_{nj}^{\nu}\,\langle\Phi_{0}|\big{[}\hat{a}_{m}^{+}\hat{a}_{i},\hat{a}_{j}^{+}\hat{a}_{n}\big{]}|\Phi_{0}\rangle=\sum_{nj}Y_{nj}^{\nu}\,\delta_{ij}\delta_{mn}=Y_{mi}^{\nu}; \tag{65}\]
therefore, Equation (57) becomes
\[\sum_{nj}X_{nj}^{\nu}(-B_{minj}^{*})+\sum_{nj}Y_{nj}^{\nu}(-A_{minj}^{*})=(E_{\nu}-E_{0})Y_{mi}^{\nu}. \tag{66}\]
Equations (61) and (66) represent a homogeneous system of linear equations whose unknowns are the RPA amplitudes \(X_{ph}^{\nu}\) and \(Y_{ph}^{\nu}\). Usually, this system is presented as
\[\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=(E_{\nu}-E_{0})\left(\begin{array}{cc}I&0\\ 0&-I\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=(E_{\nu}-E_{0})\left(\begin{array}{c}X^{\nu}\\ -Y^{\nu}\end{array}\right), \tag{67}\]
where \(A\) and \(B\) are square matrices whose dimensions are those of the number of the particle-hole pairs describing the excitation, and \(X\) and \(Y\) are vectors of the same dimensions.
The expressions of the matrix elements of \(A\) and \(B\) in terms of effective interaction between two interacting particles are obtained as in Appendix B and they are:
\[A_{minj}\rightarrow(\mbox{QBA})\rightarrow\ \langle\Phi_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}\big{]}\Big{]}|\Phi_{0}\rangle=(\epsilon_{m}-\epsilon_{i})\delta_{mn}\delta_{ij}+\overline{V}_{mjin}, \tag{68}\]
\[B_{minj}\rightarrow(\mbox{QBA})\rightarrow-\langle\Phi_{0}|\Big{[}\hat{a}_{i}^{+}\hat{a}_{m},\big{[}\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}\big{]}\Big{]}|\Phi_{0}\rangle=\overline{V}_{mnij}. \tag{69}\]
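The eigenvalue problem (67) can be solved with standard linear algebra once the metric matrix is moved to the left hand side. A sketch analogous to the TDA one, with schematic real \(A\) and \(B\) matrices (all numbers illustrative):

```python
import numpy as np

# Toy RPA problem, Equation (67), on the same two ph pairs used above.
A = np.diag([3.0, 4.0]) + 0.3 * np.ones((2, 2))   # Equation (68)
B = 0.3 * np.ones((2, 2))                         # Equation (69)

# (A B; -B* -A*) (X;Y) = omega (X;Y): Equation (67) with the metric
# matrix brought to the left hand side.
M = np.block([[A, B], [-B.conj(), -A.conj()]])
w, v = np.linalg.eig(M)

order = np.argsort(w.real)
for k in order[len(order)//2:]:                   # positive eigenvalues
    X, Y = v[:2, k].real, v[2:, k].real
    norm = X @ X - Y @ Y                          # RPA metric, cf. Eq. (71)
    X, Y = X / np.sqrt(norm), Y / np.sqrt(norm)
    print(f"omega = {w[k].real:.3f}  X = {X}  Y = {Y}")
```

One can check on the output that the eigenvalues come in \(\pm\omega_{\nu}\) pairs, as discussed in the next subsection, and that setting \(B=0\) recovers the TDA energies.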
#### 3.2.3 Properties of RPA Equations
We consider RPA equations in the form
\[\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=\omega_{\nu}\left(\begin{array}{c}X^{\nu}\\ -Y^{\nu}\end{array}\right),\]
where \(\omega_{\nu}=E_{\nu}-E_{0}\) is the excitation energy.
* If \(B=0\), we obtain the TDA equations.
* We take the complex conjugate of the above equations and obtain \[\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}Y^{*\nu}\\ X^{*\nu}\end{array}\right)=-\omega_{\nu}\left(\begin{array}{c}Y^{*\nu}\\ -X^{*\nu}\end{array}\right).\] (70) This indicates that the RPA equations are satisfied by positive and negative eigenvalues with the same absolute value.
* Eigenvectors corresponding to different eigenvalues are orthogonal. \[\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=\omega_{\nu}\left(\begin{array}{c}X^{\nu}\\ -Y^{\nu}\end{array}\right)\ ;\ \ \left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\mu}\\ Y^{\mu}\end{array}\right)=\omega_{\mu}\left(\begin{array}{c}X^{\mu}\\ -Y^{\mu}\end{array}\right).\] Let us calculate the hermitian conjugate of the second equation \[(X^{\mu+},Y^{\mu+})\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)=\omega_{\mu}\,(X^{\mu+},-Y^{\mu+}).\] We multiply the first equation by \((X^{\mu+},Y^{\mu+})\) on the left hand side, and the second equation on the right hand side by \[\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right),\] and we obtain \[(X^{\mu+},Y^{\mu+})\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=\omega_{\nu}\,(X^{\mu+},Y^{\mu+})\left(\begin{array}{c}X^{\nu}\\ -Y^{\nu}\end{array}\right)\] \[(X^{\mu+},Y^{\mu+})\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right)=\omega_{\mu}\,(X^{\mu+},-Y^{\mu+})\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right).\] By subtracting the two equations, we have \[0=(\omega_{\nu}-\omega_{\mu})(X^{\mu+}X^{\nu}-Y^{\mu+}Y^{\nu}).\] Since we assumed \(\omega_{\nu}\neq\omega_{\mu}\), we obtain \[(X^{\mu+}X^{\nu}-Y^{\mu+}Y^{\nu})=0.\]
* The normalization between two excited states requires \[\begin{split}\delta_{\nu\nu^{\prime}}&=\langle\nu|\nu^{\prime}\rangle=\langle\nu_{0}|\hat{Q}_{\nu}\hat{Q}^{+}_{\nu^{\prime}}|\nu_{0}\rangle=\langle\nu_{0}|[\hat{Q}_{\nu},\hat{Q}^{+}_{\nu^{\prime}}]|\nu_{0}\rangle\rightarrow\ \mbox{QBA}\rightarrow\langle\Phi_{0}|[\hat{Q}_{\nu},\hat{Q}^{+}_{\nu^{\prime}}]|\Phi_{0}\rangle\\ &=\sum_{mi}\left(X^{*\nu}_{mi}X^{\nu^{\prime}}_{mi}-Y^{*\nu}_{mi}Y^{\nu^{\prime}}_{mi}\right),\end{split}\tag{71}\] where we used the fact that \(\hat{Q}_{\nu}\,|\nu_{0}\rangle=0\) to express the operator product as a commutator in order to use the QBA.
#### 3.2.4 Transition Probabilities in RPA
In analogy with the TDA case, we assume that the action of the external field exciting the system is described by a one-body operator expressed as in Equation (50). The transition probability between RPA ground state and excited state is described by
\[\langle\nu|\hat{F}|\nu_{0}\rangle=\langle\nu_{0}|\hat{Q}_{\nu}\hat{F}|\nu_{0} \rangle=\langle\nu_{0}|[\hat{Q}_{\nu},\hat{F}]|\nu_{0}\rangle\,, \tag{72}\]
where we used, again, the fact that \(\hat{Q}_{\nu}\,|\nu_{0}\rangle=0\). Since the equation is expressed in terms of a commutator, we can use the QBA
\[\langle\nu|\hat{F}|\nu_{0}\rangle\rightarrow\ \mbox{QBA} \rightarrow\langle\Phi_{0}|[\hat{Q}_{\nu},\hat{F}]|\Phi_{0}\rangle \tag{73}\] \[= \sum_{\mu\mu^{\prime}}f_{\mu\mu^{\prime}}\bigg{\{}\sum_{mi}X^{ \nu}_{mi}\,\langle\Phi_{0}|[\hat{a}^{+}_{i}\hat{a}_{m},\hat{a}^{+}_{\mu}\hat{a} _{\mu^{\prime}}]|\Phi_{0}\rangle-\sum_{mi}Y^{\nu}_{mi}\,\langle\Phi_{0}|[\hat {a}^{+}_{m}\hat{a}_{i},\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}]|\Phi_{0}\rangle \,\bigg{\}}.\]
The two matrix elements are
\[\begin{split}\langle\Phi_{0}|[\hat{a}_{i}^{+}\hat{a}_{m},\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}]|\Phi_{0}\rangle&=\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle=\delta_{m\mu}\delta_{i\mu^{\prime}}-0,\\ \langle\Phi_{0}|[\hat{a}_{m}^{+}\hat{a}_{i},\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}]|\Phi_{0}\rangle&=\langle\Phi_{0}|\hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{a}_{\mu}^{+}\hat{a}_{\mu^{\prime}}\hat{a}_{m}^{+}\hat{a}_{i}|\Phi_{0}\rangle=0-\delta_{m\mu^{\prime}}\delta_{i\mu};\end{split}\]
therefore,
\[\langle\nu|\hat{F}|\nu_{0}\rangle\simeq\sum_{\mu\mu^{\prime}}f_{\mu\mu^{ \prime}}\left(\sum_{mi}X_{mi}^{\nu}\delta_{m\mu}\delta_{i\mu^{\prime}}+\sum_{ mi}Y_{mi}^{\nu}\delta_{m\mu^{\prime}}\delta_{i\mu}\right)=\sum_{mi}\left(X_{mi}^{ \nu}f_{mi}+Y_{mi}^{\nu}f_{im}\right). \tag{74}\]
Also in RPA, the transition amplitude of a many-body system is expressed as a linear combination of single-particle transitions.
#### 3.2.5 Sum Rules
We show in Appendix C that, in general, by indicating with \(|\Psi_{\nu}\rangle\) the eigenstates of the hamiltonian \(\hat{H}\)
\[\hat{H}\,|\Psi_{\nu}\rangle=E_{\nu}\,|\Psi_{\nu}\rangle\,,\]
for an external operator \(\hat{F}\) inducing a transition of the system from the ground state to the excited state one has that:
\[2\sum_{\nu}(E_{\nu}-E_{0})\left|\langle\Psi_{\nu}|\hat{F}|\Psi_{0}\rangle \right|^{2}=\langle\Psi_{0}|\bigg{[}\hat{F},\Big{[}\hat{H},\hat{F}\Big{]} \bigg{]}|\Psi_{0}\rangle\,. \tag{75}\]
This expression puts a quantitative limit on the total value of the excitation strength of a many-body system. This value is determined only by the ground state properties, and the knowledge of the excited states structure is not required. The validity of Equation (75) is related to the fact that the \(|\Psi_{\nu}\rangle\) are eigenstates of \(\hat{H}\). In actual calculations, states based on models or approximate solutions of the Schrodinger equation are used, and Equation (75) is not properly satisfied.
On the other hand, for RPA theory, it has been shown [23] that the following relation holds
\[2\sum_{\nu}(E_{\nu}-E_{0})\left|\langle\nu|\hat{F}|\nu_{0}\rangle\right|^{2}= \langle\Phi_{0}|\bigg{[}\hat{F},\Big{[}\hat{H},\hat{F}\Big{]}\bigg{]}|\Phi_{0 }\rangle\,. \tag{76}\]
The above expression, formally speaking, is not a true sum rule since in the left hand side there are RPA states, both ground and excited states, while in the right hand side there is an IPM ground state. These two types of states are not eigenstates of the same hamiltonian. When the residual interaction is neglected, one obtains mean-field excited states \(|\Phi_{ph}\rangle\), i.e., single Slater determinants with a single particle-hole excitation. In this case, Equation (75) is verified since all these mean-field states are eigenstates of the unperturbed hamiltonian \(\hat{H}_{0}\)
\[2\sum_{ph}(\epsilon_{p}-\epsilon_{h})\left|\langle\Phi_{ph}|\hat{F}|\Phi_{0} \rangle\right|^{2}=\langle\Phi_{0}|\bigg{[}\hat{F},\Big{[}\hat{H}_{0},\hat{F} \Big{]}\bigg{]}|\Phi_{0}\rangle\,, \tag{77}\]
where the excitation energies of the full system are given by the difference between the single-particle energies of the particle-hole excitation.
Since in RPA the full hamiltonian \(\hat{H}=\hat{H}_{0}+\hat{V}_{\rm res}\) is considered, by inserting this expression in Equation (76) we obtain
\[2\sum_{\nu}(E_{\nu}-E_{0})\left|\langle\nu|\hat{F}|\nu_{0}\rangle\right|^{2}= \langle\Phi_{0}|\bigg{[}\hat{F},\Big{[}\hat{H}_{0},\hat{F}\Big{]}\bigg{]}|\Phi _{0}\rangle+\langle\Phi_{0}|\bigg{[}\hat{F},\Big{[}\hat{V}_{\rm res},\hat{F} \Big{]}\bigg{]}|\Phi_{0}\rangle\,. \tag{78}\]
For operators \(\hat{F}\) which commute with \(\hat{V}_{\rm res}\), the IPM and RPA sum rules coincide.
#### 3.2.6 RPA Ground State
We have already indicated that the RPA ground state is not an IPM state but it contains effects beyond it, correlations, expressed in terms of hole-particle excitations. A more precise representation of the RPA ground state comes from a theorem demonstrated by D. J. Thouless [23], leading to an expression of the RPA ground state of the type [13]:
\[|\nu_{0}\rangle=\mathcal{N}\,e^{\hat{S}}\,|\Phi_{0}\rangle\,, \tag{79}\]
where \(\mathcal{N}\) is a normalization constant and the operator \(\hat{S}\) is defined as
\[\hat{S}\equiv\frac{1}{2}\sum_{\nu,minj}s^{(\nu)}_{minj}\hat{a}^{+}_{m}\hat{a}_{i }\hat{a}^{+}_{j}\hat{a}_{n}. \tag{80}\]
The sum considers all the particle-hole, \(\hat{a}^{+}_{m}\hat{a}_{i}\), and hole-particle, \(\hat{a}^{+}_{j}\hat{a}_{n}\), pairs and the index \(\nu\) runs over all the possible angular momentum and parity combinations allowed by the particle-hole and hole-particle quantum numbers. We indicated with \(s^{(\nu)}_{minj}\) an amplitude weighting the contribution of each couple of \(ph\) pairs.
Starting from the above expression, it is possible to calculate the \(s^{(\nu)}_{minj}\) from the knowledge of RPA \(X^{\nu}_{ph}\) and \(Y^{\nu}_{ph}\) amplitudes [14]. By using these expressions, the expectation value of a one-body operator with respect to the RPA ground state can be expressed as [24, 25]
\[\begin{split}\langle\nu_{0}|\hat{F}|\nu_{0}\rangle&=\langle\nu_{0}|\sum_{\mu\mu^{\prime}}f_{\mu\mu^{\prime}}\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}|\nu_{0}\rangle\\ &=\sum_{h}f_{hh}\left[1-\frac{1}{2}\sum_{\nu}\sum_{p}|Y^{\nu}_{ph}|^{2}\right]+\sum_{p}f_{pp}\left[\frac{1}{2}\sum_{\nu}\sum_{h}|Y^{\nu}_{ph}|^{2}\right].\end{split}\tag{81}\]
This clearly shows that the \(Y^{\nu}_{ph}\) amplitudes modify the expectation value of the operator with respect to the IPM result. In TDA, the ground state is \(|\Phi_{0}\rangle\) and the \(Y\) amplitudes are zero; therefore, the expectation value of \(\hat{F}\) is given by the sum of the s.p. expectation values of the states below the Fermi energy, as in the pure IPM. The TDA theory does not contain ground state correlations as indicated in Equation (41).
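Equation (81) is straightforward to evaluate once the \(Y^{\nu}_{ph}\) amplitudes are known. A minimal sketch with toy amplitudes for a single \(\nu\) channel (whereas the equation sums over all of them):

```python
import numpy as np

# Depletion of the IPM occupation numbers, Equation (81), from toy
# RPA Y amplitudes for two hole and two particle states.
Y = {("p1", "h1"): 0.10, ("p1", "h2"): 0.05,
     ("p2", "h1"): 0.08, ("p2", "h2"): 0.04}

n_hole = {h: 1.0 - 0.5 * sum(y**2 for (p, hh), y in Y.items() if hh == h)
          for h in ("h1", "h2")}
n_part = {p: 0.5 * sum(y**2 for (pp, h), y in Y.items() if pp == p)
          for p in ("p1", "p2")}
print("hole occupations    :", n_hole)
print("particle occupations:", n_part)
```

The hole occupations drop slightly below one and the particle occupations rise slightly above zero, the hallmark of ground state correlations absent in the IPM and in TDA.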
## 4 RPA with Green Function
### Field Operators and Pictures
In this section, we use the field operator \(\hat{\psi}^{+}({\bf r})\), which creates a particle at the point \({\bf r}\). The hermitian conjugate operator \(\hat{\psi}({\bf r})\) destroys a particle at \({\bf r}\). These two operators are related to the creation and destruction operators via the s.p. wave functions \(\phi_{\nu}({\bf r})\) generated by the solution of the IPM problem:
\[\hat{\psi}({\bf r})=\sum_{\nu}\hat{a}_{\nu}\phi_{\nu}({\bf r})\;\;\;;\;\;\; \hat{\psi}^{+}({\bf r})=\sum_{\nu}\hat{a}^{+}_{\nu}\phi^{*}_{\nu}({\bf r}). \tag{82}\]
These equations can be inverted to express the creation and destruction operators in terms of the field operators
\[\hat{a}_{\nu}=\int d^{3}r\,\phi^{*}_{\nu}({\bf r})\hat{\psi}({\bf r})\;\;\;;\;\;\;\hat{a}^{+}_{\nu}=\int d^{3}r\,\phi_{\nu}({\bf r})\hat{\psi}^{+}({\bf r}), \tag{83}\]
where we exploited the orthonormality of the \(\phi_{\nu}\). By using the anti-commutation relations of the creation and destruction operators, see Equation (294), we obtain analogous relations for the field operators:
\[\left\{\hat{\psi}^{+}({\bf r}),\hat{\psi}^{+}({\bf r}^{\prime})\right\}=0\;\; \;;\;\;\;\;\left\{\hat{\psi}({\bf r}),\hat{\psi}({\bf r}^{\prime})\right\}=0 \;\;\;;\;\;\;\;\left\{\hat{\psi}^{+}({\bf r}),\hat{\psi}({\bf r}^{\prime}) \right\}=\delta({\bf r}-{\bf r}^{\prime}). \tag{84}\]
In the Heisenberg picture [16, 22], the states are defined as
\[|\Psi_{\rm H}(t)\rangle\equiv e^{i\frac{\hat{H}t}{\hbar}}|\Psi_{\rm S}(t)\rangle, \tag{85}\]
with respect to those of the Schrodinger picture \(|\Psi_{\rm S}(t)\rangle\). In the above equations, \(\hat{H}\) is the full many-body hamiltonian. The states in the Heisenberg picture are time-independent and the time evolution of the system is described by the operators whose relation with the time-independent operators of the Schrodinger picture is:
\[\hat{O}_{\rm H}\equiv e^{i\frac{\hat{H}t}{\hbar}}\hat{O}_{S}e^{-i\frac{\hat{H} t}{\hbar}}, \tag{86}\]
satisfying the equation:
\[i\hbar\frac{\partial}{\partial t}\hat{O}_{\rm H}(t)=[\hat{O}_{\rm H}(t),\hat{H}]\,. \tag{87}\]
By separating the hamiltonian in the Schrodinger picture into two parts
\[\hat{H}=\hat{H}_{0}+\hat{H}_{1}, \tag{88}\]
it is possible to define an interaction picture whose states are defined as
\[|\Psi_{\rm I}(t)\rangle\equiv e^{i\frac{\hat{H}_{0}t}{\hbar}}|\Psi_{\rm S}(t)\rangle, \tag{89}\]
and the operators
\[\hat{O}_{\rm I}(t)=e^{i\frac{\hat{H}_{0}t}{\hbar}}\hat{O}_{\rm S}e^{-i\frac{ \hat{H}_{0}t}{\hbar}}. \tag{90}\]
In the interaction picture [22], both states and operators evolve with the time as indicated by the equations
\[i\hbar\frac{\partial}{\partial t}|\Psi_{\rm I}(t)\rangle=\hat{H}_{1,{\rm I}}(t)|\Psi_{\rm I}(t)\rangle, \tag{91}\]
and
\[i\hbar\frac{\partial}{\partial t}\hat{O}_{\rm I}(t)=\left[\hat{O}_{\rm I}(t), \hat{H}_{0}\right]. \tag{92}\]
The fermionic field operators in the Heisenberg and interaction picture are, respectively, defined as:
\[\hat{\psi}_{\rm H}({\bf x},t)=e^{\frac{i}{\hbar}\hat{H}t}\hat{\psi}({\bf x})e ^{-\frac{i}{\hbar}\hat{H}t}\quad;\quad\hat{\psi}_{\rm I}({\bf x},t)=e^{\frac{i }{\hbar}\hat{H}_{0}t}\hat{\psi}({\bf x})e^{-\frac{i}{\hbar}\hat{H}_{0}t} \tag{93}\]
It can be shown that the anti-commutation relations (84) of the field operators, as well as those of the creation and destruction operators, see Equations (294), are conserved in every representation [22].
### Two-Body Green Function and RPA
The two-body Green function is defined as
\[(-i)^{2}{\rm G}({\bf x}_{1},t_{1},{\bf x}_{2},t_{2},{\bf x}_{3},t_{3},{\bf x} _{4},t_{4})\equiv\frac{\langle\Psi_{0}|\hat{\mathbb{T}}[\hat{\psi}_{\rm H}({ \bf x}_{1},t_{1})\hat{\psi}_{\rm H}({\bf x}_{2},t_{2})\hat{\psi}_{\rm H}^{+}({ \bf x}_{3},t_{3})\hat{\psi}_{\rm H}^{+}({\bf x}_{4},t_{4})]|\Psi_{0}\rangle}{ \langle\Psi_{0}|\Psi_{0}\rangle}\;\;. \tag{94}\]
In the above expression, \(|\Psi_{0}\rangle\) indicates the ground state of the system in Heisenberg representation and \(\hat{\mathbb{T}}\) is the time-ordering operator which arranges the field operators in decreasing time order. Because of the possible values that the times \(t_{i}\) can assume, there are \(4!=24\) cases, but, for the symmetry properties
\[{\rm G}(1,2,3,4)=-{\rm G}(2,1,3,4)=-{\rm G}(1,2,4,3)={\rm G}(2,1,4,3)\;\;, \tag{95}\]
only six of them are independent. Out of these six cases, only three have physically interesting properties and, among these latter cases, we select the one where \(t_{1},t_{3}>t_{2},t_{4}\), which implies
\[(-i)^{2}{\rm G}({\bf x}_{1},t_{1},{\bf x}_{2},t_{2},{\bf x}_{3},t_{3},{\bf x} _{4},t_{4})=\frac{-\langle\Psi_{0}|\hat{\psi}_{\rm H}({\bf x}_{1},t_{1})\hat{ \psi}_{\rm H}^{+}({\bf x}_{3},t_{3})\hat{\psi}_{\rm H}({\bf x}_{2},t_{2})\hat{ \psi}_{\rm H}^{+}({\bf x}_{4},t_{4})|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0 }\rangle}\;\;, \tag{96}\]
and describes the time evolution of a \(ph\) pair.
Since we work in a non-relativistic framework, the creation, and also the destruction, of a particle-hole pair is instantaneous; therefore, we have that
\[t_{1}=t_{3}=t^{\prime}\quad{\rm and}\quad t_{2}=t_{4}=t\;\;. \tag{97}\]
For this case, we express the two-body Green function in terms of creation and destruction operators as
\[{\rm G}({\bf x}_{1},t^{\prime},{\bf x}_{2},t,{\bf x}_{3},t^{ \prime},{\bf x}_{4},t) \tag{98}\] \[= \sum_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}\frac{\phi_{\nu_{1}}({\bf x}_ {1})\phi_{\nu_{3}}^{*}({\bf x}_{3})\phi_{\nu_{2}}({\bf x}_{2})\phi_{\nu_{4}}^{* }({\bf x}_{4})\langle\Psi_{0}|\hat{\mathbb{T}}[\hat{a}_{\nu_{1}}(t^{\prime}) \hat{a}_{\nu_{3}}^{+}(t^{\prime})\hat{a}_{\nu_{2}}(t)\hat{a}_{\nu_{4}}^{+}(t)] |\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}\] \[= \sum_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}\phi_{\nu_{1}}({\bf x}_{1}) \phi_{\nu_{2}}({\bf x}_{2})\phi_{\nu_{3}}^{*}({\bf x}_{3})\phi_{\nu_{4}}^{*}({ \bf x}_{4})G(\nu_{1},t^{\prime},\nu_{2},t,\nu_{3},t^{\prime},\nu_{4},t)\;\;.\]
where it is understood that all the creation and destruction operators are expressed in the Heisenberg picture. The previous equation defines a two-body Green function depending on the quantum numbers \(\nu\) characterizing the single-particle states.
Since this Green function depends only on the time difference \(\tau=t^{\prime}-t\), we find it convenient to define the energy dependent two-body Green function as
\[\tilde{G}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)=\int_{-\infty}^{\infty}d\tau\,G(\nu_{1},\nu_{2},\nu_{3},\nu_{4},\tau)\,e^{\frac{i}{\hbar}E\tau}\,. \tag{99}\]
For the case \(\tau>0\), by considering the expression of the creation and destruction operators in the Heisenberg picture, see Equation (86), and the fact that \(|\Psi_{0}\rangle\) is eigenstate of \(\hat{H}\) whose eigenvalue is \(E_{0}\), we obtain the expression
\[\tilde{G}_{\tau>0}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)=\frac{\langle\Psi_{0}| \hat{a}_{\nu_{1}}\hat{a}_{\nu_{3}}^{+}\int_{0}^{\infty}d\tau\,e^{-\frac{i}{ \hbar}(\hat{H}-E_{0}-E)\tau}\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4}}^{+}|\Psi_{0} \rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}\,\,\,. \tag{100}\]
We can express the value of the time integral as
\[\lim_{\eta\to 0}\int_{0}^{\infty}d\tau\,e^{-\frac{i}{\hbar}(-\hat{H}+E_{0}+E-i \eta)\tau}=\lim_{\eta\to 0}\frac{-i\hbar}{E-\hat{H}+E_{0}-i\eta}\,\,\,; \tag{101}\]
therefore,
\[\tilde{G}_{\tau>0}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)=\hbar\langle\Psi_{0}|\hat{a}_{\nu_{1}}\hat{a}_{\nu_{3}}^{+}\,\frac{-i}{E-\hat{H}+E_{0}-i\eta}\,\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4}}^{+}|\Psi_{0}\rangle\frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\,\,\,, \tag{102}\]

where the \(-i\eta\) prescription follows from Equation (101) and agrees with the \(\tau>0\) term of Equation (103).
With an analogous calculation for the \(\tau<0\) case, we obtain
\[\frac{i}{\hbar}\tilde{G}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E) = \frac{i}{\hbar}(\tilde{G}_{\tau>0}+\tilde{G}_{\tau<0}) \tag{103}\] \[= \langle\Psi_{0}|\hat{a}_{\nu_{1}}\hat{a}_{\nu_{3}}^{+}\,\frac{1}{ E-\hat{H}+E_{0}-i\eta}\,\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4}}^{+}|\Psi_{0}\rangle \frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\] \[- \langle\Psi_{0}|\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4}}^{+}\,\frac{1}{ E+\hat{H}-E_{0}+i\eta}\,\hat{a}_{\nu_{1}}\hat{a}_{\nu_{3}}^{+}|\Psi_{0}\rangle \frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\,\,\,\,.\]
By inserting the completeness of the eigenfunctions of \(\hat{H}\), \(\sum_{n}|\Psi_{n}\rangle\langle\Psi_{n}|=1\) and considering \(\hat{H}|\Psi_{n}\rangle=E_{n}|\Psi_{n}\rangle\), we obtain the expression
\[\frac{i}{\hbar}\tilde{G}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)=\frac {1}{\langle\Psi_{0}|\Psi_{0}\rangle}\] \[\sum_{n}\left[\frac{\langle\Psi_{0}|\hat{a}_{\nu_{1}}\hat{a}_{ \nu_{3}}^{+}|\Psi_{n}\rangle\langle\Psi_{n}|\,\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4 }}^{+}|\Psi_{0}\rangle}{E-(E_{n}-E_{0})-i\eta}-\frac{\langle\Psi_{0}|\hat{a}_{ \nu_{2}}\hat{a}_{\nu_{4}}^{+}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{a}_{\nu_{1}} \hat{a}_{\nu_{3}}^{+}|\Psi_{0}\rangle}{E+(E_{n}-E_{0})+i\eta}\right]\,\,\,. \tag{104}\]
In this expression, the states \(|\Psi_{n}\rangle\) have the same number of particles as the ground state. The energy values related to the poles, \(E=E_{n}-E_{0}\), represent the excitation energies of the \(A\) particle system.
The unperturbed two-body Green function is obtained by substituting in Equation (104) the eigenstates \(|\Psi\rangle\) of the full hamiltonian with \(|\Phi\rangle\), the eigenstates of the IPM hamiltonian \(\hat{H}_{0}\). In this case, the action of the creation and destruction operators is well defined and the energy eigenvalues are given by the s.p. energies of the \(ph\) pair. Because of the properties of the creation and destruction operators we have that
\[\tilde{G}^{0}(m,i,j,n,E)=\hbar\frac{\delta_{ij}\delta_{mn}}{\epsilon_{m}- \epsilon_{i}-E-i\eta}\,\,\,\,\,,\,\,\,\,\,\,\tilde{G}^{0}(i,m,n,j,E)=\hbar \frac{\delta_{ij}\delta_{mn}}{\epsilon_{m}-\epsilon_{i}+E-i\eta}, \tag{105}\]
and
\[\tilde{G}^{0}(m,i,n,j,E)=\tilde{G}^{0}(i,m,j,n,E)=0. \tag{106}\]
We show in Appendix D that the two-body Green function is strictly related to the response of the system to an external probe. By using \(\hat{F}\), the one-body operator of Equation (50) describing the action of the external probe, we can write the transition amplitude from the ground state to an excited state as
\[S(E) = \sum_{n}|\langle\Psi_{0}|\hat{F}|\Psi_{n}\rangle|^{2}\delta\Big{(} E-(E_{n}-E_{0})\Big{)} \tag{107}\] \[= \sum_{\nu_{1}\nu_{2}}\sum_{\nu_{3}\nu_{4}}f_{\nu_{1}\nu_{2}}f_{\nu _{3}\nu_{4}}^{*}\frac{\Im}{\pi}\left(i\hbar\tilde{G}(\nu_{1},\nu_{3},\nu_{2}, \nu_{4},E)\right)\,\,\,,\]
where \(f_{\nu_{1}\nu_{2}}\) indicates the matrix element between s.p. wave functions.
The expression (107) of the transition amplitude separates the action of the external probe from the many-body response which is described by the two-body Green function. This latter quantity is related to the interaction between the particles composing the system and it is a universal function independent of the kind of probe perturbing the system. The knowledge of \(S(E)\) allows a direct comparison with observable quantities such as scattering cross sections.
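To make the content of Equation (107) concrete, the short Python sketch below builds \(S(E)\) by keeping \(\eta\) finite, so that each delta function becomes the Lorentzian generated by the \(\pm i\eta\) prescription of Equation (104). All pole positions \(\omega_{n}\) and matrix elements \(f_{n}\) here are invented toy numbers, not results of any real calculation.

```python
import numpy as np

# Toy excitation energies E_n - E_0 and matrix elements <Psi_0|F|Psi_n>
# (invented numbers, purely for illustration).
omega_n = np.array([2.6, 5.3, 10.1])   # MeV
f_n     = np.array([0.8, 0.3, 1.2])

def strength(E, eta=0.1):
    """S(E) with the delta functions smoothed into Lorentzians:
    (1/pi) * Im sum_n |f_n|^2 / (E - omega_n - i*eta)."""
    return np.imag(np.sum(np.abs(f_n)**2 / (E - omega_n - 1j*eta))) / np.pi

E_grid = np.linspace(0.0, 12.0, 600)
S = np.array([strength(E) for E in E_grid])
# Each peak sits at an excitation energy omega_n, and its integral tends
# to |f_n|^2 as eta -> 0; the highest peak belongs to the largest |f_n|.
print(E_grid[np.argmax(S)])   # close to 10.1
```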
In time-dependent perturbation theory, the Gell-Mann and Low theorem indicates that the eigenvector \(|\Psi_{0}\rangle\) of the full hamiltonian can be written as [22]:
\[\frac{\hat{U}(0,-\infty)|\Phi_{0}\rangle}{\langle\Phi_{0}|\hat{U}(0,-\infty)| \Phi_{0}\rangle}\equiv\frac{|\Psi_{0}\rangle}{\langle\Phi_{0}|\Psi_{0}\rangle}\;\;, \tag{108}\]
where the time evolution operator \(\hat{U}\) can be expanded in powers of the interaction \(\hat{H}_{1}\), written in the interaction picture,
\[\hat{U}(t,t_{0})=\sum_{n=0}^{\infty}\left(\frac{-i}{\hbar}\right)^{n}\frac{1}{ n!}\int_{t_{0}}^{t}dt_{1}\ldots\int_{t_{0}}^{t}dt_{n}\hat{\mathbb{T}}\left[ \hat{H}_{1}(t_{1})\ldots\hat{H}_{1}(t_{n})\right]. \tag{109}\]
In the above equation, we dropped the subscript I to simplify the writing.
We insert Equation (108) into the expression (94) of the two-body Green function and we obtain a perturbative expression of the full interacting Green function in powers of \(\hat{H}_{1}\) and of the unperturbed two-body Green function \({\rm G}^{0}\).
It is useful to consider a graphical representation of the Green function, as indicated in Figure 2. The full two-body Green function is indicated by two continuous thick lines. The upward arrows stand for the particle state and the downward arrows for the hole state. The continuous thin lines indicate the unperturbed Green function \({\rm G}^{0}\). In the figure, we consider only two-body interactions, i.e., \(\hat{H}_{1}=\hat{\cal U}({\rm x}_{1},{\rm x}_{2})\) which is represented by a dashed line, with x indicating both space and time coordinates.
Figure 2 shows some of the terms we obtain by carrying out the perturbation expansion of the two-body Green function. The explicit expressions of the various terms are presented in Appendix E. We observe that there are diagrams which, by cutting particle and hole lines, can be separated into two diagrams already present in the expansion. This is the case, for example, of the diagram E of the figure, which is composed of two diagrams of C type, and of the diagram G, which is given by the product of a diagram of C type and another one of F type. The contribution of these diagrams can be factorized into a term containing the four coordinates \({\rm x}_{1}\cdots{\rm x}_{4}\) of the full Green function times another term which does not contain them. The sum of all the diagrams of this second type is identical to the sum of all the diagrams of the denominator; therefore, these two contributions cancel each other. Finally, the calculation of G can be carried out by considering only the remaining diagrams of the numerator, which are called irreducible.
Formally, this can be expressed with an equation similar to the Dyson's equation for the one-body Green function [22]:
\[{\rm G}({\rm x}_{1},{\rm x}_{2},{\rm x}_{3},{\rm x}_{4})={\rm G}^{0}({\rm x}_{1},{\rm x}_{2},{\rm x}_{3},{\rm x}_{4}) \tag{110}\] \[+ \int d^{4}{\rm y}_{1}\,d^{4}{\rm y}_{2}\,d^{4}{\rm y}_{3}\,d^{4}{\rm y}_{4}\,{\rm G}^{0}({\rm x}_{1},{\rm x}_{2},{\rm y}_{1},{\rm y}_{2})\hat{\cal R}({\rm y}_{1},{\rm y}_{2},{\rm y}_{3},{\rm y}_{4}){\rm G}({\rm y}_{3},{\rm y}_{4},{\rm x}_{3},{\rm x}_{4})\;\;.\]
A graphical representation of Equation (110) is shown in Figure 3. The dashed area indicates the kernel \(\hat{\cal R}\) containing all the irreducible diagrams which can be inserted between the four \(y\) points.
RPA consists in considering, in the previous equation, instead of the full kernel \(\hat{\cal R}\), a single interaction \(\hat{\cal U}\) depending only on two coordinates
\[\hat{\cal R}^{\rm RPA}({\rm y}_{1},{\rm y}_{2},{\rm y}_{3},{\rm y}_{4})=\hat{ \cal U}({\rm y}_{1},{\rm y}_{4})\left[\delta({\rm y}_{1}-{\rm y}_{2})\delta({ \rm y}_{3}-{\rm y}_{4})-\delta({\rm y}_{1}-{\rm y}_{3})\delta({\rm y}_{2}-{ \rm y}_{4})\right]\;\;; \tag{111}\]
therefore,
\[{\rm G}^{\rm RPA}({\rm x}_{1},{\rm x}_{2},{\rm x}_{3},{\rm x}_{4})={ \rm G}^{0}({\rm x}_{1},{\rm x}_{2},{\rm x}_{3},{\rm x}_{4}) \tag{112}\] \[+ \int d^{4}\,{\rm y}_{1}\,d^{4}\,{\rm y}_{2}\,{\rm G}^{0}({\rm x}_ {1},{\rm x}_{2},{\rm y}_{1},{\rm y}_{1})\hat{\cal U}({\rm y}_{1},{\rm y}_{2}){ \rm G}^{\rm RPA}({\rm y}_{2},{\rm y}_{2},{\rm x}_{3},{\rm x}_{4})\] \[- \int d^{4}y_{1}\,d^{4}y_{2}\,{\rm G}^{0}({\rm x}_{1},{\rm x}_{2},{ \rm y}_{1},{\rm y}_{2})\hat{\cal U}({\rm y}_{1},{\rm y}_{2}){\rm G}^{\rm RPA}({ \rm y}_{1},{\rm y}_{2},{\rm x}_{3},{\rm x}_{4})\;\;,\]
where we separated the direct and the exchange terms. The graphical representation of the above equation is given in Figure 4.
In mixed representation, RPA equations are
\[\tilde{G}^{\rm RPA}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)=\tilde{G}^{0}(\nu_{1},\nu_{2},\nu_{3},\nu_{4},E)\] \[+ \sum_{\mu_{1},\mu_{2},\mu_{3},\mu_{4}}\tilde{G}^{0}(\nu_{1},\nu_{2},\mu_{1},\mu_{2},E)\left\langle\mu_{1}\mu_{3}|\hat{V}|\mu_{2}\mu_{4}\right\rangle\tilde{G}^{\rm RPA}(\mu_{3},\mu_{4},\nu_{3},\nu_{4},E)\frac{1}{\hbar}\] \[- \sum_{\mu_{1},\mu_{2},\mu_{3},\mu_{4}}\tilde{G}^{0}(\nu_{1},\nu_{2},\mu_{1},\mu_{2},E)\left\langle\mu_{1}\mu_{2}|\hat{V}|\mu_{4}\mu_{3}\right\rangle\tilde{G}^{\rm RPA}(\mu_{3},\mu_{4},\nu_{3},\nu_{4},E)\frac{1}{\hbar}\] \[= \sum_{\mu_{1},\mu_{2},\mu_{3},\mu_{4}}\tilde{G}^{0}(\nu_{1},\nu_{2},\mu_{1},\mu_{2},E)\bigg\{\delta_{\mu_{1},\nu_{3}}\delta_{\mu_{2},\nu_{4}}\] \[+ \frac{1}{\hbar}\left\langle\mu_{1}\mu_{3}|\hat{V}|\mu_{2}\mu_{4}\right\rangle\tilde{G}^{\rm RPA}(\mu_{3},\mu_{4},\nu_{3},\nu_{4},E)\] \[- \frac{1}{\hbar}\,\langle\mu_{1}\mu_{2}|\hat{V}|\mu_{4}\mu_{3}\rangle\,\tilde{G}^{\rm RPA}(\mu_{3},\mu_{4},\nu_{3},\nu_{4},E)\bigg\}, \tag{113}\]

Figure 2: Graphical representation of the perturbation expansion of the interacting Green function. The double thick lines represent G, the double thin lines G\({}^{0}\) and the dashed lines the two-body interaction \(\hat{\cal U}\).
where we used \(\hat{\cal U}=\hat{V}/\hbar\).
As indicated by Equations (105) and (106), there are two possibilities of forming non-zero unperturbed Green functions. By adopting the usual convention of indicating with \(i,j,k,l\) the hole states and with \(m,n,p,q\) the particle states, we express RPA Equation (113) as:
\[\sum_{q,l}\bigg{\{}\left[A_{miql}-E\,\delta_{m,q}\,\delta_{i,l}\right]\tilde{ G}^{\rm RPA}(q,l,j,n,E)+B_{miql}\tilde{G}^{\rm RPA}(l,q,j,n,E)\bigg{\}}=\delta_{m,n} \delta_{i,j}, \tag{114}\]
\[\sum_{q,l}\bigg{\{}\left[A_{miql}^{*}+E\,\delta_{m,q}\delta_{i,l}\right]\tilde {G}^{\rm RPA}(l,q,j,n,E)+B_{miql}^{*}\tilde{G}^{\rm RPA}(q,l,j,n,E)\bigg{\}}=0, \tag{115}\]
\[\sum_{q,l}\bigg\{\left[A_{miql}-E\,\delta_{m,q}\,\delta_{i,l}\right]\tilde{G}^{\rm RPA}(q,l,n,j,E)+B_{miql}\tilde{G}^{\rm RPA}(l,q,n,j,E)\bigg\}=0, \tag{116}\]

\[\sum_{q,l}\bigg\{\left[A_{miql}^{*}+E\,\delta_{m,q}\delta_{i,l}\right]\tilde{G}^{\rm RPA}(l,q,n,j,E)+B_{miql}^{*}\tilde{G}^{\rm RPA}(q,l,n,j,E)\bigg\}=\delta_{m,n}\delta_{i,j}, \tag{117}\]
where we have defined the matrices
\[A_{miql} = (\epsilon_{m}-\epsilon_{i})\delta_{m,q}\delta_{i,l}+\overline{V} _{iqml}, \tag{118}\] \[B_{miql} = -\overline{V}_{ilmq}. \tag{119}\]
Figure 3: Graphical representation of Equation (110). The criss-cross box represents the proper kernel \(\hat{\cal R}\).
The calculation is outlined in detail in Appendix F. These equations can be expressed in matrix form. By defining
\[G_{1}(E)\equiv\tilde{G}^{\rm RPA}(m,i,j,n,E)\ \ ;\ \ G_{2}(E)\equiv \tilde{G}^{\rm RPA}(m,i,n,j,E)\ \ ;\] \[G_{3}(E)\equiv\tilde{G}^{\rm RPA}(i,m,j,n,E)\ \ ;\ \ G_{4}(E)\equiv \tilde{G}^{\rm RPA}(i,m,n,j,E), \tag{120}\]
we obtain
\[\left(\begin{array}{cc}A-E\,\mathbb{I}&B\\ B^{*}&A^{*}+E\,\mathbb{I}\end{array}\right)\left(\begin{array}{cc}G_{1}(E) &G_{2}(E)\\ G_{3}(E)&G_{4}(E)\end{array}\right)=\left(\begin{array}{cc}\mathbb{I}&0\\ 0&\mathbb{I}\end{array}\right). \tag{121}\]
The two-body Green functions depend on the energy \(E\). The poles \(\omega_{n}=E_{n}-E_{0}\) of these Green functions are the excitation energies of RPA excited states \(|\Psi_{n}\rangle\). When the value of the energy \(E\) corresponds to that of a pole, the value of the Green function goes to infinity; therefore, Equation (121) remains valid only if the determinant of the coefficient matrix vanishes. For this reason RPA excitation energies are those of the non-trivial solution of the homogeneous system of equations
\[\left(\begin{array}{cc}A-\omega_{n}\,\mathbb{I}&B\\ B^{*}&A^{*}+\omega_{n}\,\mathbb{I}\end{array}\right)\left(\begin{array}{c}X_ {n}\\ Y_{n}\end{array}\right)=0\, \tag{122}\]
which is the expression (67) of RPA equations.
In Section 3.2.3, we have shown that RPA equations for each positive eigenvalue \(\omega_{n}\) admit also a negative eigenvalue \(-\omega_{n}\). The set of the vectors of the \(X\) and \(Y\) amplitudes is orthonormal
\[(X_{m}^{*},-Y_{m}^{*})\left(\begin{array}{c}X_{n}\\ Y_{n}\end{array}\right)=\delta_{m,n},\]
and complete
\[\sum_{n>0}\left(\begin{array}{c}X_{n}\\ Y_{n}\end{array}\right)(X_{n}^{*},Y_{n}^{*})-\sum_{n<0}\left(\begin{array}{c}X _{n}^{*}\\ Y_{n}^{*}\end{array}\right)(X_{n},Y_{n})=\mathbb{I},\]
where \(n>0\) indicates the sum on the positive \(\omega_{n}\) and \(n<0\) the sum on the negative values, as indicated by Equation (70). By inserting the above expressions in Equation (121), we identify the solution as
\[\left(\begin{array}{cc}G_{1}(E)&G_{2}(E)\\ G_{3}(E)&G_{4}(E)\end{array}\right) = \sum_{n}\frac{1}{\omega_{n}-E}\left(\begin{array}{c}X_{n}\\ Y_{n}\end{array}\right)(X_{n}^{*},Y_{n}^{*}) - \sum_{n}\frac{1}{-|\omega_{n}|-E}\left(\begin{array}{c}Y_{n}^{*}\\ X_{n}^{*}\end{array}\right)(Y_{n},X_{n})\] \[= \left(\begin{array}{cc}\sum_{n}\left(\frac{X_{n}X_{n}^{*}}{\omega_{n}-E}+\frac{Y_{n}^{*}Y_{n}}{\omega_{n}+E}\right)&\sum_{n}\left(\frac{X_{n}Y_{n}^{*}}{\omega_{n}-E}+\frac{Y_{n}^{*}X_{n}}{\omega_{n}+E}\right)\\ \sum_{n}\left(\frac{Y_{n}X_{n}^{*}}{\omega_{n}-E}+\frac{X_{n}^{*}Y_{n}}{\omega_{n}+E}\right)&\sum_{n}\left(\frac{Y_{n}Y_{n}^{*}}{\omega_{n}-E}+\frac{X_{n}^{*}X_{n}}{\omega_{n}+E}\right)\end{array}\right), \tag{123}\]

Figure 4: Graphical representation of the two-body Green function in RPA.
where \(\omega_{n}>0\) always. The comparison with the expression of the two-body Green function in the representation of Equation (104) allows the identification of the \(X\) and \(Y\) amplitudes as
\[X_{mi}=\langle\Psi_{0}|\hat{a}_{m}\hat{a}_{i}^{+}|\Psi_{n}\rangle\qquad;\qquad X_{mi}^{*}=\langle\Psi_{n}|\hat{a}_{i}\hat{a}_{m}^{+}|\Psi_{0}\rangle\ \ ; \tag{124}\] \[Y_{mi}=\langle\Psi_{0}|\hat{a}_{i}\hat{a}_{m}^{+}|\Psi_{n}\rangle\qquad;\qquad Y_{mi}^{*}=\langle\Psi_{n}|\hat{a}_{m}\hat{a}_{i}^{+}|\Psi_{0}\rangle\,, \tag{125}\]
where \(|\Psi_{n}\rangle\) and \(|\Psi_{0}\rangle\) are, respectively, RPA excited and ground states, which we called \(|\nu\rangle\) and \(|\nu_{0}\rangle\) in Section 3.2.2.
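As a numerical illustration of the eigenvalue problem (122) and of the orthonormality relation above, the following sketch uses invented 2×2 \(A\) and \(B\) matrices (a toy model, not derived from any physical interaction): the eigenvalues of the non-Hermitian RPA matrix appear in \(\pm\omega_{n}\) pairs, and the positive-energy eigenvectors can be normalized so that \(X^{\dagger}X-Y^{\dagger}Y=1\).

```python
import numpy as np

# Toy RPA matrices (invented numbers): A is Hermitian, B is symmetric.
A = np.array([[3.0, 0.5],
              [0.5, 4.0]])
B = np.array([[0.2, 0.1],
              [0.1, 0.3]])
n = A.shape[0]

# Equation (122) rewritten as an ordinary eigenvalue problem:
#   A  X + B  Y =  w X
#   B* X + A* Y = -w Y
M = np.block([[A, B], [-B.conj(), -A.conj()]])
w, V = np.linalg.eig(M)
print(np.round(np.sort(w.real), 4))   # eigenvalues come in (-w_n, +w_n) pairs

# Normalize the positive-energy solutions: X^+ X - Y^+ Y = 1
for i in np.where(w.real > 0)[0]:
    X, Y = V[:n, i], V[n:, i]
    norm = (X.conj() @ X - Y.conj() @ Y).real  # positive for stable solutions
    V[:, i] /= np.sqrt(norm)
```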
### Infinite Systems
In an infinite and homogeneous system with translational invariance, the s.p. wave functions are the plane waves (11) characterized by the modulus of the wave vector \(k\equiv|{\bf k}|\). If we use the representation of Equation (104) of the unperturbed two-body Green function, we obtain terms of the kind
\[\langle\Phi_{0}|\hat{a}_{\nu_{1}}\hat{a}_{\nu_{3}}^{+}|\Phi_{n}\rangle \langle\Phi_{n}|\,\hat{a}_{\nu_{2}}\hat{a}_{\nu_{4}}^{+}|\Phi_{0}\rangle= \delta_{k_{1},k_{4}}\theta(k_{1}-k_{\rm F})\,\delta_{k_{2},k_{3}}\theta(k_{ \rm F}-k_{2}), \tag{126}\]
since the action of the creation and destruction operators on the IPM states \(|\Phi\rangle\) is well defined.
We consider Green functions depending on the energy, as indicated by Equation (99). In this representation, by inserting the plane wave function in Equation (98), we obtain for the unperturbed two-body Green function the expression
\[\tilde{\rm G}^{0}({\bf x}_{1},{\bf x}_{2},{\bf x}_{3},{\bf x}_{4};E)=\frac{1} {(2\pi)^{6}}\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}-{ \bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{0}(k_{1},k_{2};E), \tag{127}\]
clearly indicating that there is a dependence only on the difference between the particle and hole coordinates. This is a consequence of the translational invariance of the system. Also the interacting Green function and the kernel of Equation (110) depend only on the coordinate differences. We can define the Fourier transforms of these quantities, which depend on the moduli of two momenta,
\[\tilde{\rm G}({\bf x}_{1},{\bf x}_{2},{\bf x}_{3},{\bf x}_{4};E) = \frac{1}{(2\pi)^{6}}\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}-{\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}(k_{1},k_{2};E)\ \, \tag{128}\] \[\hat{\mathscr{K}}({\bf x}_{1},{\bf x}_{2},{\bf x}_{3},{\bf x}_{4}) = \frac{1}{(2\pi)^{6}}\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}-{\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\hat{\tilde{\mathscr{K}}}(k_{1},k_{2})\ \ ; \tag{129}\]
the kernel does not depend on the energy \(E\).
By inserting these definitions in RPA Equation (112) and substituting \(\hat{\mathscr{K}}\) with \(\hat{\mathscr{U}}\), which is the RPA ansatz, we obtain
\[\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}- {\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{\rm RPA }(k_{1},k_{2};E)=\] \[\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}- {\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{0}(k_{1},k_{2};E)\] \[+ \int d{\bf y}_{1}\,d{\bf y}_{2}d{\bf y}_{3}d{\bf y}_{4}\int d{\bf k }_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}-{\bf y}_{2})}e^{-i{\bf k}_{2 }\cdot({\bf x}_{2}-{\bf y}_{1})}\] \[\tilde{G}^{0}(k_{1},k_{2};E)\int d{\bf k}_{a}\,d{\bf k}_{b}e^{i{ \bf k}_{a}\cdot({\bf y}_{1}-{\bf y}_{4})}e^{-i{\bf k}_{b}\cdot({\bf y}_{2}-{\bf y }_{3})}\hat{\tilde{\mathscr{U}}}(k_{a},k_{b})\] \[\int d{\bf k}_{3}\,d{\bf k}_{4}e^{i{\bf k}_{3}\cdot({\bf y}_{3}-{ \bf x}_{4})}e^{-i{\bf k}_{4}\cdot({\bf y}_{4}-{\bf x}_{3})}\tilde{G}^{\rm RPA }(k_{3},k_{4};E)\] \[+ \int d{\bf y}_{1}d{\bf y}_{2}d{\bf y}_{3}d{\bf y}_{4}\int d{\bf k }_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}-{\bf y}_{2})}e^{-i{\bf k}_{2 }\cdot({\bf x}_{2}-{\bf y}_{1})}\]
\[\tilde{G}^{0}(k_{1},k_{2};E)\int d{\bf k}_{a}\,d{\bf k}_{b}e^{i{\bf k }_{a}\cdot({\bf y}_{1}-{\bf y}_{4})}e^{-i{\bf k}_{b}\cdot({\bf y}_{2}-{\bf y}_{3} )}\tilde{\tilde{\mathcal{U}}}(k_{a},k_{b})\] \[\int d{\bf k}_{3}\,d{\bf k}_{4}e^{i{\bf k}_{3}\cdot({\bf y}_{4}-{ \bf x}_{4})}e^{-i{\bf k}_{4}\cdot({\bf y}_{3}-{\bf x}_{3})}\tilde{G}^{\rm RPA}( k_{3},k_{4};E),\]
where the second term of the right hand side is called direct and the third term is the exchange. The integration on the \({\bf y}\) coordinates in the direct term leads to the relations
\[-{\bf k}_{a}={\bf k}_{2}={\bf k}_{4}\quad{\rm and}\quad-{\bf k}_{b}={\bf k}_{1 }={\bf k}_{3}, \tag{130}\]
while that of the exchange term leads to
\[{\bf k}_{a}={\bf k}_{2}=-{\bf k}_{3}\quad{\rm and}\quad{\bf k}_{b}={\bf k}_{1 }=-{\bf k}_{4}. \tag{131}\]
By considering the above conditions, we have
\[\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}- {\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{\rm RPA }(k_{1},k_{2};E)=\] \[\int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}- {\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{0}(k_{ 1},k_{2};E)\] \[+ \int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}- {\bf x}_{4})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}-{\bf x}_{3})}\tilde{G}^{0}(k_{ 1},k_{2};E)\tilde{\tilde{\mathcal{U}}}(k_{1},k_{2})\tilde{G}^{\rm RPA}(k_{1},k _{2};E)\] \[+ \int d{\bf k}_{1}\,d{\bf k}_{2}e^{i{\bf k}_{1}\cdot({\bf x}_{1}+ {\bf x}_{3})}e^{-i{\bf k}_{2}\cdot({\bf x}_{2}+{\bf x}_{4})}\tilde{G}^{0}(k_{ 1},k_{2};E)\tilde{\tilde{\mathcal{U}}}(k_{2},k_{1})\tilde{G}^{\rm RPA}(k_{2},k _{1};E).\]
If we neglect the exchange term, we have a simple algebraic relation between the Green functions in momentum space
\[\tilde{G}^{\rm RPA,D}(k_{1},k_{2};E)=\tilde{G}^{0}(k_{1},k_{2};E )+\tilde{G}^{0}(k_{1},k_{2};E)\tilde{\tilde{\mathcal{U}}}(k_{1}-k_{2}=q)\tilde {G}^{\rm RPA,D}(k_{1},k_{2};E),\] \[\tilde{G}^{\rm RPA,D}(k_{1},k_{1}+q;E)\left[1-\tilde{G}^{0}(k_{1},k_{1}+q;E)\tilde{\tilde{\mathcal{U}}}(q)\right]=\tilde{G}^{0}(k_{1},k_{1}+q; E),\] \[\tilde{G}^{\rm RPA,D}(k_{1},k_{1}+q;E)=\frac{\tilde{G}^{0}(k_{1},k_{1}+q;E)}{1-\tilde{G}^{0}(k_{1},k_{1}+q;E)\tilde{\tilde{\mathcal{U}}}(q)}. \tag{132}\]
This expression is commonly used to calculate the linear response of infinite fermion systems to an external probe. The graphical representation of the above equation is given in Figure 5. The equation represents an infinite sum of diagrams of ring form and it is, therefore, called ring approximation.
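A minimal numerical sketch of Equation (132): the free \(ph\) propagator below is a toy single-pole function with invented parameters (it stands in for the true Lindhard-type \(\tilde{G}^{0}\)), and the interaction is a constant \(V_{0}\).

```python
import numpy as np

V0   = -0.5    # toy contact interaction (attractive, invented value)
e_ph = 3.0     # toy unperturbed ph energy (invented value)

def g0(E, eta=0.05):
    """Toy free ph propagator with a single ph pole (a stand-in for the
    true free propagator of the infinite system)."""
    return 1.0/(E - e_ph + 1j*eta) - 1.0/(E + e_ph - 1j*eta)

E  = np.linspace(0.0, 6.0, 1200)
G0 = g0(E)
G_ring = G0 / (1.0 - V0 * G0)   # Equation (132): sum of all ring diagrams

S0 = -G0.imag / np.pi           # free strength, peaked at e_ph
S  = -G_ring.imag / np.pi       # ring-approximation strength
print(E[np.argmax(S)], "<", E[np.argmax(S0)])
```

The attractive \(V_{0}\) moves the peak of the strength below the unperturbed \(ph\) energy; a repulsive interaction would push it upwards, which is the typical behavior of RPA collective states.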
The exchange term cannot be factorized as the direct one. Its inclusion is sometimes handled by approximate methods; one of them is the continued fraction technique [26, 27].
## 5 RPA with Time-Dependent Hartree-Fock
Another way of obtaining RPA secular Equation (67) is that of using time-dependent Hartree-Fock (TDHF) equations and the variational principle. We apply the variational principle to the time-dependent Schrodinger equation
\[\delta\left\langle\Psi(t)\right|\left(\hat{H}-i\hbar\frac{\partial}{\partial t }\right)|\Psi(t)\rangle=0. \tag{133}\]
The search for the minimum of the above functional of \(\Psi(t)\) is carried out in the Hilbert subspace spanned by many-body wave functions of the form
\[|\Psi(t)\rangle=e^{\sum_{mi}C_{mi}(t)\hat{a}^{+}_{m}\hat{a}_{i}}\,|\Phi_{0}(t)\rangle\,, \tag{134}\]
where the time-dependent IPM state has been defined as
\[|\Phi_{0}(t)\rangle=e^{-\frac{i}{\hbar}\xi_{0}t}\,|\Phi_{0}\rangle\,. \tag{135}\]
In the above equation, \(|\Phi_{0}\rangle\) is the stationary Hartree-Fock ground state of which \(\mathcal{E}_{0}\), Equation (14), is the energy eigenvalue. In Equation (133), the variations of the real and of the imaginary part of \(\Psi(t)\) are independent. The variation is carried out on the only time-dependent terms, which are \(C_{mi}(t)\) and \(C_{mi}^{*}(t)\). We obtain a system composed by the equations
\[\frac{\delta}{\delta C_{mi}^{*}(t)}\left\langle\Psi(t)|\hat{H}-i \hbar\frac{\partial}{\partial t}|\Psi(t)\right\rangle = 0, \tag{136}\] \[\frac{\delta}{\delta C_{mi}(t)}\left\langle\Psi(t)|\hat{H}-i \hbar\frac{\partial}{\partial t}|\Psi(t)\right\rangle = 0. \tag{137}\]
We consider Equation (136) and calculate the expectation value of operators by using the power expansion of the exponential
\[e^{\sum_{mi}C_{mi}(t)\hat{a}_{m}^{+}\hat{a}_{i}}=\hat{1}+\sum_{mi}C_{mi}(t)\hat{a}_{m}^{+}\hat{a}_{i}+\frac{1}{2}\sum_{minj}C_{mi}(t)\hat{a}_{m}^{+}\hat{a}_{i}C_{nj}(t)\hat{a}_{n}^{+}\hat{a}_{j}+\cdots. \tag{138}\]
For the hamiltonian expectation value, we obtain the expression
\[\langle\Psi(t)|\hat{H}|\Psi(t)\rangle = \langle\Phi_{0}(t)|\hat{H}|\Phi_{0}(t)\rangle \tag{139}\] \[+ \sum_{mi}C_{mi}^{*}(t)\left\langle\Phi_{0}(t)|\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}|\Phi_{0}(t)\right\rangle\] \[+ \sum_{mi}C_{mi}(t)\left\langle\Phi_{0}(t)|\hat{H}\hat{a}_{m}^{+}\hat{a}_{i}|\Phi_{0}(t)\right\rangle\] \[+ \frac{1}{2}\sum_{minj}C_{mi}^{*}(t)C_{nj}^{*}(t)\left\langle\Phi_{0}(t)|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}|\Phi_{0}(t)\right\rangle\] \[+ \frac{1}{2}\sum_{minj}C_{mi}(t)C_{nj}(t)\left\langle\Phi_{0}(t)|\hat{H}\hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\right\rangle\] \[+ \sum_{minj}C_{mi}^{*}(t)C_{nj}(t)\left\langle\Phi_{0}(t)|\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\right\rangle+\cdots.\]
The first term of the above equation is the \(\mathcal{E}_{0}\) of Equation (14). The linear terms in \(C_{mi}(t)\) are all zero since they overlap with orthogonal Slater determinants, or, in other words, since the number of \(ph\) operators is odd.
Figure 5: Graphical representation of the ring diagram approximation in RPA.
Let us calculate the matrix element of the fifth term by using the expression of the hamiltonian given in Equation (12)
\[\langle\Phi_{0}(t)|\hat{H}\hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+} \hat{a}_{j}|\Phi_{0}(t)\rangle=\sum_{\nu}\epsilon_{\nu}\,\langle\Phi_{0}(t)|\hat {a}_{\nu}^{+}\hat{a}_{\nu}\hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}| \Phi_{0}(t)\rangle \tag{140}\] \[- \frac{1}{2}\sum_{kl}\overline{V}_{klkl}\,\langle\Phi_{0}(t)|\hat {a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle\] \[+ \frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{ \nu\mu\nu^{\prime}\mu^{\prime}}\,\langle\Phi_{0}(t)|\hat{\mathbb{N}}[\hat{a}_{ \nu}^{+}\hat{a}_{\mu}^{+}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}] \hat{a}_{m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle\ \.\]
The first and second terms are zero because of the orthogonality of the Slater determinants. With a calculation analogous to that leading to the interacting term of \(A_{minj}\) in Equation (68) (this calculation is presented in Equation (310)), we obtain
\[\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{\nu\mu\nu^{ \prime}\mu^{\prime}}\,\langle\Phi_{0}(t)|\hat{\mathbb{N}}[\hat{a}_{\nu}^{+} \hat{a}_{\mu}^{+}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}]\hat{a}_{m}^ {+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle=\overline{V}_{ijmn}. \tag{141}\]
The fifth term of Equation (139) can be written as
\[\frac{1}{2}\sum_{minj}C_{mi}(t)C_{nj}(t)\,\langle\Phi_{0}(t)|\hat{H}\hat{a}_{ m}^{+}\hat{a}_{i}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle=\frac{1}{2}\sum_{ minj}C_{mi}(t)C_{nj}(t)\overline{V}_{ijmn}. \tag{142}\]
By working in an analogous manner for the fourth term of Equation (139), we obtain
\[\frac{1}{2}\sum_{minj}C_{mi}^{\star}(t)C_{nj}^{\star}(t)\,\langle\Phi_{0}(t)|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}|\Phi_{0}(t)\rangle=\frac{1}{2}\sum_{minj}C_{mi}^{\star}(t)C_{nj}^{\star}(t)\overline{V}_{mnij}. \tag{143}\]
The expression of the last term of Equation (139) is
\[\sum_{minj}C_{mi}^{\star}(t)C_{nj}(t)\,\langle\Phi_{0}(t)|\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle \tag{144}\] \[= \sum_{mi}|C_{mi}|^{2}\sum_{k}\epsilon_{k}+\sum_{mi}|C_{mi}|^{2}(\epsilon_{m}-\epsilon_{i})\] \[-\frac{1}{2}\sum_{mi}|C_{mi}|^{2}\sum_{kl}\overline{V}_{klkl}+\sum_{minj}C_{mi}C_{nj}^{\star}\overline{V}_{mjin}\] \[\equiv \mathcal{E}_{0}\sum_{mi}|C_{mi}|^{2}+\sum_{mi}|C_{mi}|^{2}(\epsilon_{m}-\epsilon_{i})+\sum_{minj}C_{mi}C_{nj}^{\star}\overline{V}_{mjin},\]
where we used the definition (14) of \(\mathcal{E}_{0}\). The final expression of Equation (139) is then
\[\langle\Psi(t)|\hat{H}|\Psi(t)\rangle = \mathcal{E}_{0}\left(1+\sum_{mi}|C_{mi}|^{2}\right)+\sum_{mi}|C_{ mi}|^{2}(\epsilon_{m}-\epsilon_{i})+\sum_{minj}C_{mi}C_{nj}^{\star}\overline{V}_{mjin} \tag{145}\] \[+\frac{1}{2}\sum_{minj}C_{mi}(t)C_{nj}(t)\overline{V}_{ijmn}+\frac {1}{2}\sum_{minj}C_{mi}^{\star}(t)C_{nj}^{\star}(t)\overline{V}_{mnij}.\]
Let us calculate the second term of Equation (136), containing the time derivative. By considering the expression (134) of \(|\Psi(t)\rangle\), we have
\[i\hbar\,\langle\Psi(t)|\frac{\partial}{\partial t}|\Psi(t)\rangle=\mathcal{E}_ {0}\,\langle\Psi(t)|\Psi(t)\rangle+i\hbar\sum_{mi}\frac{d}{dt}C_{mi}(t)\, \langle\Psi(t)|\hat{a}_{m}^{+}\hat{a}_{i}|\Psi(t)\rangle\,. \tag{146}\]
We make a power expansion of the exponential function in Equation (134) and obtain
\[\langle\Psi(t)|\Psi(t)\rangle = \langle\Phi_{0}(t)|\Phi_{0}(t)\rangle \tag{147}\] \[+ \sum_{minj}C_{mi}^{\star}(t)C_{nj}(t)\,\langle\Phi_{0}(t)|\hat{a}_ {i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}(t)\rangle+\cdots,\]
and, after the application of Wick's theorem,
\[\left\langle\Psi(t)|\Psi(t)\right\rangle=1+\sum_{mi}\left|C_{mi}(t)\right|^{2}+\cdots. \tag{148}\]
The terms of first order in \(C\) are zero because they contain an odd number of \(ph\) operators. By using the power expansion of the exponential to calculate the second term of Equation (146), we have
\[\left\langle\Psi(t)|\hat{a}_{m}^{+}\hat{a}_{i}|\Psi(t)\right\rangle=\sum_{nj}C _{nj}^{*}\left\langle\Phi_{0}(t)|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{m}^{+}\hat {a}_{i}|\Phi_{0}(t)\right\rangle+\cdots=C_{mi}^{*}+\cdots. \tag{149}\]
The term related to the time derivative becomes
\[i\hbar\left\langle\Psi(t)|\frac{\partial}{\partial t}|\Psi(t)\right\rangle= \mathcal{E}_{0}\left(1+\sum_{mi}\left|C_{mi}(t)\right|^{2}\right)+i\hbar\sum_ {mi}C_{mi}^{*}(t)\frac{d}{dt}C_{mi}(t)+\cdots. \tag{150}\]
We put together the results of Equations (145) and (150); we consider terms up to the second order in \(C\) and obtain the expression
\[\left\langle\Psi(t)|\hat{H}-i\hbar\frac{\partial}{\partial t}| \Psi(t)\right\rangle \simeq \sum_{mi}\left|C_{mi}(t)\right|^{2}(\epsilon_{m}-\epsilon_{i})+ \sum_{minj}C_{mi}C_{nj}^{*}\overline{V}_{mjin} \tag{151}\] \[+ \frac{1}{2}\sum_{minj}C_{mi}(t)C_{nj}(t)\overline{V}_{ijmn}+\frac {1}{2}\sum_{minj}C_{mi}^{*}(t)C_{nj}^{*}(t)\overline{V}_{mnij}\] \[- i\hbar\sum_{mi}C_{mi}^{*}(t)\frac{d}{dt}C_{mi}(t).\]
We have to impose the variational condition
\[\frac{\delta}{\delta C_{mi}^{*}(t)}\left\langle\Psi(t)|\hat{H}-i\hbar\frac{ \partial}{\partial t}|\Psi(t)\right\rangle=\frac{\partial}{\partial C_{mi}^{ *}(t)}\left\langle\Psi(t)|\hat{H}-i\hbar\frac{\partial}{\partial t}|\Psi(t) \right\rangle=0, \tag{152}\]
where the variational derivative has been changed into a partial derivative since the \(C\)'s are the only terms depending on time. By working out the derivative, we obtain the expression
\[C_{mi}(t)(\epsilon_{m}-\epsilon_{i})+\sum_{nj}C_{nj}^{*}\overline{V}_{mnij}+\sum_{nj}C_{nj}\overline{V}_{mjin}=i\hbar\frac{d}{dt}C_{mi}(t). \tag{153}\]
We consider harmonic oscillations around the ground state
\[C_{mi}(t)=X_{mi}e^{-i\omega t}+Y_{mi}e^{i\omega t}. \tag{154}\]
where \(X\), \(Y\) and \(\omega\) are real numbers. By inserting this expression in Equation (153) and separating the positive and negative frequencies, we obtain the system of equations
\[X_{mi}(\epsilon_{m}-\epsilon_{i})+\sum_{nj}\overline{V}_{mjin}X_{nj}+\sum_{nj}\overline{V}_{mnij}Y_{nj} = \hbar\omega X_{mi}, \tag{155}\] \[Y_{mi}^{*}(\epsilon_{m}-\epsilon_{i})+\sum_{nj}\overline{V}_{mjin}Y_{nj}^{*}+\sum_{nj}\overline{V}_{mnij}X_{nj}^{*} = -\hbar\omega^{*}Y_{mi}^{*}, \tag{156}\]
which is identical to Equation (67) where the \(A\) and \(B\) matrices have been defined by Equations (68) and (69).
## 6 Continuum RPA
If the excitation energy \(\omega\) of the system is larger than \(|\epsilon_{h}|\), the particle lying on this state can be emitted and leave the system. In an atom, this effect produces a positive ion, in a nucleus a new nucleus with \(A-1\) nucleons. The RPA approach which explicitly considers the emission of a particle is called Continuum RPA (CRPA), where continuum refers to the fact that for \(\epsilon_{p}>0\) the IPM Schrodinger equation has a continuous spectrum. In this case, the s.p. wave function has an asymptotically oscillating behavior.
In CRPA, the operator (54) defining the excited state is written as
\[\hat{Q}^{\dagger}_{\nu}\,=\,\sum_{[p]h}\,{\sum\hskip-10.0pt\int}\,d\epsilon_{p}\left[\,X^{\nu}_{ph}(\epsilon_{p})\,\hat{a}^{+}_{p}\hat{a}_{h}\,-\,Y^{\nu}_{ph}(\epsilon_{p})\,\hat{a}^{+}_{h}\hat{a}_{p}\,\right]\,, \tag{157}\]

where the combined sum and integral symbol indicates a sum over the discrete \(ph\) configurations and an integral over the continuous particle energy \(\epsilon_{p}\), so that the \(X\) and \(Y\) amplitudes become functions of \(\epsilon_{p}\).
The continuum threshold, \(\omega_{\rm thr}\), is the minimum value of the energy necessary to emit the particle, i.e., the absolute value of the s.p. energy of the hole state closest to the Fermi surface. For \(\omega<\omega_{\rm thr}\), no particle can be emitted. In this case, all the \(A^{\nu}_{p^{\prime}h^{\prime}}=0\); therefore, the system is homogeneous. Solutions different from the trivial one are obtained when the determinant of the coefficient matrix is zero. This happens for some specific values of the excitation energy \(\omega\). Below the emission threshold, the CRPA equations predict a discrete spectrum of solutions. When \(\omega>\omega_{\rm thr}\), some \(ph\) pairs have enough energy to put the particle in the continuum, i.e., with \(\epsilon_{p}>0\). In the CRPA jargon these \(ph\) pairs are called _open channels_. Obviously, the other \(ph\) pairs where \(\epsilon_{p}<0\) are called _closed channels_. Every open channel generates a coefficient different from zero in the right hand side of Equations (163) and (164). The problem is defined by imposing boundary conditions, which is equivalent to saying that we have to select specific values of the \(A^{\nu}_{ph}\) coefficients. The choice commonly adopted consists in imposing that the particle is emitted in a specific open channel, called _elastic channel_. This means
\[A^{\nu}_{ph}=\delta_{p,p_{0}}\delta_{h,h_{0}}\,, \tag{165}\]
where \(p_{0}\) and \(h_{0}\) are the quantum numbers characterizing the elastic channel. The sums on the right hand sides of Equations (163) and (164) then collapse to a single term. For each value of the excitation energy \(\omega\), the system has to be solved a number of times equal to the number of open channels.
The solution of the CRPA system of equations can be obtained by solving directly the set of Equations (163) and (164). The s.p. particle wave functions in the continuum are obtained by solving the s.p. Schrodinger equation with asymptotically oscillating boundary conditions. This is the classical problem of a particle elastically scattered by a potential. This problem has to be solved for a set of \(\epsilon_{p}\) energy values mapping the continuum in such a way that the integral of Equation (158) is numerically stable. This means that \(\epsilon_{p}\) must reach values much larger than those of the excitation energy region one wants to investigate. The selection of the \(\epsilon_{p}\) values to obtain the s.p. wave function is not a simple problem to be solved. The various particles may have more or less sharp resonances and they have to be properly described by the choice of the \(\epsilon_{p}\) values mapping the continuum.
There is another technical problem in the direct approach to the solution of the CRPA Equations (163) and (164). The numerical stability of the interaction matrix elements \(\overline{V}\) is due to the fact that, in the integrals, hole wave functions, which asymptotically go to zero, are present. This works well for the direct matrix elements
\[\langle p_{1}h_{2}|\hat{V}|h_{1}p_{2}\rangle=\int d^{3}r_{1}\int d^{3}r_{2} \phi^{*}_{p_{1}}({\bf r}_{1})\phi^{*}_{h_{2}}({\bf r}_{2})V({\bf r}_{1},{\bf r }_{2})\phi_{h_{1}}({\bf r}_{1})\phi_{p_{2}}({\bf r}_{2}), \tag{166}\]
but it is a problem for the exchange matrix element
\[\langle p_{1}h_{2}|\hat{V}|p_{2}h_{1}\rangle=\int d^{3}r_{1}\int d^{3}r_{2} \phi^{*}_{p_{1}}({\bf r}_{1})\phi^{*}_{h_{2}}({\bf r}_{2})V({\bf r}_{1},{\bf r }_{2})\phi_{p_{2}}({\bf r}_{1})\phi_{h_{1}}({\bf r}_{2}), \tag{167}\]
where the two particle wave functions, both oscillating, are integrated together. The direct approach is suitable to be used with zero-range interactions \(V({\bf r}_{1},{\bf r}_{2})=V_{0}\delta({\bf r}_{1}-{\bf r}_{2})\). In this case, direct and exchange matrix elements are identical
\[\langle p_{1}h_{2}|\hat{V}|h_{1}p_{2}\rangle=\langle p_{1}h_{2}|\hat{V}|p_{2}h _{1}\rangle=V_{0}\int d^{3}r_{1}\phi^{*}_{p_{1}}({\bf r}_{1})\phi^{*}_{h_{2}}( {\bf r}_{1})\phi_{p_{2}}({\bf r}_{1})\phi_{h_{1}}({\bf r}_{1}), \tag{168}\]
and the hole wave functions are always present in the integral. This ensures the numerical convergence. The direct approach is used, for example, in Refs. [28, 29], where the CRPA equations are expanded on a Fourier-Bessel basis.
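The convergence argument above can be checked with a short numerical sketch. The radial functions below are invented stand-ins (exponentially decaying forms for the hole states, oscillating sines for the continuum particle states); the point is only that the presence of the hole functions damps the integrand of Equation (168).

```python
import numpy as np

r  = np.linspace(1e-3, 20.0, 2000)   # fm, radial grid
dr = r[1] - r[0]
V0 = -200.0                          # MeV fm^3, toy contact strength (invented)

# Invented stand-ins for the radial parts of the s.p. wave functions:
# bound (hole) states decay exponentially, continuum (particle) states oscillate.
R_h1 = r * np.exp(-0.5 * r)
R_h2 = r * np.exp(-0.6 * r)
R_p1 = np.sin(1.2 * r)
R_p2 = np.sin(0.9 * r)

def normalize(R):
    return R / np.sqrt(np.sum(R**2) * dr)

R_h1, R_h2, R_p1, R_p2 = map(normalize, (R_h1, R_h2, R_p1, R_p2))

# Radial part of the matrix element (168): the two decaying hole functions
# damp the integrand, so the integral converges even though the two
# particle functions keep oscillating out to large r.
me = V0 * np.sum(R_p1 * R_h2 * R_p2 * R_h1) * dr
print(f"radial matrix element = {me:.3f} MeV")
```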
Another method of solving the CRPA equations consists in reformulating the secular Equations (159) and (160) with new unknown functions which do not have explicit dependence on the continuous particle energy \(\epsilon_{p}\). The new unknowns are the _channel functions_\(f\) and \(g\) defined as:
\[f^{\nu}_{ph}({\bf r})\,=\,{\sum\hskip-10.0pt\int}_{{\cal G}_{p}}d\epsilon_{p}\,\phi_{p}({\bf r},\epsilon_{p})\,X^{\nu}_{ph}(\epsilon_{p})\,, \tag{169}\] \[g^{\nu}_{ph}({\bf r})\,=\,{\sum\hskip-10.0pt\int}_{{\cal G}_{p}}d\epsilon_{p}\,\phi_{p}({\bf r},\epsilon_{p})\,Y^{\nu}_{ph}(\epsilon_{p})\,. \tag{170}\]
We multiply the first term of Equation (159) by \(\phi_{p}({\bf r},\epsilon_{p})\) and, since \(\hat{h}\,\phi_{p}({\bf r},\epsilon_{p})=\epsilon_{p}\,\phi_{p}({\bf r},\epsilon_{p})\), we obtain
\[(\epsilon_{p}-\epsilon_{h}-\omega)\phi_{p}({\bf r},\epsilon_{p})\,X^{\nu}_{ph}( \epsilon_{p})\,=\,\hat{h}\,\phi_{p}({\bf r},\epsilon_{p})\,X^{\nu}_{ph}(\epsilon _{p})\,-\,(\epsilon_{h}+\omega)\,\phi_{p}({\bf r},\epsilon_{p})\,X^{\nu}_{ph}( \epsilon_{p})\,. \tag{172}\]
Since the s.p. hamiltonian \(\hat{h}\) does not depend on \(\epsilon_{p}\), we can write
\[\sum\hskip-10.0pt\int_{{\cal G}_{p}}\,\hat{h}\,\phi_{p}({\bf r},\epsilon_{p}) \,X^{\nu}_{ph}(\epsilon_{p})=\hat{h}\,f^{\nu}_{ph}({\bf r})\,\,\,. \tag{173}\]
We apply this procedure, i.e., multiplication by \(\phi_{p}({\bf r},\epsilon_{p})\) and integration on \(\epsilon_{p}\), to all the terms of Equations (159) and (160). By considering that
\[\int d\epsilon_{p}\phi^{*}_{p}({\bf r}_{1})\phi_{p}({\bf r}_{2})=\delta({\bf r }_{1}-{\bf r}_{2}),\]
we obtain a new set of CRPA secular equations where the unknowns are the channel functions \(f\) and \(g\),
\[\hat{h}\,f_{ph}({\bf r})-\,(\epsilon_{h}\,+\,\omega)\,f_{ph}({\bf r}) = -\,{\cal T}_{ph}({\bf r}) \tag{174}\] \[+ \sum_{\epsilon_{p}<\epsilon_{\rm F}}\,\phi_{p}({\bf r})\,\int{ \rm d}^{3}r_{1}\phi^{*}_{h}({\bf r}_{1})\,{\cal G}_{ph}({\bf r}_{1})\,,\] \[\hat{h}\,g_{ph}({\bf r})-(\epsilon_{h}\,-\,\omega)\,g_{ph}({\bf r}) = -\,{\cal G}_{ph}({\bf r})\] (175) \[+ \sum_{\epsilon_{p}<\epsilon_{\rm F}}\,\phi_{p}({\bf r})\,\int{ \rm d}^{3}r_{1}\,\phi^{*}_{h}({\bf r}_{1})\,{\cal G}_{ph}({\bf r}_{1})\,,\]
where we have defined
\[{\cal T}_{ph}(r) = \,\sum_{[p^{\prime}]h^{\prime}}\,\int{\rm d}^{3}\,r_{2}V({\bf r },{\bf r}_{2})\Bigg{\{}\phi^{*}_{h^{\prime}}({\bf r}_{2})\,\left[\,\phi_{h}( {\bf r})\,f_{p^{\prime}h^{\prime}}({\bf r}_{2})-f_{p^{\prime}h^{\prime}}({\bf r })\,\phi_{h}({\bf r}_{2})\right] \tag{176}\] \[+ g^{*}_{p^{\prime}h^{\prime}}({\bf r}_{2})\left[\,\phi_{h}({\bf r })\,\phi_{h^{\prime}}({\bf r}_{2})\,-\phi_{h^{\prime}}({\bf r})\,\phi_{h}({\bf r }_{2})\right]\Bigg{\}}\,,\]
and \({\cal G}_{ph}\) is obtained from the above equation by interchanging the \(f\) and \(g\) channel functions. The last terms of both Equations (174) and (175) are the contributions of particle wave functions \(\phi_{p}\) which are not in the continuum.
We changed a set of algebraic equations with unknowns depending on the continuous variable \(\epsilon_{p}\) into a set of integro-differential equations whose unknowns depend on \({\bf r}\). In analogy to what we have discussed above, for the direct solution of the CRPA secular equations, we solve Equations (174) and (175) a number of times equal to the number of the open channels, by imposing that the particle is emitted only in the elastic channel.
In spherical systems, the boundary conditions are imposed on the radial parts of the \(f\) and \(g\) functions. For an open \(ph\) channel, the outgoing asymptotic behavior of the channel function \(f^{p_{0}h_{0}}_{ph}\) is
\[f^{p_{0}h_{0}}_{ph}(r\to\infty)\,\to\,R_{p_{0}}(r,\epsilon_{p})\,\delta_{p,p_{ 0}}\,\delta_{h,h_{0}}\,+\,\lambda\,H^{-}_{p}(\epsilon_{h}+\omega,r)\,, \tag{177}\]
where \(\lambda\) is a complex normalization constant and \(H^{-}_{p}(\epsilon_{h}+\omega,r)\) is an ingoing Coulomb function if the emitted particle is electrically charged, or a Hankel function in the case of a neutron. The radial part of the s.p. wave function \(R_{p}\) is the eigenfunction of the s.p. hamiltonian for positive energy. In the case of a closed channel, the asymptotic behavior is given by a decreasing exponential function
\[f^{p_{0}h_{0}}_{ph}(r\to\infty)\,\to\,\frac{1}{r}\,\exp\left[-r\left(\frac{2m| \epsilon_{h}+\omega|}{\hbar^{2}}\right)^{\frac{1}{2}}\right]\,, \tag{178}\]
in analogy to the case of the channel functions \(g^{p_{0}h_{0}}_{ph}\),
\[g^{p_{0}h_{0}}_{ph}(r\to\infty)\,\to\,\frac{1}{r}\,\exp\left[-r\left(\frac{2m| \epsilon_{h}-\omega|}{\hbar^{2}}\right)^{\frac{1}{2}}\right]\,. \tag{179}\]
This approach solves the two technical problems of the direct approach indicated above, since the integration over \(\epsilon_{p}\) is formally done in the definition of the two channel functions \(f\) and \(g\).
These CRPA secular equations can be solved by using a procedure similar to that presented in Refs. [30, 31]. The channel functions \(f\) and \(g\) are expanded on the basis of Sturm functions \(\Phi_{p}^{\mu}\) which obey the required boundary conditions (177)-(179).
In the IPM, the particle emission process is described by considering that a particle lying on the hole state \(h_{0}\) is emitted in the particle state \(p_{0}\). The CRPA considers this fact in the elastic channel and, in addition, takes care of the fact that the residual interaction mixes this direct emission with all the other \(ph\) pairs compatible with the total angular momentum of the excitation.
## 7 Quasi-Particle RPA (QRPA)
In the derivations presented in the previous sections, we considered that the IPM ground state is defined by a unique Slater determinant \(|\Phi_{0}\rangle\), where all the s.p. states below the Fermi energy are fully occupied and those above it are completely empty. This description does not consider the presence of an effect which is very important in nuclei: the pairing. This effect couples two like fermions to form a single bosonic system. In metals, this produces superconductivity. In nuclear physics, it leads to the fact that all even-even nuclei, without exception, have spin zero.
A convenient description of pairing effects is based on the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity [32]. In this approach, the choice of \(|\Phi_{0}\rangle\) for the description of the system ground state is abandoned.
Let us consider a finite fermion system and use the expression Equation (8) for the s.p. wave functions. We introduce a notation to indicate time-reversed s.p. wave functions
\[|k\rangle:=|nl\frac{1}{2}jm\rangle\quad;\quad|-k\rangle:=|nl\frac{1}{2}j-m \rangle\,. \tag{180}\]
The BCS ground state is defined as
\[|{\rm BCS}\rangle:=\prod_{k>0}^{\infty}\left(u_{k}+v_{k}\hat{a}_{k}^{+}\hat{a }_{-k}^{+}\right)|-\rangle\,, \tag{181}\]
where we have indicated with \(|-\rangle\) the state describing the physical vacuum. The \(v_{k}^{2}\) factor is the occupation probability of the \(k\)-th s.p. state, and \(u_{k}^{2}=1-v_{k}^{2}\) the probability of being empty. When pairing effects are negligible, for example, in doubly magic nuclei, \(v_{k}=1\) for all the s.p. states below the Fermi surface and \(v_{k}=0\) for all the states above it; therefore, \(|{\rm BCS}\rangle=|\Phi_{0}\rangle\).
A convenient manner of handling the \(|{\rm BCS}\rangle\) states is to define quasi-particle creation and destruction operators which are linear combinations of usual particle creation and destruction operators. The relations are known as _Bogoliubov-Valatin transformations_
\[\hat{\alpha}_{k} = u_{k}\hat{a}_{k}-v_{k}\hat{a}_{-k}^{+}, \tag{182}\] \[\hat{\alpha}_{k}^{+} = u_{k}\hat{a}_{k}^{+}-v_{k}\hat{a}_{-k}. \tag{183}\]
Since the quasi-particle operators are linear combinations of the creation and destruction operators, anti-commutation relations analogous to (294) are valid also for the \(\hat{\alpha}\) and \(\hat{\alpha}^{+}\) operators. It is possible to show that [14]
\[\hat{\alpha}_{k}\,|{\rm BCS}\rangle=0, \tag{184}\]
indicating that the \(|{\rm BCS}\rangle\) states can be appropriately called quasi-particle vacuum. The BCS ground state is not an eigenstate of the number operator
\[\hat{N}=\sum_{k}\hat{a}_{k}^{+}\hat{a}_{k}, \tag{185}\]
and the number of particles is conserved only on average [13, 14]
\[\langle{\rm BCS}|\hat{N}|{\rm BCS}\rangle=2\sum_{k>0}v_{k}^{2}=A. \tag{186}\]
The values of the \(v_{k}\) coefficients, and consequently those of \(u_{k}\), are obtained by exploiting the variational principle. For this reason, it is common practice to use a definition of the hamiltonian containing the Lagrange multiplier \(\lambda\), related to the total number of particles \(A\)
\[\hat{\mathscr{H}}=\hat{H}-\lambda\hat{N}. \tag{187}\]
The hamiltonian \(\hat{H}\) is written by expressing in Equation (304) the \(\hat{a}\) and \(\hat{a}^{+}\) operators in terms of the quasi-particle operators \(\alpha^{+}\) and \(\alpha\). By observing the operator structure, it is possible to identify four different terms (see Eq. (13.32) of [14])
\[\hat{\mathscr{H}}=\hat{H}-\lambda\hat{N}=\hat{\mathscr{H}}_{0}+\hat{\mathscr{ H}}_{11}+\hat{\mathscr{H}}_{22}+\hat{H}_{\rm int}, \tag{188}\]
where \(\lambda\) is present only in the first three terms. The various terms are defined as follows.
1. \(\hat{\mathscr{H}}_{0}\) is purely scalar, \[\hat{\mathscr{H}}_{0}=\sum_{k}\left[(\epsilon_{k}-\lambda-\mu_{k})2v_{k}^{2}+ u_{k}v_{k}\sum_{k^{\prime}}\overline{V}_{k,k^{\prime},k,k^{\prime}}u_{k^{ \prime}}v_{k^{\prime}}\right].\] (189)
2. \(\hat{\mathscr{H}}_{11}\) depends on \(\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}\), \[\hat{\mathscr{H}}_{11}=\sum_{k}\left\{\left[\epsilon_{k}-\lambda-\mu_{k}\right] \left(u_{k}^{2}-v_{k}^{2}\right)+2u_{k}v_{k}\Delta_{k}\right\}\hat{\alpha}_{k }^{+}\hat{\alpha}_{k}.\] (190)
3. \(\hat{\mathscr{H}}_{22}\) depends on \(\hat{\mathbb{N}}[\hat{\alpha}_{k}^{+}\hat{\alpha}_{k^{\prime}}^{+}+\hat{ \alpha}_{k}\hat{\alpha}_{k^{\prime}}]\).
4. \(\hat{H}_{\rm int}=\hat{H}_{40}+\hat{H}_{31}+\hat{H}_{22}\), where \(\hat{H}_{40}\) depends on \([\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}^{+}+h.c.]\), \(\hat{H}_{31}\) depends on \([\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}+h.c.]\), and finally, \[\hat{H}_{22}=\frac{1}{2}\sum_{abcd}V_{abcd}^{(22)}\hat{\alpha}_{k_{a}}^{+}\hat{\alpha}_{k_{b}}^{+}\hat{\alpha}_{k_{d}}\hat{\alpha}_{k_{c}},\] (191) with \[V_{abcd}^{(22)}=(u_{a}u_{b}u_{c}u_{d}+v_{a}v_{b}v_{c}v_{d}+4u_{a}v_{b}u_{c}v_{d})\overline{V}_{abcd}.\] (192)
In the above equations, we used the following scalar quantities:
\[\Delta_{k} = -\sum_{k^{\prime}}\overline{V}_{k,-k,k^{\prime},-k^{\prime}}u_{k ^{\prime}}v_{k^{\prime}}, \tag{193}\] \[\epsilon_{k} = \int d^{3}r\,\phi_{k}^{*}(\mathbf{r})\left(\frac{-\hbar^{2} \nabla^{2}}{2m}\right)\phi_{k}(\mathbf{r})+\frac{1}{2}\sum_{k^{\prime}} \overline{V}_{k,k^{\prime},k,k^{\prime}}\,v_{k^{\prime}}^{2},\] (194) \[\mu_{k} = -\frac{1}{2}\sum_{k^{\prime}}\overline{V}_{k,k^{\prime},k,k^{ \prime}}\,v_{k^{\prime}}^{2}. \tag{195}\]
Because of Equation (184) the expectation value of \(\hat{\mathscr{H}}\) with respect to the BCS ground state is
\[\langle BCS|\hat{\mathscr{H}}|BCS\rangle=\langle BCS|\hat{\mathscr{H}}_{0}| BCS\rangle\equiv\mathscr{E}_{A}^{\rm BCS}; \tag{196}\]
therefore, the application of the variational principle is
\[\delta(\langle BCS|\hat{\mathscr{H}}_{0}|BCS\rangle)=0, \tag{197}\]
which implies the relation [13, 14]:
\[(u_{k}^{2}-v_{k}^{2})\Delta_{k}=2v_{k}u_{k}(\epsilon_{k}-\lambda-\mu_{k})\Rightarrow v_{k}u_{k}=\frac{\Delta_{k}}{2\sqrt{(\epsilon_{k}-\lambda-\mu_{k})^{2}+\Delta_{k}^{2}}}. \tag{198}\]
We insert the above result in Equation (190) and obtain the BCS s.p. energies
\[\hat{\mathscr{H}}_{11}=\left\{\sqrt{(\epsilon_{k}-\lambda-\mu_{k})^{2}+\Delta_ {k}^{2}}\right\}\hat{\alpha}_{k}^{+}\hat{\alpha}_{k}:=\epsilon_{k}^{\rm BCS} \hat{\alpha}_{k}^{+}\hat{\alpha}_{k}. \tag{199}\]
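For a concrete illustration, the BCS Equations (186) and (198) can be solved numerically in the simplest schematic model: a constant pairing matrix element \(G\) (so that \(\Delta_{k}=\Delta\) for all \(k\)) acting on invented, equally spaced, doubly degenerate s.p. levels, with the \(\mu_{k}\) term neglected. This is a toy model, not a realistic calculation.

```python
import numpy as np

eps = np.arange(10, dtype=float)   # MeV, invented equally spaced s.p. levels
G   = 0.4                          # MeV, constant pairing strength (invented)
lam = 4.5                          # chemical potential: half filling, fixed
                                   # here by the symmetry of this toy spectrum

def gap_residual(delta):
    """1 - (G/2) * sum_k 1/E_k; it vanishes at the self-consistent gap."""
    Ek = np.sqrt((eps - lam)**2 + delta**2)   # quasi-particle energies (199)
    return 1.0 - 0.5 * G * np.sum(1.0 / Ek)

# Bisection: the residual is negative for small delta and tends to 1
# for large delta, so it has a single zero in between.
lo, hi = 1e-6, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_residual(mid) < 0 else (lo, mid)

delta = 0.5 * (lo + hi)
Ek = np.sqrt((eps - lam)**2 + delta**2)
v2 = 0.5 * (1.0 - (eps - lam) / Ek)           # occupation probabilities
print(f"Delta = {delta:.3f} MeV, particle number = {2*np.sum(v2):.3f}")
```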
In the BCS approach, the radial expressions of the s.p. wave functions are obtained by carrying out IPM calculations and only the occupation amplitudes \(v_{k}\) and \(u_{k}\) are modified. There is a more fundamental approach, the Hartree-Fock-Bogoliubov theory, where s.p. wave functions, energies and occupation probabilities are calculated in a unique theoretical framework whose only input is the effective nucleon-nucleon interaction.
After having defined a new ground state containing pairing effects, we can use it to develop the theory describing the harmonic vibrations around it. The derivation of the QRPA secular equations is carried out by using the EOM method described in Section 3. In this case, the Slater determinant \(|\Phi_{0}\rangle\) is substituted by the BCS ground state \(|\text{BCS}\rangle\) and the particle creation and destruction operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{+}\) by the quasi-particle operators \(\hat{\alpha}_{k}\) and \(\hat{\alpha}_{k}^{+}\). The QRPA excitation operator is given by
\[Q_{\nu}^{+}\equiv\sum_{a\leq b}X_{ab}^{\nu}\hat{\alpha}_{a}^{+}\hat{\alpha}_{b }^{+}-\sum_{a\leq b}Y_{ab}^{\nu}\hat{\alpha}_{a}\hat{\alpha}_{b}\;\;. \tag{200}\]
The indexes \(a\) and \(b\), containing all the quantum numbers which identify the quasi-particle states, are no longer referred to as particle or hole states. In this approach, the idea of a Fermi surface has disappeared. Each quasi-particle state can be partially occupied. For this reason, in the above equation, we had to impose restrictions on the sums in order to avoid double counting.
In the present case, the EOM (39) assumes the expression
\[\left\langle\text{BCS}\right|\left[\delta\hat{Q}_{\nu},[\hat{\mathcal{H}},\hat {Q}_{\nu}^{+}]\right]|\text{BCS}\rangle=\omega\left\langle\text{BCS}\right| \left[\delta\hat{Q}_{\nu},\hat{Q}_{\nu}^{+}\right]|\text{BCS}\rangle\,, \tag{201}\]
where we have substituted \(\hat{O}\) with \(\delta\hat{Q}_{\nu}\). Following the steps of the derivation of RPA equations, see Section 3.2, and defining the \(A\) and \(B\) matrices as
\[A_{ab,cd} \equiv \left\langle\text{BCS}\right|\left[\hat{\alpha}_{a}\hat{\alpha}_{ b},[\hat{\mathcal{H}},\hat{\alpha}_{c}^{+}\hat{\alpha}_{d}^{+}]\right]|\text{BCS} \rangle\,, \tag{202}\] \[B_{ab,cd} \equiv -\left\langle\text{BCS}\right|\left[\hat{\alpha}_{a}\hat{\alpha} _{b},[\hat{\mathcal{H}},\hat{\alpha}_{c}\hat{\alpha}_{d}]\right]|\text{BCS} \rangle\,, \tag{203}\]
it is possible to obtain a set of linear equations analogous to those of RPA
\[\sum_{c\leq d}A_{ab,cd}X_{cd}^{\nu}+\sum_{c\leq d}B_{ab,cd}Y_{cd} ^{\nu} = \omega_{\nu}X_{ab}^{\nu}, \tag{204}\] \[\sum_{c\leq d}B_{cd,ab}^{\star}X_{cd}^{\nu}+\sum_{c\leq d}A_{cd, ab}Y_{cd}^{\nu} = -\omega_{\nu}Y_{ab}^{\nu}, \tag{205}\]
which can be written in matrix form analogously to Equation (67). This calculation is explicitly carried out in Chapter 18 of Ref. [14] and it shows that only \(\hat{\mathcal{H}}_{0}\), \(\hat{\mathcal{H}}_{11}\) and \(\hat{\mathcal{H}}_{22}\) contribute to the \(A\) and \(B\) matrices. These matrices contain, in addition to the particle-hole excitations present in the common RPA, also particle-particle and hole-hole transitions, since each s.p. state is only partially occupied. The solution of the QRPA secular equations, for each excited state, provides the \(X\) and \(Y\) amplitudes which indicate the contribution of each quasi-particle excitation pair.
The QRPA solutions have the same properties as the RPA solutions. The QRPA equations admit positive and negative eigenenergies with the same absolute value. Eigenvectors corresponding to different energy eigenvalues are orthogonal. The set of QRPA eigenstates is complete.
The transition amplitudes from the QRPA ground state to an excited state, induced by an external one-body operator \(\hat{F}\), Equation (50), is
\[\left\langle\text{QRPA};\nu|\hat{F}|\text{QRPA};0\right\rangle=\sum_{a\leq b}f_ {ab}\left(v_{a}u_{b}+u_{a}v_{b}\right)(X_{ab}^{\nu}+Y_{ab}^{\nu}). \tag{206}\]
For \(ph\) transitions only, when \(v_{a}=1,u_{a}=0\) and \(v_{b}=0,u_{b}=1\) one recovers the ordinary RPA expression (74).
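A trivial numerical check of this limit, with invented amplitudes and matrix element for a single quasi-particle pair (a sketch, not a real calculation):

```python
import numpy as np

f_ab, X_ab, Y_ab = 0.7, 0.9, 0.2   # invented toy values for one pair (a,b)

def qrpa_amplitude(v_a, u_a, v_b, u_b):
    """Single-pair version of Equation (206)."""
    return f_ab * (v_a*u_b + u_a*v_b) * (X_ab + Y_ab)

# ph limit: state a fully occupied (hole), state b completely empty
# (particle); the pairing factor reduces to 1 and one recovers the
# ordinary RPA form f_ab * (X_ab + Y_ab) of Equation (74).
print(qrpa_amplitude(v_a=1.0, u_a=0.0, v_b=0.0, u_b=1.0))

# Two mostly occupied states give a pairing factor smaller than 1,
# i.e., the transition amplitude is quenched.
v_a, v_b = 0.95, 0.9
print(qrpa_amplitude(v_a, np.sqrt(1 - v_a**2), v_b, np.sqrt(1 - v_b**2)))
```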
## 8 Specific Applications
In this section, I discuss some pragmatic issues arising in actual RPA calculations. The input of an RPA calculation is composed of the s.p. energies and wave functions and of the effective interaction between the particles forming the system. There are various possible choices of these quantities and they define different types of calculations.
A fully _phenomenological approach_ is based on the Landau-Migdal theory of finite Fermi systems [33, 34]. In this theory, the attention is concentrated on the small vibrations on top of the ground state, which is assumed to be perfectly known. The s.p. wave functions are generated by solving the MF Equation (4) with a phenomenological potential whose parameters are chosen to reproduce as well as possible the empirical values of the s.p. energies of the levels around the Fermi surface. In RPA calculations, these empirical values are used when available; otherwise, the s.p. energies obtained by solving the MF equation are considered. The interaction is a zero-range density dependent Landau-Migdal force whose parameters are selected to reproduce some empirical characteristics of the excitation spectrum.
This approach has shown a great capacity to describe features of the excited states and also a remarkable predictive power. For example, the presence of a collective monopole excitation in \({}^{208}\)Pb was predicted at the right energy [35] before it was identified by experiments with \(\alpha\)[36] and \({}^{3}\)He scattering [37]. The drawback consists in the need for a continuous tuning of the MF potential and the interaction parameters, since the results strongly depend on the input. This means that there is a set of force parameters for each nucleus, and also in the same nucleus the values of these parameters change if the dimensions of the configuration space are modified.
An approach which avoids this continuous setting of the free parameters is the so-called _self-consistent_ approach. In this case, the s.p. wave functions and energies are generated by solving HF or DFT equations. The parameters of the effective interaction are tuned to reproduce as well as possible the experimental binding energies and charge radii, all along the nuclide chart. The same interaction, unique for all the nuclei, is used also in RPA calculations.
The density dependent zero-range Skyrme force is probably the interaction most used in this type of calculation [38]. The zero-range characteristic allows great simplifications of the expressions of the interaction matrix elements and the numerical calculations are relatively fast. There are tens of different sets of parameters of the Skyrme force, each of them properly tuned to describe some specific characteristics of the nuclei. The zero-range feature of the Skyrme force is mitigated by the presence of momentum dependent terms. On the other hand, the sensitivity to the dimensions of the s.p. configuration space is not negligible. For this reason, in BCS and QRPA calculations it is necessary to use a different interaction to treat the pairing.
These drawbacks are overcome by interactions which have a finite range, a feature which clearly makes the numerical calculations much more involved. A widely used finite-range interaction is that of Gogny [39]. Despite this difference, the philosophy of the calculations carried out with the two kinds of interaction is the same: a unique force, valid all along the nuclide chart, tuned to reproduce ground state properties with HF calculations and used in RPA. Discrete RPA calculations carried out with the Gogny force show a convergence of the results once certain dimensions of the configuration space have been reached.
The self-consistent approach does not provide a description of the excited states as accurate as that obtained with the phenomenological approach. On the other hand, by using self-consistent approaches it is possible to make connections between the properties of the ground and of the excited states, and also between features appearing in different nuclei, everything described within a unique theoretical framework. This approach can make predictions on properties of nuclei far from stability where empirical quantities have not yet been measured.
As an example of an RPA result, we consider the case of the \(3^{-}\) state in \({}^{208}\)Pb already mentioned in Section 2.4. We show in Figure 6 a comparison between the transition density calculated with an RPA theory (full line), that obtained by an s.p. transition (dashed line) and the empirical transition density (dots) extracted from the inelastic electron scattering data of Ref. [40]. The s.p. excitation was obtained by considering the proton transition from the \(1s_{1/2}\) hole state to the \(2f_{7/2}\) particle state, with an excitation energy of 5.29 MeV. The RPA calculation was carried out with the phenomenological approach and yields an excitation energy of 2.66 MeV, to be compared with the experimental value of 2.63 MeV.
The s.p. transition, which is what the IPM at its best can provide, is unable to describe the large values of the transition density at the nuclear surface. This surface vibration is a characteristic feature of this highly collective state. RPA is able to reproduce the value of the excitation energy and also the correct behavior of the transition density.
## 9 Extensions of RPA
### Second RPA
The main limitation of RPA theory is due to the fact that the \(\hat{Q}_{\nu}^{+}\) operator considers only \(1p-1h\) and \(1h-1p\) types of excitations, see Equation (54). The many-body system allows more complicated excitation modes where \(n\)-particle \(n\)-hole configurations are created. The extension of \(\hat{Q}_{\nu}^{+}\) to consider also \(2p-2h\) excitations is called Second RPA (SRPA) [41, 42, 43]. In this theory, the operator which defines the excited states is
\[\hat{Q}_{\nu}^{+} \equiv \sum_{m,i}\left(X_{mi}^{\nu}\hat{a}_{m}^{+}\hat{a}_{i}-Y_{mi}^{\nu}\hat{a}_{i}^{+}\hat{a}_{m}\right) \tag{207}\] \[+ \sum_{m<n,i<j}\left(X_{mnij}^{\nu}\hat{a}_{m}^{+}\hat{a}_{n}^{+}\hat{a}_{i}\hat{a}_{j}-Y_{mnij}^{\nu}\hat{a}_{i}^{+}\hat{a}_{j}^{+}\hat{a}_{m}\hat{a}_{n}\right)+\sum_{m,n,i,j}Z_{minj}^{\nu}\hat{a}_{m}^{+}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}.\]
where the \(X\), \(Y\) and \(Z\) factors are real numbers.
We insert this operator into the EOM Equation (39) and substitute \(\hat{\Theta}\) with \(\delta\hat{Q}_{\nu}\). Since \(\delta\hat{Q}_{\nu}\) implies variations of the coefficients in (207) and these variations are independent of each other, we obtain five equations
\[\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{Q}_{\nu}^{+}]\right]\left|\mbox{RPAII}\right\rangle=\omega_{\nu}\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},\hat{Q}_{\nu}^{+}\right]\left|\mbox{RPAII}\right\rangle, \tag{208}\]
\[\left\langle\mbox{RPAII}\right|\left[\hat{a}_{m}^{+}\hat{a}_{i},[\hat{H},\hat{Q}_{\nu}^{+}]\right]\left|\mbox{RPAII}\right\rangle=\omega_{\nu}\left\langle\mbox{RPAII}\right|\left[\hat{a}_{m}^{+}\hat{a}_{i},\hat{Q}_{\nu}^{+}\right]\left|\mbox{RPAII}\right\rangle, \tag{209}\]
\[\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{m},[\hat{H},\hat{Q}_{\nu}^{+}]\right]\left|\mbox{RPAII}\right\rangle=\omega_{\nu}\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{m},\hat{Q}_{\nu}^{+}\right]\left|\mbox{RPAII}\right\rangle, \tag{210}\]
\[\left\langle\mbox{RPAII}\right|\left[\hat{a}_{m}^{+}\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{i},[\hat{H},\hat{Q}_{\nu}^{+}]\right]\left|\mbox{RPAII}\right\rangle=\omega_{\nu}\left\langle\mbox{RPAII}\right|\left[\hat{a}_{m}^{+}\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{i},\hat{Q}_{\nu}^{+}\right]\left|\mbox{RPAII}\right\rangle, \tag{211}\]
\[\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{m},[\hat{H},\hat{Q}_{\nu}^{+}]\right]\left|\mbox{RPAII}\right\rangle=\omega_{\nu}\left\langle\mbox{RPAII}\right|\left[\hat{a}_{i}^{+}\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{m},\hat{Q}_{\nu}^{+}\right]\left|\mbox{RPAII}\right\rangle, \tag{212}\]
Figure 6: Electron scattering transition densities for the first \(3^{-}\) excited state of \({}^{208}\)Pb. The empirical density, indicated by the dots, is extracted from the inelastic electron scattering data of Ref. [40]. The dashed line shows the transition density calculated in the IPM, where the state is described by the s.p. proton transition from the \(1s_{1/2}\) hole state to the \(2f_{7/2}\) particle state. The full line shows the RPA result.
where \(\left|\text{RPAII}\right\rangle\) is the SRPA ground state defined by the equation
\[Q_{\nu}\left|\text{RPAII}\right\rangle=0. \tag{213}\]
In analogy to what is presented in Section 3.2.2, we use the QBA by assuming
\[\left\langle\text{RPAII}\right|\left[\hat{\Theta}_{1}\,,\,\hat{\Theta}_{2} \right]\left|\text{RPAII}\right\rangle\simeq\left\langle\Phi_{0}\right|\left[ \hat{\Theta}_{1}\,,\,\hat{\Theta}_{2}\right]\left|\Phi_{0}\right\rangle, \tag{214}\]
where \(\hat{\Theta}_{1}\) and \(\hat{\Theta}_{2}\) are two generic operators and \(\left|\Phi_{0}\right\rangle\) indicates, as usual, the IPM ground state. It is convenient to define the following matrix elements
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}]\right]\left|\Phi_{0}\right\rangle = A_{mi,nj}, \tag{215}\]
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{j}^{+}\hat{a}_{n}]\right]\left|\Phi_{0}\right\rangle = -B_{mi,nj}, \tag{216}\]
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{n}^{+}\hat{a}_{p}^{+}\hat{a}_{l}\hat{a}_{j}]\right]\left|\Phi_{0}\right\rangle = A_{mi,npjl}, \tag{217}\]
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{j}^{+}\hat{a}_{l}^{+}\hat{a}_{p}\hat{a}_{n}]\right]\left|\Phi_{0}\right\rangle = -B_{mi,npjl}, \tag{218}\]
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{m},[\hat{H},\hat{a}_{p}^{+}\hat{a}_{q}^{+}\hat{a}_{l}\hat{a}_{k}]\right]\left|\Phi_{0}\right\rangle = A_{mnij,pqkl}, \tag{219}\]
\[\left\langle\Phi_{0}\right|\left[\hat{a}_{i}^{+}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{m},[\hat{H},\hat{a}_{k}^{+}\hat{a}_{l}^{+}\hat{a}_{q}\hat{a}_{p}]\right]\left|\Phi_{0}\right\rangle = -B_{mnij,pqkl}. \tag{220}\]
The \(A_{mi,nj}\) and \(B_{mi,nj}\) matrix elements are identical to those defined in Equation (58). Explicit expressions of the other matrix elements can be found in Ref. [41]. With the help of these definitions, Equations (208-212) can be expressed as:
\[\sum_{pk}\left(A_{mi,pk}X_{pk}^{\nu}+B_{mi,pk}Y_{pk}^{\nu}\right)+\sum_{p<q,k<l}A_{mi,pqkl}X_{pqkl}^{\nu}=\omega_{\nu}X_{mi}^{\nu}, \tag{221}\]
\[\sum_{pk}\left(B_{mi,pk}^{+}X_{pk}^{\nu}+A_{mi,pk}^{+}Y_{pk}^{\nu}\right)+\sum_{p<q,k<l}A_{mi,pqkl}^{+}Y_{pqkl}^{\nu}=-\omega_{\nu}Y_{mi}^{\nu}, \tag{222}\]
\[\sum_{pk}A_{mnij,pk}X_{pk}^{\nu}+\sum_{p<q,k<l}A_{mnij,pqkl}X_{pqkl}^{\nu}=\omega_{\nu}X_{mnij}^{\nu}, \tag{223}\]
\[\sum_{pk}A_{mnij,pk}^{+}Y_{pk}^{\nu}+\sum_{p<q,k<l}A_{mnij,pqkl}^{+}Y_{pqkl}^{\nu}=-\omega_{\nu}Y_{mnij}^{\nu}, \tag{224}\]
where it appears evident that the \(Z\) terms of Equation (207) do not contribute.
The above equations form the complete set of SRPA secular equations. Usually, one does not search for the whole solution of these equations, but one considers only the unknowns \(X_{mi}^{\nu}\) and \(Y_{mi}^{\nu}\). This is done by formally extracting \(X_{mnij}^{\nu}\) and \(Y_{mnij}^{\nu}\) from Equations (223) and (224), respectively, and by inserting the obtained expressions into Equations (221) and (222). In this way, we obtain two equations where the only unknowns are \(X_{mi}^{\nu}\) and \(Y_{mi}^{\nu}\)
\[\sum_{pk}\left[A_{mi,pk}-\sum_{p_{1}<q_{1},k_{1}<l_{1}}\sum_{p_{2}<q_{2},k_{2}<l_{2}}A_{mi,p_{1}q_{1}k_{1}l_{1}}\left(A_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}-\omega_{\nu}\delta_{p_{1},p_{2}}\delta_{q_{1},q_{2}}\delta_{k_{1},k_{2}}\delta_{l_{1},l_{2}}\right)^{-1}A_{p_{2}q_{2}k_{2}l_{2},pk}\right]X_{pk}^{\nu}+\sum_{pk}B_{mi,pk}Y_{pk}^{\nu}=\omega_{\nu}X_{mi}^{\nu}, \tag{225}\]
\[\sum_{pk}\left[A_{mi,pk}^{+}-\sum_{p_{1}<q_{1},k_{1}<l_{1}}\sum_{p_{2}<q_{2},k_{2}<l_{2}}A_{mi,p_{1}q_{1}k_{1}l_{1}}^{+}\left(A_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}^{+}-\omega_{\nu}\delta_{p_{1},p_{2}}\delta_{q_{1},q_{2}}\delta_{k_{1},k_{2}}\delta_{l_{1},l_{2}}\right)^{-1}A_{p_{2}q_{2}k_{2}l_{2},pk}^{+}\right]Y_{pk}^{\nu}+\sum_{pk}B_{mi,pk}^{+}X_{pk}^{\nu}=-\omega_{\nu}Y_{mi}^{\nu}, \tag{226}\]
which, in matrix form, can be written in analogy to Equation (67) as
\[\left(\begin{array}{cc}\mathcal{A}&\mathcal{B}\\ \mathcal{B}^{+}&\mathcal{A}^{+}\end{array}\right)\left(\begin{array}{c}X^{ \nu}\\ Y^{\nu}\end{array}\right)=\omega_{\nu}\left(\begin{array}{c}X^{\nu}\\ -Y^{\nu}\end{array}\right). \tag{227}\]
The second terms in the square brackets of Equations (225) and (226) couple the \(1p-1h\) excitations to the \(2p-2h\) ones. If these terms are zero, the RPA equations are recovered. The secular SRPA equations have the same properties as the RPA equations; a numerical check of these properties on a toy model is sketched after the list below.
1. Positive and negative energy eigenvalues with the same absolute value are allowed.
2. Eigenvectors of different eigenvalues are orthogonal.
3. The normalization between the excited states implies \[\sum_{mi}\left(X^{\nu}_{mi}X^{\nu^{\prime}}_{mi}-Y^{\nu}_{mi}Y^{\nu^{\prime}}_ {mi}\right)=\delta_{\nu,\nu^{\prime}}.\] (228)
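These properties can be verified numerically. The following is a minimal sketch, assuming toy matrices: the \(A\) and \(B\) blocks are random symmetric matrices chosen only so that the problem is stable, not matrix elements of a real interaction, and the eigenproblem is solved in the equivalent non-Hermitian form with the metric sign absorbed into the lower blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of 1p-1h pairs in the toy model

# Toy symmetric A (diagonally dominant, hence stable) and small B blocks.
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T) + 5.0 * np.eye(n)
B = rng.normal(size=(n, n)); B = 0.05 * (B + B.T)

# (A B; B A)(X;Y) = w (X;-Y) is equivalent to (A B; -B -A)(X;Y) = w (X;Y).
M = np.block([[A, B], [-B, -A]])
w, v = np.linalg.eig(M)

# Property 1: eigenvalues come in pairs +w, -w.
ws = np.sort(w.real)
assert np.allclose(ws, -ws[::-1])

# Property 3: normalize a positive-energy solution so that X.X - Y.Y = 1.
idx = np.argmax(w.real)
X, Y = v[:n, idx].real, v[n:, idx].real
scale = np.sqrt(abs(X @ X - Y @ Y))
X, Y = X / scale, Y / scale
print("omega =", w[idx].real, "  X.X - Y.Y =", X @ X - Y @ Y)
```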
The number of terms in the \(A^{+}_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}\) and \(A_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}\) matrix elements is quite large; for this reason, the so-called diagonal approximation is often used. This approximation consists in retaining in \(A_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}\) only the diagonal part, which depends on the s.p. energies involved in the \(2p-2h\) excitations
\[A^{+}_{p_{1}q_{1}k_{1}l_{1},p_{2}q_{2}k_{2}l_{2}}\longrightarrow(\epsilon_{p_ {1}}+\epsilon_{q_{1}}-\epsilon_{k_{1}}-\epsilon_{l_{1}})\delta_{p_{1},p_{2}} \delta_{q_{1},q_{2}}\delta_{k_{1},k_{2}}\delta_{l_{1},l_{2}}. \tag{229}\]
The expression of the transition amplitude between the SRPA ground state and excited states can be calculated as indicated in Section 3.2.4 and the same result, Equation (74), is obtained. In this theoretical framework, the SRPA approach modifies the values of the \(X\) and \(Y\) RPA amplitudes by coupling them to the \(2p-2h\) excitation space.
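The reduction leading to Equations (225) and (226), together with the diagonal approximation (229), can be illustrated numerically. The sketch below is only a schematic example: the \(1p-1h\) energies, the coupling block and the \(2p-2h\) energies are invented numbers, the \(B\) matrix is neglected (a TDA-level simplification), and the nonlinear dependence on \(\omega_{\nu}\) is handled by a crude scan.

```python
import numpy as np

# Toy model: 3 one-particle-one-hole pairs coupled to 5 two-particle-two-hole
# configurations. All numbers are invented for illustration only.
A11 = np.diag([10.0, 12.0, 15.0])               # unperturbed 1p-1h energies (MeV)
A12 = 0.8 * np.ones((3, 5))                     # 1p1h <-> 2p2h coupling block
e2 = np.array([20.0, 22.0, 25.0, 27.0, 30.0])   # diagonal 2p-2h energies, Eq. (229)

def effective_A(omega):
    """Bracket of Eq. (225): A11 - A12 (A22 - omega)^(-1) A21, A22 diagonal."""
    green = 1.0 / (e2 - omega)      # (A22 - omega)^(-1) is trivially inverted
    return A11 - A12 @ (green[:, None] * A12.T)

# An eigenvalue omega of effective_A(omega) solves the folded equations:
# look for sign changes of det(A_eff(w) - w I) below the 2p-2h energies.
omegas = np.linspace(5.0, 19.0, 4000)
dets = [np.linalg.det(effective_A(w) - w * np.eye(3)) for w in omegas]
roots = [0.5 * (omegas[i] + omegas[i + 1])
         for i in range(len(dets) - 1) if dets[i] * dets[i + 1] < 0]
print("folded (TDA-level) eigenvalues:", np.round(roots, 3))
```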
### Particle-Vibration Coupling RPA
The approach presented in the previous section is general but rather difficult to implement because of the large number of \(2p-2h\) pairs to consider. Many of the \(2p-2h\) matrix elements are relatively small with respect to the \(1p-1h\) terms. Instead of evaluating many irrelevant matrix elements, it is more convenient to identify the important ones and calculate only them.
This is the basic idea of Particle-Vibration Coupling RPA (PVCRPA) [44], also called Core Coupling RPA (CCRPA), where RPA excited states are coupled to s.p. states. In this approach, the excited states have the expression
\[\left|\mathcal{R}\right\rangle=\sum_{\nu}\sum_{ph}\left|ph\right\rangle\otimes \left|\nu\right\rangle, \tag{230}\]
where \(\left|\nu\right\rangle\) is an RPA excited state, \(\left|ph\right\rangle\) a particle-hole excitation pair and \(\otimes\) indicates a tensor coupling.
We define a set of operators which project the eigenstate \(\left|\Psi\right\rangle\) of the hamiltonian onto the IPM ground state \(\left|\Phi_{0}\right\rangle\), onto the RPA states \(\left|\nu\right\rangle\) (composed of \(1p-1h\) excitations) and onto the particle-vibration coupled states \(\left|\mathcal{R}\right\rangle\) (composed of \(2p-2h\) excited pairs):
\[\hat{P}\left|\Psi\right\rangle = \left|\Phi_{0}\right\rangle \tag{231}\] \[\hat{Q}_{1}\left|\Psi\right\rangle = \left|\nu\right\rangle\] (232) \[\hat{Q}_{2}\left|\Psi\right\rangle = \left|\mathcal{R}\right\rangle \tag{233}\]
These operators have the properties
\[\hat{P}^{2}=\hat{P};\ \ \hat{Q}_{1}^{2}=\hat{Q}_{1};\ \ \hat{Q}_{2}^{2}=\hat{Q}_{2}; \tag{234}\] \[\hat{P}\hat{Q}_{1}=\hat{P}\hat{Q}_{2}=\hat{Q}_{1}\hat{Q}_{2}=0;\] (235) \[\hat{P}+\hat{Q}_{1}+\hat{Q}_{2}=\hat{\mathbb{I}}. \tag{236}\]
The latter property implies that \(\left|\Psi\right\rangle\) does not contain excitations more complex than \(2p-2h\); this automatically neglects some terms of the many-body hamiltonian.
We can write the eigenvalue equation as
\[\hat{H}\left|\Psi\right\rangle=\hat{H}(\hat{P}+\hat{Q}_{1}+\hat{Q}_{2})\left| \Psi\right\rangle=\omega(\hat{P}+\hat{Q}_{1}+\hat{Q}_{2})\left|\Psi\right\rangle. \tag{237}\]
We multiply both sides of the above equation, respectively, by \(\hat{P}\), \(\hat{Q}_{1}\) and \(\hat{Q}_{2}\), and, by using the properties (234)-(236), we obtain the following equations
\[(\omega-\hat{P}\hat{H}\hat{P})\hat{P}\left|\Psi\right\rangle = \hat{P}\hat{H}\hat{Q}_{1}\left|\Psi\right\rangle+\hat{P}\hat{H} \hat{Q}_{2}\left|\Psi\right\rangle \tag{238}\] \[(\omega-\hat{Q}_{1}\hat{H}\hat{Q}_{1})\hat{Q}_{1}\left|\Psi\right\rangle = \hat{Q}_{1}\hat{H}\hat{P}\left|\Psi\right\rangle+\hat{Q}_{1}\hat{H }\hat{Q}_{2}\left|\Psi\right\rangle\] (239) \[(\omega-\hat{Q}_{2}\hat{H}\hat{Q}_{2})\hat{Q}_{2}\left|\Psi\right\rangle = \hat{Q}_{2}\hat{H}\hat{P}\left|\Psi\right\rangle+\hat{Q}_{2}\hat{H }\hat{Q}_{1}\left|\Psi\right\rangle. \tag{240}\]
We formally obtain \(\hat{P}\left|\Psi\right\rangle\) from Equation (238) and \(\hat{Q}_{2}\left|\Psi\right\rangle\) from Equation (240), and we insert them into Equation (239). This allows us to express the latter equation as:
\[(\omega-\hat{Q}_{1}\hat{H}\hat{Q}_{1})\hat{Q}_{1}\left|\Psi\right\rangle = \hat{Q}_{1}\hat{H}\hat{P}\frac{1}{\omega-\hat{P}\hat{H}\hat{P}+i \eta}\hat{P}\hat{H}\hat{Q}_{1}\left|\Psi\right\rangle \tag{241}\] \[+ \hat{Q}_{1}\hat{H}\hat{P}\frac{1}{\omega-\hat{P}\hat{H}\hat{P}+i \eta}\hat{P}\hat{H}\hat{Q}_{2}\left|\Psi\right\rangle\] \[+ \hat{Q}_{1}\hat{H}\hat{Q}_{2}\frac{1}{\omega-\hat{Q}_{2}\hat{H} \hat{Q}_{2}+i\eta}\hat{Q}_{2}\hat{H}\hat{P}\left|\Psi\right\rangle\] \[+ \hat{Q}_{1}\hat{H}\hat{Q}_{2}\frac{1}{\omega-\hat{Q}_{2}\hat{H} \hat{Q}_{2}+i\eta}\hat{Q}_{2}\hat{H}\hat{Q}_{1}\left|\Psi\right\rangle,\]
where we inserted in the denominator a term \(i\eta\) to avoid divergences. In the two terms containing \(\hat{P}\left|\Psi\right\rangle\) and \(\hat{Q}_{2}\left|\Psi\right\rangle\), we could insert again the results of Equation (238) and Equation (240) and we obtain terms with many denominator factors. We neglect these terms and obtain an eigenvalue equation of the form [45]
\[\hat{\mathcal{H}}(\omega)\hat{Q}_{1}\left|\Psi^{\mathcal{R}} \right\rangle_{N} = (\Omega_{N}-i\Gamma_{N})\hat{Q}_{1}\left|\Psi^{\mathcal{R}}\right\rangle_{N} \tag{242}\] \[=\Bigg{[}\hat{Q}_{1}\hat{H}\hat{Q}_{1} + \hat{Q}_{1}\hat{H}\hat{P}\frac{1}{\omega-\hat{P}\hat{H}\hat{P}+i \eta}\hat{P}\hat{H}\hat{Q}_{1}\] \[+ \hat{Q}_{1}\hat{H}\hat{Q}_{2}\frac{1}{\omega-\hat{Q}_{2}\hat{H} \hat{Q}_{2}+i\eta}\hat{Q}_{2}\hat{H}\hat{Q}_{1}\Bigg{]}\hat{Q}_{1}\left|\Psi^ {\mathcal{R}}\right\rangle_{N},\]
where we distinguished the energy \(\omega\) characterizing the effective hamiltonian \(\hat{\mathcal{H}}\) from the energy eigenvalue which can be complex because of the imaginary parts inserted in the denominators.
Since the Hilbert subspace spanned by the \(\hat{Q}_{1}\left|\Psi^{\mathcal{R}}\right\rangle\) states is composed by \(1p-1h\) components only, we can expand each \(\hat{Q}_{1}\left|\Psi^{\mathcal{R}}\right\rangle\) state in terms of RPA eigenstates \(\left|\nu\right\rangle\) which form a basis
\[\hat{Q}_{1}\left|\Psi^{\mathcal{R}}\right\rangle_{N}=\sum_{\nu}\mathcal{S}_{ \nu}^{N}\left|\nu\right\rangle, \tag{243}\]
and write the eigenvalue Equation (242) in a matrix form
\[\sum_{\nu^{\prime}}\left\langle\nu|\hat{\mathcal{H}}(\omega)|\nu^{\prime} \right\rangle\mathcal{S}_{\nu^{\prime}}^{N}=(\Omega_{N}-i\Gamma_{N})\mathcal{S }_{\nu}^{N}. \tag{244}\]
The solution of the above eigenvalue problem provides the values of the \(\mathcal{S}^{N}_{\nu}\) coefficients. The probability of a transition from the ground state \(\left|\Psi^{\mathcal{R}}\right\rangle_{0}\) to an excited state \(\left|\Psi^{\mathcal{R}}\right\rangle_{N}\) induced by a one-body operator \(\hat{F}\) is given by:

\[{}_{N}\langle\Psi^{\mathcal{R}}|\,\hat{Q}_{1}^{+}\hat{F}\hat{Q}_{1}\left|\Psi^{\mathcal{R}}\right\rangle_{0}=\sum_{\nu}\mathcal{S}^{N}_{\nu}\left\langle\nu|\hat{F}|\nu_{0}\right\rangle=\sum_{\nu}\mathcal{S}^{N}_{\nu}\sum_{mi}\left(X^{\nu}_{mi}f_{mi}+Y^{\nu}_{mi}f_{im}\right), \tag{245}\]
where we used the result of Equation (74).
In this approach, one first has to solve the RPA equations for the various multipoles which enter the sums over \(\nu\). The choice of the RPA solutions to be included is an input of the method, based on plausible physical hypotheses.
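To make the last steps concrete, here is a minimal numerical sketch of Equation (244). Everything in it is invented for illustration (three phonon energies, a single representative \(2p-2h\) doorway state and its couplings); it only shows how the complex eigenvalues of the effective hamiltonian yield centroids \(\Omega_{N}\) and widths \(\Gamma_{N}\).

```python
import numpy as np

# Toy RPA basis: three phonon energies (MeV) spanning the expansion (243).
e_rpa = np.array([10.0, 12.0, 14.0])
eta = 0.5      # imaginary smearing from the i*eta terms of Eq. (242)
omega = 12.0   # energy at which the effective hamiltonian is evaluated

# Invented couplings of the phonons through a single representative 2p-2h
# doorway state at e_2p2h; real calculations use the operators of Eq. (242).
g = np.array([0.9, 1.2, 0.7])
e_2p2h = 13.0

H_eff = np.diag(e_rpa).astype(complex)
H_eff += np.outer(g, g) / (omega - e_2p2h + 1j * eta)

# Complex eigenvalues Omega_N - i Gamma_N of Eq. (244):
vals = np.linalg.eigvals(H_eff)
for v in sorted(vals, key=lambda z: z.real):
    print(f"Omega = {v.real:6.2f} MeV,  Gamma = {-v.imag:5.2f} MeV")
```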
### Renormalized RPA
The extensions of RPA theory presented in Sections 9.1 and 9.2 aimed at including excitation modes more complicated than \(1p-1h\). The renormalized RPA (r-RPA) attacks another weak point of RPA theory: the QBA (59). This approximation forces pairs of fermionic operators to behave as if they were bosonic operators. For this reason, in the literature, the QBA is associated with the statement that RPA violates the Pauli principle. The r-RPA theory avoids the use of the QBA.
As in the ordinary RPA, we indicate with \(\left|\nu_{0}\right\rangle\) the ground state of the system and with \(\left|\nu\right\rangle\) the excited state, which is a combination of \(1p-1h\) and \(1h-1p\) excitations. We consider a \(\hat{Q}_{\nu}^{+}\) operator whose action is
\[\left|\nu\right\rangle=\hat{Q}_{\nu}^{+}\left|\nu_{0}\right\rangle\equiv\sum_{ph}\left(X^{\nu}_{ph}\hat{\mathscr{B}}^{+}_{ph}-Y^{\nu}_{ph}\hat{\mathscr{B}}_{ph}\right)\left|\nu_{0}\right\rangle, \tag{246}\]
where the renormalized \(p-h\) operator is
\[\hat{\mathscr{B}}^{+}_{ph}\equiv\sum_{p^{\prime}h^{\prime}}N_{ph,p^{\prime}h^{\prime}}\hat{a}^{+}_{p^{\prime}}\hat{a}_{h^{\prime}}, \tag{247}\]
and \(N_{ph,p^{\prime}h^{\prime}}\) is a number. The EOM method implies that the correlated ground state satisfies the equation
\[\hat{Q}_{\nu}\left|\nu_{0}\right\rangle=0.\]
By using the anti-commutation relations (294) of the creation and destruction operators, we express the orthonormality condition relating the excited states as
\[\delta_{\nu\nu^{\prime}}=\left\langle\nu|\nu^{\prime}\right\rangle=\left\langle\nu_{0}\right|\left[\hat{Q}_{\nu^{\prime}},\hat{Q}_{\nu}^{+}\right]\left|\nu_{0}\right\rangle \tag{248}\] \[= \sum_{ph,p^{\prime}h^{\prime}}\left(X^{\nu^{\prime}}_{p^{\prime}h^{\prime}}X^{\nu}_{ph}\left\langle\nu_{0}\right|\left[\hat{\mathscr{B}}_{p^{\prime}h^{\prime}},\hat{\mathscr{B}}^{+}_{ph}\right]\left|\nu_{0}\right\rangle+Y^{\nu^{\prime}}_{p^{\prime}h^{\prime}}Y^{\nu}_{ph}\left\langle\nu_{0}\right|\left[\hat{\mathscr{B}}^{+}_{p^{\prime}h^{\prime}},\hat{\mathscr{B}}_{ph}\right]\left|\nu_{0}\right\rangle\right)\] \[= \sum_{ph,p^{\prime}h^{\prime}}\left(X^{\nu^{\prime}}_{p^{\prime}h^{\prime}}X^{\nu}_{ph}-Y^{\nu^{\prime}}_{p^{\prime}h^{\prime}}Y^{\nu}_{ph}\right)\sum_{mi,nj}N_{p^{\prime}h^{\prime},mi}N_{ph,nj}\left(\delta_{mn}\left\langle\nu_{0}|\hat{a}^{+}_{j}\hat{a}_{i}|\nu_{0}\right\rangle-\delta_{ij}\left\langle\nu_{0}|\hat{a}^{+}_{n}\hat{a}_{m}|\nu_{0}\right\rangle\right).\]
The above expression is simplified if we use the s.p. basis formed by the natural orbits. By definition, this is the basis where the one-body density matrix is diagonal [46]
\[\langle\nu_{0}|\hat{a}^{+}_{\alpha}\hat{a}_{\beta}|\nu_{0}\rangle=n_{\alpha} \delta_{\alpha\beta}. \tag{249}\]
If we assume that
\[N_{ph,p^{\prime}h^{\prime}}=\frac{\delta_{pp^{\prime}}\delta_{h,h^{\prime}}}{ \sqrt{n_{h}-n_{p}}}, \tag{250}\]
we obtain
\[\sum_{ph}\left(X^{\nu^{\prime}}_{ph}X^{\nu}_{ph}-Y^{\nu^{\prime}}_{ph}Y^{\nu}_{ph}\right)=\delta_{\nu\nu^{\prime}}, \tag{251}\]
which is an expression analogous to that of the standard RPA, Equation (71). It is worth remarking that the indexes \(p\) and \(h\) no longer refer to s.p. states which, in the ground state, are fully occupied or completely empty. The natural orbit s.p. states are partially occupied with probability \(n_{\alpha}\); therefore, all the indexes in the sums of the above equations run over the full configuration space. To avoid double counting, we assume that the \(i,j,k,l\) indexes indicate natural orbits with energies smaller than those of the states labelled with the \(m,n,p,q\) indexes.
We proceed by using the EOM approach analogously to what was indicated in Section 3.2.2 where, now, the \(\hat{a}^{+}_{p}\hat{a}_{h}\) operators are substituted by \(\hat{\mathscr{B}}^{+}_{ph}\), and we define the following matrix elements
\[\mathbb{A}_{php^{\prime}h^{\prime}}\equiv\left\langle\nu_{0}\right|\left[\hat {\mathscr{B}}_{ph},[\hat{H},\hat{\mathscr{B}}^{+}_{p^{\prime}h^{\prime}}] \right]\left|\nu_{0}\right\rangle, \tag{252}\]
and
\[\mathbb{B}_{php^{\prime}h^{\prime}}\equiv-\left\langle\nu_{0}\right|\left[ \hat{\mathscr{B}}_{ph},[\hat{H},\hat{\mathscr{B}}_{p^{\prime}h^{\prime}}] \right]\left|\nu_{0}\right\rangle. \tag{253}\]
We obtain a set of equations analogous to those of the usual RPA
\[\left(\begin{array}{cc}\mathbb{A}&\mathbb{B}\\ \mathbb{B}^{*}&\mathbb{A}^{*}\end{array}\right)\left(\begin{array}{c}X^{ \nu}\\ Y^{\nu}\end{array}\right)=\omega_{\nu}\left(\begin{array}{c}X^{\nu}\\ \\ -Y^{\nu}\end{array}\right). \tag{254}\]
The evaluation of the \(\mathbb{A}\) and \(\mathbb{B}\) matrix elements is carried out by using the expressions of the \(\hat{\mathscr{B}}\) operators in terms of \(\hat{Q}\) operators
\[\hat{\mathscr{B}}^{+}_{ph} = \sum_{\nu}\left(X^{\nu*}_{ph}\hat{Q}^{+}_{\nu}+Y^{\nu}_{ph}\hat{Q }_{\nu}\right), \tag{255}\] \[\hat{\mathscr{B}}_{ph} = \sum_{\nu}\left(X^{\nu}_{ph}\hat{Q}_{\nu}+Y^{\nu*}_{ph}\hat{Q}^{+ }_{\nu}\right), \tag{256}\]
and we obtain
\[\mathbb{A}_{php^{\prime}h^{\prime}} = \frac{1}{2}\left(\sqrt{\frac{n_{h}-n_{p}}{n_{h^{\prime}}-n_{p^{\prime}}}}+\sqrt{\frac{n_{h^{\prime}}-n_{p^{\prime}}}{n_{h}-n_{p}}}\right)\left(\tilde{\epsilon}_{pp^{\prime}}\delta_{hh^{\prime}}-\tilde{\epsilon}_{hh^{\prime}}\delta_{pp^{\prime}}\right) \tag{257}\] \[+\sqrt{(n_{h}-n_{p})(n_{h^{\prime}}-n_{p^{\prime}})}\,\overline{V}_{ph^{\prime}hp^{\prime}},\]
and
\[\mathbb{B}_{php^{\prime}h^{\prime}}=\sqrt{(n_{h}-n_{p})(n_{h^{\prime}}-n_{p^{ \prime}})}\overline{V}_{hh^{\prime}pp^{\prime}}. \tag{258}\]
In the above expressions, we used the natural orbit energies defined as
\[\tilde{\epsilon}_{\alpha\alpha^{\prime}}=\langle\alpha|\frac{-\hbar^{2}\nabla ^{2}}{2m}|\alpha^{\prime}\rangle+\sum_{\beta}n_{\beta}\overline{V}_{\alpha \beta\alpha^{\prime}\beta}. \tag{259}\]
The key point of the r-RPA consists in expressing the occupation probabilities \(n_{\alpha}\) in terms of \(X\) and \(Y\) amplitudes. In Ref. [47], by using a method which iterates the anti-commutation relations of the creation and destruction operators, it is shown that the expressions of these occupation probabilities up to the fourth order in \(Y\) are
\[n_{h}\simeq 1-\sum_{p}\sum_{\nu\nu^{\prime}}\Delta^{\nu\nu^{\prime}}_{ph}\ \ ;\ \ n_{p}\simeq\sum_{h}\sum_{\nu\nu^{\prime}}\Delta^{\nu\nu^{\prime}}_{ph}, \tag{260}\]
with
\[\Delta^{\nu\nu^{\prime}}_{ph}=\left(\delta_{\nu\nu^{\prime}}-\frac{1}{2}\sum _{p^{\prime}h^{\prime}}(n_{h^{\prime}}-n_{p^{\prime}})X^{\nu^{\prime}}_{p^{ \prime}h^{\prime}}X^{\nu*}_{p^{\prime}h^{\prime}}\right)(n_{h}-n_{p})Y^{\nu}_{ ph}{Y^{\nu^{\prime}}_{ph}}^{*}. \tag{261}\]
This result inserted in Equations (257) and (258) generates expressions of the \(\mathbb{A}\) and \(\mathbb{B}\) matrix elements in terms of \(X\) and \(Y\) amplitudes; therefore, Equation (254) becomes a system of nonlinear equations in the latter unknowns. This is solved by using an iterative procedure. Starting from some initial guess for the \(X\) and \(Y\) amplitudes, obtained, for example, by solving the standard RPA equations, one calculates the \(\mathbb{A}\) and \(\mathbb{B}\) matrix elements. The solution of Equation (254) provides new values of the \(X\) and \(Y\) amplitudes. The procedure is repeated until convergence is reached. A review of recent applications of the r-RPA theory is presented in Ref. [6].
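The iterative scheme can be sketched on a schematic two-level model, with one \(ph\) pair and one mode, so that Equations (257), (258), (260) and (261) reduce to scalar relations. The interaction numbers below are invented; the sketch only illustrates the structure of the self-consistent loop.

```python
import numpy as np

# Schematic two-level toy: one ph pair and one mode, so Eqs. (257)-(261)
# reduce to scalars. eps, VA, VB are invented numbers.
eps, VA, VB = 5.0, 0.5, 1.5

n_h, n_p = 1.0, 0.0                       # standard-RPA starting occupations
for it in range(100):
    d = n_h - n_p                         # occupation factor n_h - n_p
    A = eps + d * VA                      # Eq. (257) reduced to one pair
    B = d * VB                            # Eq. (258) reduced to one pair
    omega = np.sqrt(A**2 - B**2)          # 1x1 RPA eigenvalue
    r = ((A - omega) / B) ** 2            # from A X + B Y = omega X
    X2, Y2 = 1.0 / (1.0 - r), r / (1.0 - r)   # normalization X^2 - Y^2 = 1
    # Occupation probabilities, Eqs. (260)-(261) for a single mode and pair:
    delta = (1.0 - 0.5 * d * X2) * d * Y2
    n_h_new, n_p_new = 1.0 - delta, delta
    if abs(n_h_new - n_h) < 1e-12:
        break
    n_h, n_p = n_h_new, n_p_new

print(f"after {it} iterations: omega = {omega:.4f}, "
      f"n_h = {n_h:.4f}, n_p = {n_p:.4f}")
```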
## 10 Correlated RPA
Interactions built to describe systems composed of two particles are called _microscopic_. These interactions show similar features independently of the particles considered: nucleons, atoms or molecules. They are short ranged, meaning that they vanish beyond a certain inter-particle distance. They have an attractive pocket at intermediate distances and a strongly repulsive core at short inter-particle distances. This latter feature inhibits the use of microscopic interactions in theories based on a perturbative expansion, such as RPA. The derivation of RPA within the Green function formalism clearly shows that RPA is the first term of a perturbative expansion of the two-body Green function. The presence of the strongly repulsive core would produce extremely large values of the interaction matrix elements with respect to the energy eigenvalues. This is because the s.p. wave functions obtained in the IPM would allow the particles to get too close to each other. The traditional RPA requires the use of effective interactions, i.e., interactions which do not contain a short-range repulsion.
Microscopic many-body theories aim to describe many-particle systems by using microscopic interactions. One method of handling the problem of short range repulsion is to use a correlation function which modifies the IPM wave functions in such a way that two particles do not get too close to each other. This is the basic idea of the Correlated Basis Function theory [48, 49, 46]. The ansatz is that the ground state of the interacting particle system can be expressed as
\[\left|\Psi_{0}\right\rangle=\frac{F\left|\Phi_{0}\right\rangle}{\langle\Phi_ {0}|F^{+}F|\Phi_{0}\rangle^{\frac{1}{2}}}, \tag{262}\]
where \(\left|\Phi_{0}\right\rangle\) is the IPM Slater determinant and \(F\) is a correlation function. These two elements of the state are determined by minimizing the energy functional
\[\delta E[\Psi_{0}]=\delta\left[\frac{\langle\Phi_{0}|F^{+}\hat{H}F|\Phi_{0} \rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}\right]=0, \tag{263}\]
where the hamiltonian \(\hat{H}\) contains the microscopic interaction. The usual ansatz on the expression of the correlation function \(F\) is
\[F({\bf r}_{1},\cdots,{\bf r}_{A})=\prod_{i<j}^{A}f(r_{ij}), \tag{264}\]
where \(f\) is a two-body correlation function depending only on the distance \(r_{ij}\) between the two interacting fermions. The need to keep the product of the interaction \(\hat{V}\) and the wave function \(\left|\Psi\right\rangle\) finite requires that \(f\) be almost zero for small values of \(r_{ij}\) and rapidly approach 1 when the distance becomes larger than the range of the short-range repulsive core. The minimization of the energy functional is carried out by changing the parameters of \(f\) and also the set of s.p. wave functions forming \(\left|\Phi_{0}\right\rangle\).
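As a concrete illustration of the behaviour required of \(f\), the Gaussian parametrization below is one possible, hypothetical, choice; the healing distance \(b\) is an invented number.

```python
import numpy as np

def f_corr(r, b=0.7):
    """Toy Gaussian correlation function: ~0 inside the repulsive core,
    healing to 1 for r >> b (b in fm, an invented value)."""
    return 1.0 - np.exp(-(r / b) ** 2)

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"f({r:.1f} fm) = {f_corr(r):.3f}")
```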
After having solved the problem of finding the minimum of \(E[\Psi_{0}]\), the correlated RPA aims to describe the excitations of the system in this theoretical framework. There is an ambiguity in defining the expression of the excited state. If we consider the \(\left|\Psi_{0}\right\rangle\) of Equation (262) as ground state, the approach of the EOM (see Section 3) implies the calculation of matrix elements of the form
\[\langle\Phi_{0}|F^{+}\hat{a}_{i}^{+}\hat{a}_{m}\hat{H}\hat{a}_{n}^{+}\hat{a} _{j}F|\Phi_{0}\rangle\,, \tag{265}\]
whose evaluation requires the knowledge of the effects of creation and destruction operators on \(F\left|\Phi_{0}\right\rangle\). We attack the problem by considering the correlation function acting on an excited IPM state. This implies that the ansatz for the excited states is analogous to that of Equation (262)
\[\left|\Psi\right\rangle=\frac{F\left|\Phi\right\rangle}{\langle\Phi|F^{+}F| \Phi\rangle^{\frac{1}{2}}}, \tag{266}\]
where \(\left|\Phi\right\rangle\) is the Thouless variational ground state (79)
\[\left|\Phi\right\rangle=e^{\sum_{mi}C_{mi}\hat{a}_{m}^{+}\hat{a}_{i}}\left|\Phi_{0}\right\rangle, \tag{267}\]
and the \(C_{mi}\) coefficients are defined by using a variational procedure which minimizes the energy functional
\[\delta E[\Psi]=\delta\left\langle\Psi|\hat{H}|\Psi\right\rangle=0. \tag{268}\]
In the expression (267) of \(|\Phi\rangle\), we can consider \(E\) as a function of the \(C_{mi}\) coefficients and we make a power expansion around the ground state energy value
\[H_{00}=\frac{\langle\Phi_{0}|F^{+}\hat{H}F|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+} F|\Phi_{0}\rangle}, \tag{269}\]
which is obtained by considering all the \(C_{mi}\) coefficients equal to zero in Equation (267). The power expansion can be written as
\[E[C_{mi},C^{*}_{mi}]= H_{00}+\sum_{mi}\left(\left[\frac{\delta E}{\delta C_{mi}} \right]_{0}C_{mi}+\left[\frac{\delta E}{\delta C^{*}_{mi}}\right]_{0}C^{*}_{ mi}\right) \tag{270}\] \[+ \frac{1}{2}\sum_{minj}\left[\frac{\delta^{2}E}{\delta C_{mi}\delta C _{nj}}\right]_{0}C_{mi}C_{nj}+\frac{1}{2}\sum_{minj}\left[\frac{\delta^{2}E}{ \delta C^{*}_{mi}\delta C^{*}_{nj}}\right]_{0}C^{*}_{mi}C^{*}_{nj}\] \[+ \sum_{minj}\left[\frac{\delta^{2}E}{\delta C^{*}_{mi}\delta C_{nj }}\right]_{0}C^{*}_{mi}C_{nj}+\cdots,\]
where the subindex \(0\) indicates that, after the evaluation of the variational derivative, all the \(C\)'s must be set equal to zero.
The second term of the above equation is \(\delta E\), the first variation of the energy functional. We obtain the minimum when this variation is zero, and this implies that each variational term must be zero. Let us consider the term with the variation with respect to \(C^{*}_{mi}\)
\[\frac{\delta E}{\delta C^{*}_{mi}}=\frac{\partial}{\partial C^{*}_{mi}}\left[ \frac{\langle\Phi|F^{+}\hat{H}F|\Phi\rangle}{\langle\Phi|F^{+}F|\Phi\rangle} \right], \tag{271}\]
where we have considered that, in this case, the functional derivative coincides with the partial derivative. By using the expression (267), we obtain
\[\frac{\delta E}{\delta C^{*}_{mi}}=\frac{\langle\Phi|\hat{a}^{+}_{i}\hat{a}_{ m}F^{+}\hat{H}F|\Phi\rangle}{\langle\Phi|F^{+}F|\Phi\rangle}-\langle\Phi|\hat{a}^{+}_{i} \hat{a}_{m}F^{+}F|\Phi\rangle\frac{\langle\Phi|F^{+}\hat{H}F|\Phi\rangle}{ \langle\Phi|F^{+}F|\Phi\rangle^{2}}. \tag{272}\]
After calculating the variation, we have to impose that all the \(C\)'s go to zero; this is equivalent to saying that in Equation (267) \(|\Phi\rangle=|\Phi_{0}\rangle\), and we obtain a relation
\[\frac{\langle\Phi_{0}|\hat{a}^{+}_{i}\hat{a}_{m}F^{+}\hat{H}F|\Phi_{0}\rangle} {\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}=H_{00}\frac{\langle\Phi_{0}|\hat{a}^ {+}_{i}\hat{a}_{m}F^{+}F|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0} \rangle}. \tag{273}\]
An analogous calculation carried out for the variation with respect to \(C_{mi}\) generates an expression which is the complex conjugate of (273).
The fact that the value of \(E\) is a minimum, when the first variational derivatives are zero, is ensured if the sum of all the second order variational derivatives is positive. It is convenient to tackle this problem in matrix form by defining the matrix elements
\[A_{minj}\equiv\left[\frac{\delta^{2}E}{\delta C^{*}_{mi}\delta C_{nj}}\right]_ {0}\quad\quad\mbox{and}\quad\quad B_{minj}\equiv\left[\frac{\delta^{2}E}{ \delta C^{*}_{mi}\delta C^{*}_{nj}}\right]_{0}. \tag{274}\]
By carrying out calculations analogous to those carried out for the first variational derivatives, i.e., by considering Equation (267) and making the limit for \(C\to 0\), we obtain the expressions
\[A_{minj}=\frac{\langle\Phi_{0}|\hat{a}^{+}_{i}\hat{a}_{m}F^{+}\hat{H}F\hat{a}^{+}_{n}\hat{a}_{j}|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}-H_{00}\frac{\langle\Phi_{0}|\hat{a}^{+}_{i}\hat{a}_{m}F^{+}F\hat{a}^{+}_{n}\hat{a}_{j}|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}, \tag{275}\]
and
\[B_{minj}=\frac{\langle\Phi_{0}|\hat{a}^{+}_{i}\hat{a}_{m}\hat{a}^{+}_{j}\hat{a}_{n}F^{+}\hat{H}F|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}-H_{00}\frac{\langle\Phi_{0}|\hat{a}^{+}_{i}\hat{a}_{m}\hat{a}^{+}_{j}\hat{a}_{n}F^{+}F|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle}. \tag{276}\]
We consider the set of the \(C\)'s as a vector; therefore, we write the condition that the sum of the second variational derivative is positive in matrix form as
\[\frac{1}{2}({C^{*}}^{T}C^{T})\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}C\\ C^{*}\end{array}\right)>0. \tag{277}\]
This is equivalent to asking that in the eigenvalue problem
\[\left(\begin{array}{cc}A&B\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}C\\ C^{*}\end{array}\right)=\lambda\left(\begin{array}{c}C\\ C^{*}\end{array}\right), \tag{278}\]
the eigenvalues \(\lambda\) are all positive. By inserting Equation (278) into Equation (277), we obtain
\[\frac{\lambda}{2}\left({C^{*}}^{T}C+C^{T}C^{*}\right)>0, \tag{279}\]
which is satisfied for \(\lambda>0\), since the quantity inside the round brackets is certainly positive, being a sum of squared moduli of complex numbers.
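In a numerical implementation, the condition (277) amounts to checking that the matrix of Equation (278) is positive definite. A minimal sketch, with random symmetric toy blocks in place of the actual second variational derivatives:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Random symmetric toy blocks in place of the second variational derivatives
# of Eq. (274); the diagonal shift makes this example stable on purpose.
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T) + 8.0 * np.eye(n)
B = rng.normal(size=(n, n)); B = 0.5 * (B + B.T)

S = np.block([[A, B], [B, A]])   # stability matrix of Eq. (278), real case
lam = np.linalg.eigvalsh(S)      # S is symmetric, eigenvalues are real
print("eigenvalues lambda:", np.round(lam, 3))
print("energy is a minimum:", bool(np.all(lam > 0)))
```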
The condition (273) and its complex conjugate, together with (277), allow us to build the equations of the Correlated RPA.
We consider Equation (137)
\[\langle\delta\Psi(t)|\hat{H}-i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle =0, \tag{280}\]
where now the state \(|\Psi(t)\rangle\) is
\[|\Psi(t)\rangle=\frac{F\,|\Phi(t)\rangle}{\langle\Phi(t)|F^{+}F|\Phi(t) \rangle^{\frac{1}{2}}}, \tag{281}\]
In the above equation, the \(|\Phi(t)\rangle\) states are defined analogously to Equation (267) but now the \(C\) coefficients are time-dependent
\[|\Phi(t)\rangle=e^{\sum_{mi}C_{mi}(t)\hat{a}_{m}^{+}\hat{a}_{i}}\,|\Phi_{0}(t)\rangle\,, \tag{282}\]
and the time dependence of the ground state is defined as
\[|\Phi_{0}(t)\rangle=e^{\frac{i}{\hbar}H_{00}t}\,|\Phi_{0}\rangle\,, \tag{283}\]
analogously to the usual interaction picture (see Equation (89)).
Since only the \(C\) amplitudes can be varied, we can express Equation (280) as
\[\langle\delta\Psi(t)|\hat{H}-i\hbar\frac{\partial}{\partial t}| \Psi(t)\rangle \tag{284}\] \[= \sum_{mi}\frac{\delta\,\langle\Psi(t)|}{\delta C_{mi}}\left(\hat {H}-i\hbar\frac{\partial}{\partial t}\right)|\Psi(t)\rangle\,\delta C_{mi}\] \[+ \sum_{mi}\frac{\delta\,\langle\Psi(t)|}{\delta C_{mi}^{*}}\left( \hat{H}-i\hbar\frac{\partial}{\partial t}\right)|\Psi(t)\rangle\,\delta C_{ mi}^{*}\] \[\equiv \sum_{mi}\left(S_{mi}\delta C_{mi}+R_{mi}\delta C_{mi}^{*}\right) =0.\]
The above equation is verified only if both the matrix elements \(R_{mi}\) and \(S_{mi}\) are zero for all the \(m\) and \(i\) particle-hole pairs and for all times \(t\).
The evaluation of \(R_{mi}\) proceeds by considering the expressions (281) for \(|\Psi(t)\rangle\), (282) for \(|\Phi(t)\rangle\) and (283) for \(|\Phi_{0}(t)\rangle\). We show in Appendix G the details of the calculation leading to the expression
\[0=R_{mi}=\sum_{nj}A_{minj}C_{nj}(t)+\sum_{nj}B_{minj}C_{nj}^{*}(t)-i\hbar\sum_ {nj}\frac{d}{dt}C_{nj}M_{minj}, \tag{285}\]
where the \(A\) and \(B\) matrix elements are those of Equations (275) and (276), respectively, and we defined
\[M_{minj} = \frac{\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\hat{a}_{n}^{ +}\hat{a}_{j}|\Phi_{0}\rangle}{\langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle} \tag{286}\] \[- \frac{\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F|\Phi_{0} \rangle\,\langle\Phi_{0}|F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle}{ \langle\Phi_{0}|F^{+}F|\Phi_{0}\rangle^{2}}.\]
Analogously to what is presented in Section 5, we consider harmonic oscillations of the \(C\) amplitudes
\[C_{mi}(t)=X_{mi}e^{-i\omega t}+Y_{mi}^{*}e^{i\omega t}. \tag{287}\]
We insert Equation (287) into Equation (285); we separate the positive and negative frequency oscillations and obtain
\[\sum_{nj}A_{minj}X_{nj}+\sum_{nj}B_{minj}Y_{nj}=\hbar\omega\sum_{nj}X_{nj}M_{minj}, \tag{288}\]
and
\[\sum_{nj}A_{minj}Y_{nj}^{*}+\sum_{nj}B_{minj}X_{nj}^{*}=-\hbar\omega\sum_{nj}Y_{nj}^{*}M_{minj}. \tag{289}\]
By considering the complex conjugate of the second equation, we can cast the system in a matrix form
\[\left(\begin{array}{cc}A&B\\ &&\\ B^{*}&A^{*}\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ \\ Y^{\nu}\end{array}\right)=\hbar\omega_{\nu}\left(\begin{array}{cc}M&0\\ &&\\ 0&-M\end{array}\right)\left(\begin{array}{c}X^{\nu}\\ Y^{\nu}\end{array}\right). \tag{290}\]
The structure of the standard RPA equations can be recovered by performing a transformation on (290) such that the matrix on the right is converted into a unit-diagonal form
\[\left(\begin{array}{cc}M&0\\ &\\ 0&-M\end{array}\right)\longrightarrow\left(\begin{array}{cc}\mathbb{I}&0\\ &\\ 0&-\mathbb{I}\end{array}\right). \tag{291}\]
The properties of these equations have been studied [12] and they are similar to those quoted in Section 3.2.3.
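The transformation (291) can be carried out, for instance, with a Cholesky factorization of \(M\), assuming \(M\) is positive definite. A minimal sketch with toy matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# Toy symmetric A, B blocks and a positive-definite overlap matrix M.
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T) + 5.0 * np.eye(n)
B = rng.normal(size=(n, n)); B = 0.05 * (B + B.T)
M = rng.normal(size=(n, n)); M = M @ M.T + n * np.eye(n)

L = np.linalg.cholesky(M)        # M = L L^T
Li = np.linalg.inv(L)            # explicit inverse is fine for a small toy
Ap, Bp = Li @ A @ Li.T, Li @ B @ Li.T   # congruence A -> L^{-1} A L^{-T}

# Standard-form RPA problem with the unit metric of Eq. (291) (hbar = 1):
w = np.linalg.eigvals(np.block([[Ap, Bp], [-Bp, -Ap]]))
print("positive-branch eigenvalues:", np.sort(w.real)[n:])
```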
Obviously, we want to interpret the eigenvalues \(\hbar\omega\) of Equation (290) as excitation energies of the system. The question is whether the amplitudes \(X\) and \(Y\) can be used as in Section 3.2.4 to evaluate the transition probabilities. This is not straightforward since, in the present approach, we have worked with a hamiltonian of the type \(\hat{F}^{+}\hat{H}\hat{F}\). Consequently, the one-body operators describing the external probe should also be transformed as \(\hat{F}^{+}\hat{O}\hat{F}\).
## 11 Summary and Conclusions
In this article, I presented three different methods to obtain RPA secular equations.
The EOM approach emphasizes the fact that RPA considers only excitations of \(1p-1h\) type and also that the RPA ground state is not the IPM ground state, but contains correlations. These correlations are described in terms of \(ph\) pairs; therefore, RPA excited states contain also \(hp\) excitations, which are taken into account by the \(Y\) amplitudes.
RPA secular equations are obtained by truncating at the first order the expansion of the two-body Green function in powers of the interaction. As a consequence of this truncation, RPA requires the use of effective interactions, i.e., interactions without the strongly repulsive core at short inter-particle distances, a feature which, instead, characterizes the microscopic interactions.
The derivation of RPA obtained with the TDHF approach clearly indicates that RPA has to be used to describe harmonic oscillations around the many-body ground state, i.e. excitations whose energies are relatively small with respect to the global binding energy of the system.
RPA calculations require as input a set of s.p. wave functions and energies and also the effective particle-hole interaction. The solution of the RPA secular equations provides not only the excitation spectrum, but, for each excited state, also the description of the corresponding wave function in terms of \(1p-1h\) and \(1h-1p\) excitation pairs.
The knowledge of RPA wave functions allows a rather straightforward evaluation of observable quantities because many-body transition amplitudes are expressed as linear combinations of s.p. transitions.
RPA is able to describe in a unique theoretical framework both single-particle and collective excitations. This is particularly useful in atomic nuclei where these two types of excitations are both present in the same energy range.
RPA is able to predict emergent phenomena which are unexpected in the IPM. In the present article, I have considered, as an illustrative example, the case of the \(3^{-}\) state of the \({}^{208}\)Pb nucleus. RPA has been widely used to investigate the giant resonances in nuclei [5]. The position of the peaks of the resonances and the total strengths are well described. These latter quantities are related to RPA sum rules whose values are rather different from those obtained in the IPM, as was pointed out in Section 3.2.5. The accuracy of most modern data indicates that, even though RPA provides reasonable values of the total excitation strengths, it fails in describing their energy distributions. This is the main reason leading to an extension of the theory.
The main limitation of RPA is the fact that it considers \(1p-1h\) excited pairs only. The straightforward extension consists in considering, in addition, also \(2p-2h\) excitations. The formulation of the SRPA has been presented in Section 9.1. Applications of the SRPA are numerically very involved, but the obtained results are rather promising. Another method of including \(2p-2h\) excitations consists in coupling s.p. wave functions to RPA vibrational modes.
It is possible to untie RPA theory from the use of effective interactions. The formulation of a theory which uses microscopic interactions has been presented in Section 10. To the best of my knowledge, this formulation of RPA has never been used in actual calculations. Its validity in the description of observables remains an open question.
RPA is a milestone of many-body theories, even though nowadays its role and relevance are sometimes overlooked, because of its relative simplicity, in favour of theories that make use of microscopic interactions.
## Appendix A The Hartree-Fock Hamiltonian
In this Appendix we obtain a useful expression of the hamiltonian for its use in HF and RPA calculations. We consider the expression of the hamiltonian in ONR [13, 22]
\[\hat{H}=\sum_{\nu\nu^{\prime}}T_{\nu,\nu^{\prime}}\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}+\frac{1}{4}\sum_{\nu\nu^{\prime}\mu\mu^{\prime}}\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\hat{a}^{+}_{\nu}\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}\hat{a}_{\nu^{\prime}}, \tag{292}\]
where we have defined
\[T_{\nu,\nu^{\prime}}=\langle\nu|\hat{T}|\nu^{\prime}\rangle, \tag{293}\]
and \(\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\) is the antisymmetric matrix element of Equation (13). We indicate with \(\hat{a}^{+}_{\nu}\) and \(\hat{a}_{\nu}\) the usual fermion creation and destruction operators satisfying the anti-commutation relations
\[\left\{\hat{a}_{\nu},\hat{a}^{+}_{\nu^{\prime}}\right\}=\delta_{\nu\nu^{ \prime}}\qquad\left\{\hat{a}_{\nu},\hat{a}_{\nu^{\prime}}\right\}=0\qquad \left\{\hat{a}^{+}_{\nu},\hat{a}^{+}_{\nu^{\prime}}\right\}=0, \tag{294}\]
From the definition of contraction (see [22]) we have that
\[\overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}}=\delta_{\nu\nu^{\prime}}\delta_{\nu^{\prime}i}\ \ ;\ \ \overbrace{\hat{a}_{\nu}\hat{a}^{+}_{\nu^{\prime}}}=0\ \ ;\ \ \overbrace{\hat{a}_{\nu}\hat{a}_{\nu^{\prime}}}=0\ \ ;\ \ \overbrace{\hat{a}^{+}_{\nu}\hat{a}^{+}_{\nu^{\prime}}}=0\,, \tag{295}\]
where \(i\) indicates a state below the Fermi surface. By considering the definition of normal ordered product \(\hat{\mathbb{N}}\) we obtain
\[\hat{a}^{+}_{\nu}\,\hat{a}_{\nu^{\prime}}=\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\,\hat{a}_{\nu^{\prime}}]+\overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}}\,, \tag{296}\]
and, for the Wick's theorem,
\[\hat{a}^{+}_{\nu}\,\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}} = \hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}] \tag{297}\] \[+ \overbrace{\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}}\,\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\,\hat{a}_{\nu^{\prime}}]+\overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}}\,\hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}]\] \[- \overbrace{\hat{a}^{+}_{\mu}\hat{a}_{\nu^{\prime}}}\,\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\,\hat{a}_{\mu^{\prime}}]-\overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\mu^{\prime}}}\,\hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\,\hat{a}_{\nu^{\prime}}]\] \[+ \overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}}\,\overbrace{\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}}-\overbrace{\hat{a}^{+}_{\nu}\hat{a}_{\mu^{\prime}}}\,\overbrace{\hat{a}^{+}_{\mu}\hat{a}_{\nu^{\prime}}}\ .\]
We insert the above expression in Equation (292)
\[\hat{H} = \sum_{\nu\nu^{\prime}}T_{\nu\nu^{\prime}}\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\bigg{\{}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}] \tag{298}\] \[+ \hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}]\delta_{\nu\nu^{\prime}}\delta_{\nu i}+\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}]\delta_{\mu\mu^{\prime}}\delta_{\mu i}-\hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\hat{a}_{\nu^{\prime}}]\delta_{\nu\mu^{\prime}}\delta_{\nu i}-\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}_{\mu^{\prime}}]\delta_{\mu\nu^{\prime}}\delta_{\mu i}\] \[+ \delta_{\nu\nu^{\prime}}\delta_{\nu i}\delta_{\mu\mu^{\prime}}\delta_{\mu j}-\delta_{\nu\mu^{\prime}}\delta_{\nu i}\delta_{\mu\nu^{\prime}}\delta_{\mu j}\bigg{\}},\]
where we have already considered the fact that a contraction is different from zero only if the single-particle state is of hole type, i.e., if its energy is below the Fermi surface.
By considering the restrictions imposed by the Kronecker's \(\delta\), we obtain
\[\hat{H} = \sum_{\nu\nu^{\prime}}T_{\nu\nu^{\prime}}\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}] \tag{299}\] \[+ \frac{1}{4}\sum_{\mu\mu^{\prime}i}\overline{V}_{\mu i\mu^{\prime}i}\hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\hat{a}_{\mu^{\prime}}]+\frac{1}{4}\sum_{\nu\nu^{\prime}i}\overline{V}_{i\nu i\nu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}]\] \[- \frac{1}{4}\sum_{\mu\nu^{\prime}i}\overline{V}_{i\mu\nu^{\prime}i}\hat{\mathbb{N}}[\hat{a}^{+}_{\mu}\hat{a}_{\nu^{\prime}}]-\frac{1}{4}\sum_{\nu\mu^{\prime}i}\overline{V}_{\nu ii\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}_{\mu^{\prime}}]\] \[+ \frac{1}{4}\sum_{ij}\overline{V}_{ijij}-\frac{1}{4}\sum_{ij}\overline{V}_{ijji}\ \ .\]
The definition (13) of the antisymmetric matrix element implies the following relations:
\[\overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}=-\overline{V}_{\mu\nu\nu^{ \prime}\mu^{\prime}}=\overline{V}_{\mu\nu\mu^{\prime}\nu^{\prime}}=-\overline {V}_{\nu\mu\mu^{\prime}\nu^{\prime}}\;\;, \tag{300}\]
therefore
\[\hat{H} = \sum_{\nu\nu^{\prime}}T_{\nu\nu^{\prime}}\hat{a}^{+}_{\nu}\hat{ a}_{\nu^{\prime}}+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{ \nu\mu\nu^{\prime}\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_ {\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}] \tag{301}\] \[+ \sum_{\nu\nu^{\prime}i}\overline{V}_{\nu i\nu^{\prime}i}\hat{ \mathbb{N}}[\hat{a}^{+}_{\nu}\,\hat{a}_{\nu^{\prime}}]+\frac{1}{2}\sum_{ij} \overline{V}_{ijij}\;\;.\]
We use the definition (296) of \(\hat{\mathbb{N}}\) and we obtain the following expression for the hamiltonian
\[\hat{H} = \sum_{\nu\nu^{\prime}}\left(T_{\nu\nu^{\prime}}+\sum_{i}\overline {V}_{\nu i\nu^{\prime}i}\right)\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}} \tag{302}\] \[+ \frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}}\overline{V}_{\nu \mu\nu^{\prime}\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu}\hat{a}^{+}_{ \mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}]-\frac{1}{2}\sum_{ij} \overline{V}_{ijij}\;\;.\]
This expression makes evident the presence of a one-body hamiltonian operator, the term multiplying \(\hat{a}^{+}_{\nu}\hat{a}_{\nu^{\prime}}\).
Up to now, we did not make any assumption on the structure of the basis of single-particle wave functions composing the Slater determinant on which the creation and destruction operators are acting. We choose the single-particle basis which diagonalizes the one-body term of Equation (302)
\[T_{\nu\nu^{\prime}}+\sum_{i}\overline{V}_{\nu i\nu^{\prime}i}\equiv h_{\nu\nu^{\prime}}=\epsilon_{\nu}\delta_{\nu,\nu^{\prime}}\ \ . \tag{303}\]
In this basis, the expression of the hamiltonian is
\[\hat{H}=\sum_{\nu}\epsilon_{\nu}\hat{a}^{+}_{\nu}\hat{a}_{\nu}-\frac{1}{2}\sum_ {ij}\overline{V}_{ijij}+\frac{1}{4}\sum_{\mu\mu^{\prime}\nu\nu^{\prime}} \overline{V}_{\nu\mu\nu^{\prime}\mu^{\prime}}\hat{\mathbb{N}}[\hat{a}^{+}_{\nu} \hat{a}^{+}_{\mu}\,\hat{a}_{\mu^{\prime}}\,\hat{a}_{\nu^{\prime}}]\equiv\hat {H}_{0}+\hat{V}_{\rm res}\;\;. \tag{304}\]
where \(\hat{H}_{0}\) is the sum of the first two terms.
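The construction of this Appendix translates directly into the usual HF self-consistency loop: diagonalize the one-body hamiltonian (303), rebuild it with the new occupied states, and iterate. The sketch below uses invented random \(T\) and \(\overline{V}\) tensors, with the symmetries (300) imposed by hand; it is an illustration of the procedure, not a realistic calculation.

```python
import numpy as np

rng = np.random.default_rng(3)
nb, nocc = 6, 2        # toy basis size and number of occupied states

# Invented one-body matrix T and antisymmetrized two-body tensor Vbar;
# the symmetries of Eq. (300) and hermiticity are imposed by construction.
T = rng.normal(size=(nb, nb))
T = 0.5 * (T + T.T) + np.diag(np.arange(nb, dtype=float))
V = 0.1 * rng.normal(size=(nb, nb, nb, nb))
V = V - V.transpose(1, 0, 2, 3)           # antisymmetry in the bra pair
V = V - V.transpose(0, 1, 3, 2)           # antisymmetry in the ket pair
V = 0.5 * (V + V.transpose(2, 3, 0, 1))   # hermiticity

occ = np.eye(nb)[:, :nocc]                # start: first nocc basis states
rho = occ @ occ.T                         # one-body density matrix
for it in range(100):
    # h_{nu nu'} = T_{nu nu'} + sum_i Vbar_{nu i nu' i}, Eq. (303), written
    # with the density matrix: sum_{ab} Vbar_{nu a nu' b} rho_{ab}.
    h = T + np.einsum('manb,ab->mn', V, rho)
    eps, C = np.linalg.eigh(h)
    occ = C[:, :nocc]
    rho_new = occ @ occ.T
    if np.linalg.norm(rho_new - rho) < 1e-10:
        break
    rho = rho_new

print("converged s.p. energies:", np.round(eps, 4))
```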
## Appendix B RPA Double Commutators
In this Appendix, we calculate the double commutator
\[\langle\Phi_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},[\hat{H},\hat{a}_{n}^{+}\hat{a }_{j}]\bigg{]}|\Phi_{0}\rangle\,,\]
of Equation (47) by considering the hamiltonian expressed as in Equation (304). The second term of the hamiltonian (304) is a number; therefore, it commutes with every operator. By considering the anti-commutation rules (294) of the creation and destruction operators, we have that
\[[\hat{a}_{\alpha}^{+}\hat{a}_{\beta},\hat{a}_{n}^{+}\hat{a}_{j}]=\delta_{n\beta }\hat{a}_{\alpha}^{+}\hat{a}_{j}-\delta_{j\alpha}\hat{a}_{n}^{+}\hat{a}_{\beta},\]
therefore, the commutator of the hamiltonian can be written as
\[[\hat{H},\hat{a}_{n}^{+}\hat{a}_{j}] = \sum_{\alpha\beta}h_{\alpha\beta}\left(\delta_{n\beta}\hat{a}_{ \alpha}^{+}\hat{a}_{j}-\delta_{j\alpha}\hat{a}_{n}^{+}\hat{a}_{\beta}\right)\] \[+ \frac{1}{4}\sum_{\alpha\alpha^{\prime}\beta\beta^{\prime}}\overline {V}_{\alpha\beta\alpha^{\prime}\beta^{\prime}}\bigg{[}\hat{\mathbb{N}}[\hat{a }_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^ {\prime}}],\hat{a}_{n}^{+}\hat{a}_{j}\bigg{]}.\]
The double commutator of the first term of the hamiltonian can be rewritten as
\[h_{\alpha\beta}\,\langle\Phi_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_ {m},\left(\delta_{n\beta}\hat{a}_{\alpha}^{+}\hat{a}_{j}-\delta_{j\alpha}\hat{ a}_{n}^{+}\hat{a}_{\beta}\right)\bigg{]}|\Phi_{0}\rangle \tag{305}\] \[= h_{\alpha\beta}\,\langle\Phi_{0}|\left(\hat{a}_{i}^{+}\hat{a}_{m }\delta_{n\beta}\hat{a}_{\alpha}^{+}\hat{a}_{j}-\hat{a}_{i}^{+}\hat{a}_{m} \delta_{j\alpha}\hat{a}_{n}^{+}\hat{a}_{\beta}\right)|\Phi_{0}\rangle\] \[= h_{\alpha\beta}\,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat {a}_{\alpha}^{+}\hat{a}_{j}|\Phi_{0}\rangle\,\delta_{n\beta}-h_{\alpha\beta} \,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{\beta}| \Phi_{0}\rangle\,\delta_{j\alpha}\] \[= h_{\alpha\beta}\delta_{ij}\delta_{m\alpha}\delta_{n\beta}-h_{ \alpha\beta}\delta_{i\beta}\delta_{mn}\delta_{j\alpha}\] \[= (\epsilon_{m}-\epsilon_{i})\delta_{ij}\delta_{mn},\]
where in the last step we considered the diagonal expression of \(h_{\alpha,\beta}\), Equation (303).
For the calculation of the second term of the hamiltonian we have that
\[\bigg{[}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{ \beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}],\hat{a}_{n}^{+}\hat{a}_{j}\bigg{]} =\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{ \prime}}\,\hat{a}_{\alpha^{\prime}}]\hat{a}_{n}^{+}\hat{a}_{j}-\hat{a}_{n}^{+} \hat{a}_{j}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{ \beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}],\]
therefore
\[\langle\Phi_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}]\hat{a}_{n}^{+}\hat{a}_{j}\bigg{]}-\bigg{[}\hat{a}_{n}^{+}\hat{a}_{j}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}],\hat{a}_{i}^{+}\hat{a}_{m}\bigg{]}|\Phi_{0}\rangle\]
\[= \langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}]\hat{a}_{n}^{+}\hat{a}_{j}|\Phi_{0}\rangle \tag{306}\]
\[- \langle\Phi_{0}|\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}]\hat{a}_{n}^{+}\hat{a}_{j}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle \tag{307}\]
\[+ \langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{n}^{+}\hat{a}_{j}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}]|\Phi_{0}\rangle \tag{308}\]
\[- \langle\Phi_{0}|\hat{a}_{n}^{+}\hat{a}_{j}\hat{\mathbb{N}}[\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+}\,\hat{a}_{\beta^{\prime}}\,\hat{a}_{\alpha^{\prime}}]\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\,. \tag{309}\]
The terms (307) and (309) are zero since \(\hat{a}_{m}\left|\Phi_{0}\right\rangle=0\). The situation for the term (308) is more involved. In the application of Wick's theorem, one can see that in all the possible sets of contractions there are always contractions of \(\hat{a}_{n}^{+}\) with \(\hat{a}_{\alpha^{\prime}}\) or \(\hat{a}_{\beta^{\prime}}\), and these contractions are zero. Only the term (306) is different from zero; by applying Wick's theorem and considering all the possible contractions, we obtain
\[\langle\Phi_{0}|a_{i}^{+}a_{m}\hat{\mathbb{N}}[a_{\alpha}^{+}a_{ \beta}^{+}\,a_{\beta^{\prime}}\,a_{\alpha^{\prime}}]a_{n}^{+}a_{j}|\Phi_{0}\rangle = \delta_{i\alpha^{\prime}}\delta_{m\alpha}\delta_{\beta^{\prime}n} \delta_{\beta j}-\delta_{i\alpha^{\prime}}\delta_{m\beta}\delta_{\beta^{\prime}n} \delta_{\alpha j} \tag{310}\] \[- \delta_{i\beta^{\prime}}\delta_{m\alpha}\delta_{\alpha^{\prime}n} \delta_{\beta j}+\delta_{i\beta^{\prime}}\delta_{m\beta}\delta_{\alpha^{\prime}n} \delta_{\alpha j}.\]
This expression is used to obtain the TDA equation (48) whose terms are equivalent to the \(A_{minj}\) matrix elements (58) of RPA equation.
For the term \(B_{minj}\) of Equation (58) we use again the expression (304) of the hamiltonian. In this case also the contribution of the one-body term is zero. By considering the anti-commutation properties of the creation and destruction operators we obtain
\[[\hat{a}_{\alpha}^{+}\hat{a}_{\beta},\hat{a}_{j}^{+}\hat{a}_{n}]=\delta_{\beta j}\hat{a}_{\alpha}^{+}\hat{a}_{n}-\delta_{\alpha n}\hat{a}_{j}^{+}\hat{a}_{\beta},\]
therefore
\[\langle\Phi_{0}|\bigg{[}\hat{a}_{i}^{+}\hat{a}_{m},\Big{[}\hat{a}_{\alpha}^{+}\hat{a}_{\beta},\hat{a}_{j}^{+}\hat{a}_{n}\Big{]}\bigg{]}|\Phi_{0}\rangle\] \[= \delta_{\beta j}\,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{\alpha}^{+}\hat{a}_{n}|\Phi_{0}\rangle\to 0\] \[- \delta_{\beta j}\,\langle\Phi_{0}|\hat{a}_{\alpha}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\to 0\] \[- \delta_{\alpha n}\,\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{j}^{+}\hat{a}_{\beta}|\Phi_{0}\rangle\to 0\] \[+ \delta_{\alpha n}\,\langle\Phi_{0}|\hat{a}_{j}^{+}\hat{a}_{\beta}\hat{a}_{i}^{+}\hat{a}_{m}|\Phi_{0}\rangle\to 0,\]
since each term contains either \(\hat{a}_{m}\left|\Phi_{0}\right\rangle=0\), \(\hat{a}_{n}\left|\Phi_{0}\right\rangle=0\), or the contraction of a particle index with a hole index.
For the two-body term we have to evaluate
\[\langle\Phi_{0}|\bigg{[}a_{i}^{+}a_{m},\Big{[}\hat{\mathbb{N}}[a_{\alpha}^{+} a_{\beta}^{+}a_{\beta^{\prime}}a_{\alpha^{\prime}},a_{j}^{+}a_{n}\Big{]}\bigg{]}| \Phi_{0}\rangle\,.\]
Three terms of the double commutator are zero since they contain \(\hat{a}_{m}\left|\Phi_{0}\right\rangle=0\). Only the term
\[-\langle\Phi_{0}|\hat{a}_{i}^{+}\hat{a}_{m}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{ \alpha}^{+}\hat{a}_{\beta}^{+}\hat{a}_{\beta^{\prime}}\hat{a}_{\alpha^{\prime} }|\Phi_{0}\rangle\,,\]
is different from zero, therefore
\[B_{minj}=\frac{1}{4}\sum_{\alpha\beta\alpha^{\prime}\beta^{\prime}}\overline{ V}_{\alpha\beta\alpha^{\prime}\beta^{\prime}}\left\langle\Phi_{0}|\hat{a}_{i}^ {+}\hat{a}_{m}\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{\alpha}^{+}\hat{a}_{\beta}^{+ }\hat{a}_{\beta^{\prime}}\hat{a}_{\alpha^{\prime}}|\Phi_{0}\rangle\right.\,. \tag{311}\]
By considering the symmetry properties of \(\overline{V}\) and all the possible contractions we obtain Equation (69).
## Appendix C Sum Rules
We derive here the expression of the sum rule (75).
\[\langle\Psi_{0}|\bigg{[}\hat{F},[\hat{H},\hat{F}]\bigg{]}|\Psi_{0}\rangle = \langle\Psi_{0}|\left(\hat{F}\hat{H}\hat{F}-\hat{F}\hat{F}\hat{H}-\hat{H}\hat{F}\hat{F}+\hat{F}\hat{H}\hat{F}\right)|\Psi_{0}\rangle\] \[= 2\,\langle\Psi_{0}|\hat{F}\hat{H}\hat{F}|\Psi_{0}\rangle-\langle\Psi_{0}|\hat{F}\hat{F}|\Psi_{0}\rangle\,E_{0}-E_{0}\,\langle\Psi_{0}|\hat{F}\hat{F}|\Psi_{0}\rangle\] \[= 2\,\langle\Psi_{0}|\hat{F}(\hat{H}-E_{0})\hat{F}|\Psi_{0}\rangle\,.\]
We insert the completeness \(\sum_{\nu}|\Psi_{\nu}\rangle\,\langle\Psi_{\nu}|=\mathbb{I}\)
\[2\,\langle\Psi_{0}|\hat{F}\sum_{\nu}|\Psi_{\nu}\rangle\,\langle \Psi_{\nu}|(\hat{H}-E_{0})\hat{F}|\Psi_{0}\rangle\] \[= 2\,\langle\Psi_{0}|\hat{F}\sum_{\nu}|\Psi_{\nu}\rangle\,\langle \Psi_{\nu}|(E_{\nu}-E_{0})\hat{F}|\Psi_{0}\rangle=2\sum_{\nu}(E_{\nu}-E_{0})\, \langle\Psi_{0}|\hat{F}|\Psi_{\nu}\rangle\,\langle\Psi_{\nu}|\hat{F}|\Psi_{0} \rangle\,.\]
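The identity can be checked numerically on an exactly diagonalizable toy problem, with random real symmetric matrices standing in for \(\hat{H}\) and \(\hat{F}\):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
# Random real symmetric matrices standing in for the hamiltonian H and the
# one-body operator F of Eq. (75); any Hermitian pair would do.
H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)
F = rng.normal(size=(n, n)); F = 0.5 * (F + F.T)

E, U = np.linalg.eigh(H)
psi0, E0 = U[:, 0], E[0]

# Left-hand side: <0| [F,[H,F]] |0>.
HF = H @ F - F @ H
lhs = psi0 @ (F @ HF - HF @ F) @ psi0

# Right-hand side: 2 sum_nu (E_nu - E0) |<nu|F|0>|^2 (the nu=0 term vanishes).
amps = U.T @ F @ psi0
rhs = 2.0 * np.sum((E - E0) * amps**2)

print(lhs, rhs)
assert np.isclose(lhs, rhs)
```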
## Appendix D Linear Response
Let us consider the situation where the many-body system is subject to an external perturbation. We express the total hamiltonian describing the perturbed system as the sum of the hamiltonian \(\hat{H}\) describing the system in the absence of the perturbation, whose eigenstates are \(|\Psi\rangle\), plus a time-dependent term \(\hat{H}^{\rm ext}(t)\):
\[\hat{H}^{\rm tot}=\hat{H}+\hat{H}^{\rm ext}(t)=\hat{H}+\hat{F}A(t)\,\,\,, \tag{312}\]
where \(\hat{F}\) is the operator describing the action of the external perturbation on the system. The function \(A(t)\) describes the time evolution of the perturbation and is defined such that \(A(t)=0\) for \(t<t_{0}=0\). This means that the perturbation is switched on at a specific time \(t_{0}\), which we take as the origin of the time axis.
We assume that, under the action of the external perturbation, the reaction times of the many-body system are much faster than those needed for the perturbation to switch on and off. Then, when the perturbation is fully switched on, the hamiltonian is \(\hat{H}^{\rm tot}=\hat{H}+\hat{F}\). In this case we can treat \(\hat{F}\) as a perturbative term of the total time-dependent hamiltonian. For this reason, we can consider the equation of motion (91) in the interaction picture
\[i\hbar\frac{\partial}{\partial t}|\Psi_{\rm I}(t)\rangle=\hat{F}_{\rm I}(t)| \Psi_{\rm I}(t)\rangle\;\;, \tag{313}\]
where
\[\hat{F}_{\rm I}(t)=e^{\frac{i}{\hbar}\hat{H}t}\hat{F}e^{-\frac{i}{\hbar}\hat{H}t}\;\;\;{\rm and}\;\;\;|\Psi_{\rm I}(t)\rangle=e^{\frac{i}{\hbar}\hat{H}t}|\Psi(t)\rangle\;\;. \tag{314}\]
In this section, we use the convention that states and operators without sub-indexes are expressed in the Schrödinger picture. We formally integrate Equation (313)
\[i\hbar\int_{-\infty}^{t}dt^{\prime}\;\frac{\partial}{\partial t^{\prime}}| \Psi_{\rm I}(t^{\prime})\rangle=\int_{-\infty}^{t}dt^{\prime}\;\hat{F}_{\rm I }(t^{\prime})|\Psi_{\rm I}(t^{\prime})\rangle, \tag{315}\]
and obtain the expression
\[|\Psi_{\rm I}(t)\rangle=|\Psi_{\rm I}(-\infty)\rangle-\frac{i}{\hbar}\int_{- \infty}^{t}dt^{\prime}\;\hat{F}_{\rm I}(t^{\prime})|\Psi_{\rm I}(t^{\prime}) \rangle\;\;. \tag{316}\]
Since the perturbation is switched off when \(t=-\infty\), we have \(|\Psi_{\rm I}(-\infty)\rangle=|\Psi_{0}\rangle\), the ground state of the system. We can express the above equation as a perturbative expansion by iterating \(|\Psi_{\rm I}(t)\rangle\)
\[|\Psi_{\rm I}(t)\rangle=|\Psi_{0}\rangle-\frac{i}{\hbar}\int_{-\infty}^{t}dt^ {\prime}\;\hat{F}_{\rm I}(t^{\prime})|\Psi_{0}\rangle\;\;+\;\;\cdots \tag{317}\]
We call \(\hat{D}\) the operator which describes how the system reacts to the external perturbation induced by the operator \(\hat{F}\). The expectation value of this operator is given by
\[\begin{aligned}\langle\Psi_{\rm I}(t)|\hat{D}_{\rm I}(t)|\Psi_{\rm I}(t)\rangle &= \left\{\langle\Psi_{0}|+\frac{i}{\hbar}\int_{-\infty}^{t}dt^{\prime}\;\langle\Psi_{0}|\hat{F}_{\rm I}(t^{\prime})\;+\;\cdots\right\}\hat{D}_{\rm I}(t)\left\{|\Psi_{0}\rangle-\frac{i}{\hbar}\int_{-\infty}^{t}dt^{\prime}\;\hat{F}_{\rm I}(t^{\prime})|\Psi_{0}\rangle\;+\;\cdots\right\}\\ &= \langle\Psi_{0}|\hat{D}_{\rm I}(t)|\Psi_{0}\rangle+\frac{i}{\hbar}\int_{-\infty}^{t}dt^{\prime}\,\langle\Psi_{0}|[\hat{F}_{\rm I}(t^{\prime}),\hat{D}_{\rm I}(t)]|\Psi_{0}\rangle\;\;+\cdots\end{aligned} \tag{318}\]
We define the response function as
\[R(t^{\prime}-t)=\left\{\begin{array}{ll}0&t^{\prime}>t\\ \frac{i}{\hbar}\frac{\langle\Psi_{0}|[\hat{F}_{\rm I}(t^{\prime}),\hat{D}_{\rm I}(t)]|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}&t^{\prime}<t\end{array}\right.\;\;. \tag{319}\]
This definition implies causality: the system cannot respond before the perturbation is switched on.
By making explicit the time dependence of \(\hat{F}_{\rm I}(t^{\prime})\) and \(\hat{D}_{\rm I}(t)\),
\[\hat{F}_{\rm I}(t^{\prime})=e^{\frac{i}{\hbar}\hat{H}t^{\prime}}\hat{F}e^{- \frac{i}{\hbar}\hat{H}t^{\prime}}\;\;;\;\;\hat{D}_{\rm I}(t)=e^{\frac{i}{\hbar }\hat{H}t}\hat{D}e^{-\frac{i}{\hbar}\hat{H}t}\;\;, \tag{320}\]
we can express the response as
\[R(t^{\prime}-t)=\frac{i}{\hbar}\frac{\langle\Psi_{0}|\hat{F}e^{\frac{i}{\hbar}(\hat{H}-E_{0})(t-t^{\prime})}\hat{D}|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}-\frac{i}{\hbar}\frac{\langle\Psi_{0}|\hat{D}e^{-\frac{i}{\hbar}(\hat{H}-E_{0})(t-t^{\prime})}\hat{F}|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}\;\;, \tag{321}\]
and, since it depends only on the time difference \(\tau=t-t^{\prime}\), by using the definition of the Fourier transform, we obtain
\[\begin{aligned}\tilde{R}(E) &= \int_{-\infty}^{\infty}d\tau\,R(\tau)\,e^{\frac{i}{\hbar}E\tau}\\ &= \frac{i}{\hbar}\left(\langle\Psi_{0}|\hat{F}\int_{0}^{\infty}d\tau\,e^{\frac{i}{\hbar}(\hat{H}-E_{0}+E+i\eta)\tau}\hat{D}|\Psi_{0}\rangle-\langle\Psi_{0}|\hat{D}\int_{0}^{\infty}d\tau\,e^{-\frac{i}{\hbar}(\hat{H}-E_{0}-E-i\eta)\tau}\hat{F}|\Psi_{0}\rangle\right)\frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\\ &= -\frac{\langle\Psi_{0}|\hat{F}(\hat{H}-E_{0}+E+i\eta)^{-1}\hat{D}|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}-\frac{\langle\Psi_{0}|\hat{D}(\hat{H}-E_{0}-E-i\eta)^{-1}\hat{F}|\Psi_{0}\rangle}{\langle\Psi_{0}|\Psi_{0}\rangle}\;\;,\end{aligned} \tag{322}\]

where the infinitesimal \(\eta\to 0^{+}\) ensures the convergence of the integrals, and causality restricts the integration to \(\tau>0\).
We insert the completeness \(\sum_{n}|\Psi_{n}\rangle\langle\Psi_{n}|=1\) and obtain
\[\tilde{R}(E)=\sum_{n}\left[\frac{\langle\Psi_{0}|\hat{D}|\Psi_{n}\rangle \langle\Psi_{n}|\hat{F}|\Psi_{0}\rangle}{E-(E_{n}-E_{0})+i\eta}-\frac{\langle \Psi_{0}|\hat{F}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{D}|\Psi_{0}\rangle}{E+(E _{n}-E_{0})+i\eta}\right]\frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\;\;. \tag{323}\]
The poles of \(\tilde{R}(E)\) correspond to the excitation energies of the system. For each positive pole there is a negative pole, equal in absolute value to the positive one.
We consider the Dirac expression
\[\frac{1}{x^{\prime}-x\pm i\eta}=\mathscr{D}\frac{1}{x^{\prime}-x}\mp i\pi \delta(x-x^{\prime})\;\;, \tag{324}\]
where \(\mathscr{D}\) indicates the principal part, therefore
\[\delta(x-x^{\prime})=-\frac{1}{\pi}\Im\left(\frac{1}{x^{\prime}-x+i\eta}\right)\;\;, \tag{325}\]
with the symbol \(\Im\) indicating the imaginary part.
We assume \(\hat{D}=\hat{F}\), as is usually the case, and consider only positive energies. The transition probability from the ground state to an excited state is given by
\[S(E)=-\frac{1}{\pi}\Im\Big{(}\tilde{R}(E)\Big{)}=\sum_{n}|\langle\Psi_{0}|\hat{F}|\Psi_{n}\rangle|^{2}\,\delta\Big{(}E-(E_{n}-E_{0})\Big{)}\;\;. \tag{326}\]
This is the traditional expression obtained by applying time-dependent perturbation theory [16]. Assuming that \(\hat{F}\) is a one-body operator
\[\hat{F}=\sum_{\nu_{1}\nu_{2}}f_{\nu_{1}\nu_{2}}\,\hat{a}_{\nu_{1}}^{+}\hat{a}_{\nu_{2}}\;\;\mbox{and}\;\;f_{\nu_{1}\nu_{2}}=\int d^{3}r\,\phi_{\nu_{1}}^{*}\left({\bf r}\right)f({\bf r})\,\phi_{\nu_{2}}({\bf r})\;\;, \tag{327}\]
we obtain
\[\begin{aligned}\tilde{R}(E)=\sum_{\nu_{1}\nu_{2}}\sum_{\nu_{3}\nu_{4}}\sum_{n}&\left[f_{\nu_{1}\nu_{2}}f_{\nu_{3}\nu_{4}}^{*}\frac{\langle\Psi_{0}|\hat{a}_{\nu_{1}}^{+}\hat{a}_{\nu_{2}}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{a}_{\nu_{3}}^{+}\hat{a}_{\nu_{4}}|\Psi_{0}\rangle}{E-(E_{n}-E_{0})+i\eta}\right.\\ &\left.-\,f_{\nu_{3}\nu_{4}}f_{\nu_{1}\nu_{2}}^{*}\frac{\langle\Psi_{0}|\hat{a}_{\nu_{3}}^{+}\hat{a}_{\nu_{4}}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{a}_{\nu_{1}}^{+}\hat{a}_{\nu_{2}}|\Psi_{0}\rangle}{E+(E_{n}-E_{0})+i\eta}\right]\frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\;\;.\end{aligned} \tag{328}\]
Since \(\hat{F}\) is hermitian, \(f_{\nu_{1}\nu_{2}}=f_{\nu_{2}\nu_{1}}^{*}\), and the indices \(\nu\) are dummy, we can write
\[\begin{aligned}\tilde{R}(E) &= \sum_{\nu_{1}\nu_{2}}\sum_{\nu_{3}\nu_{4}}f_{\nu_{1}\nu_{2}}f_{\nu_{3}\nu_{4}}^{*}\\ &\quad\sum_{n}\left[\frac{\langle\Psi_{0}|\hat{a}_{\nu_{1}}^{+}\hat{a}_{\nu_{2}}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{a}_{\nu_{3}}^{+}\hat{a}_{\nu_{4}}|\Psi_{0}\rangle}{E-(E_{n}-E_{0})+i\eta}-\frac{\langle\Psi_{0}|\hat{a}_{\nu_{3}}^{+}\hat{a}_{\nu_{4}}|\Psi_{n}\rangle\langle\Psi_{n}|\hat{a}_{\nu_{1}}^{+}\hat{a}_{\nu_{2}}|\Psi_{0}\rangle}{E+(E_{n}-E_{0})+i\eta}\right]\frac{1}{\langle\Psi_{0}|\Psi_{0}\rangle}\\ &= \sum_{\nu_{1}\nu_{2}}\sum_{\nu_{3}\nu_{4}}f_{\nu_{1}\nu_{2}}f_{\nu_{3}\nu_{4}}^{*}\,(-i\hbar)\,\tilde{G}(\nu_{1},\nu_{3},\nu_{2},\nu_{4},E)\;\;,\end{aligned} \tag{329}\]
where, in the last step, we considered the expression (104) of the two-body Green function. The transition probability is given by
\[S(E)=-\frac{1}{\pi}\Im\Big{(}\tilde{R}(E)\Big{)}=\sum_{\nu_{1}\nu_{2}}\sum_{\nu_{3}\nu_{4}}f_{\nu_{1}\nu_{2}}f_{\nu_{3}\nu_{4}}^{*}\,\frac{1}{\pi}\,\Im\Big{(}i\hbar\,\tilde{G}(\nu_{1},\nu_{3},\nu_{2},\nu_{4},E)\Big{)}\;\;. \tag{330}\]
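As a closing numerical illustration (not from the source text), the sketch below evaluates the spectral representation (323) with \(\hat{D}=\hat{F}\) for a random small Hamiltonian and recovers the strength function (326) as Lorentzian peaks by keeping \(\eta\) finite:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 5
H = rng.normal(size=(dim, dim)); H = (H + H.T) / 2
F = rng.normal(size=(dim, dim)); F = (F + F.T) / 2
energies, states = np.linalg.eigh(H)
psi0, e0 = states[:, 0], energies[0]
eta = 0.02                                    # finite width replacing eta -> 0+

def r_tilde(E):
    """Lehmann representation of the response, Eq. (323), with D = F."""
    total = 0j
    for n in range(dim):
        f0n = psi0 @ F @ states[:, n]         # <Psi_0|F|Psi_n> (real here)
        w = energies[n] - e0
        total += f0n ** 2 * (1.0 / (E - w + 1j * eta)
                             - 1.0 / (E + w + 1j * eta))
    return total

E_grid = np.linspace(0.0, 1.2 * (energies[-1] - e0), 400)
S = -np.imag([r_tilde(E) for E in E_grid]) / np.pi
# S(E) shows Lorentzian peaks of weight |<Psi_0|F|Psi_n>|^2 at E = E_n - E_0.
```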
## Appendix E Green Function Expansion Terms
In this appendix we show the explicit expressions of the diagrams of Figure 2. The x and y labels indicate both space and time coordinates. The integration over all the y coordinates is understood. The symbol \(\hat{\mathcal{U}}\) indicates the two-body interaction.
\[A\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{x}_{3},\mathrm{x}_ {4}), \tag{331}\]
\[B\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1},\mathrm{y}_ {1})\,\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0}(\mathrm{ y}_{2},\mathrm{y}_{2},\mathrm{x}_{3},\mathrm{x}_{4}), \tag{332}\]
\[C\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1},\mathrm{y}_ {2})\,\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0}(\mathrm{ y}_{1},\mathrm{y}_{2},\mathrm{x}_{3},\mathrm{x}_{4}), \tag{333}\]
\[D\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1},\mathrm{y}_ {1})\,\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0}(\mathrm{ y}_{2},\mathrm{y}_{2},\mathrm{y}_{3},\mathrm{y}_{3})\hat{\mathcal{U}}(\mathrm{y}_{3}, \mathrm{y}_{4})\mathrm{G}^{0}(\mathrm{y}_{4},\mathrm{y}_{4},\mathrm{x}_{3}, \mathrm{x}_{4}), \tag{334}\]
\[E\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1},\mathrm{y}_ {2})\,\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0}(\mathrm{ y}_{1},\mathrm{y}_{2},\mathrm{y}_{3},\mathrm{y}_{4})\hat{\mathcal{U}}(\mathrm{y}_{3}, \mathrm{y}_{4})\mathrm{G}^{0}(\mathrm{y}_{3},\mathrm{y}_{4},\mathrm{x}_{3}, \mathrm{x}_{4}), \tag{335}\]
\[F\equiv\mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1}, \mathrm{y}_{3})\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0}( \mathrm{y}_{2},\mathrm{y}_{2},\mathrm{y}_{3},\mathrm{y}_{2})\hat{\mathcal{U}}( \mathrm{y}_{3},\mathrm{y}_{4})\mathrm{G}^{0}(\mathrm{y}_{1},\mathrm{y}_{4}, \mathrm{x}_{3},\mathrm{x}_{4}), \tag{336}\]
\[G \equiv \mathrm{G}^{0}(\mathrm{x}_{1},\mathrm{x}_{2},\mathrm{y}_{1}, \mathrm{y}_{3})\hat{\mathcal{U}}(\mathrm{y}_{1},\mathrm{y}_{2})\mathrm{G}^{0} (\mathrm{y}_{2},\mathrm{y}_{2},\mathrm{y}_{4},\mathrm{y}_{4})\hat{\mathcal{U}} (\mathrm{y}_{3},\mathrm{y}_{4})\mathrm{G}^{0}(\mathrm{y}_{1},\mathrm{y}_{3}, \mathrm{y}_{5},\mathrm{y}_{6}) \tag{337}\] \[\hat{\mathcal{U}}(\mathrm{y}_{5},\mathrm{y}_{6})\mathrm{G}^{0}( \mathrm{y}_{5},\mathrm{y}_{6},\mathrm{x}_{3},\mathrm{x}_{4}).\]
## Appendix F RPA Green Function in Matrix Form
We consider Equation (113) and we calculate first
\[\hbar\tilde{G}^{\mathrm{RPA}}(m,i,j,n,E)\] \[= \sum_{\mu_{1},\mu_{2},\mu_{3},\mu_{4}}\tilde{G}^{0}(m,i,\mu_{1}, \mu_{2},E)\bigg{\{}\delta_{\mu_{1},j}\delta_{\mu_{2},n}+\langle\mu_{1}\mu_{3}| \hat{V}|\mu_{2}\mu_{4}\rangle\,\tilde{G}^{\mathrm{RPA}}(\mu_{3},\mu_{4},j,n,E)\] \[- \langle\mu_{1}\mu_{2}|\hat{V}|\mu_{4}\mu_{3}\rangle\,\tilde{G}^{ \mathrm{RPA}}(\mu_{3},\mu_{4},j,n,E)\bigg{\}}.\]
Because of Equations (105) and (106) we have that
\[\begin{aligned}\hbar\tilde{G}^{\mathrm{RPA}}(m,i,j,n,E)=\frac{1}{\epsilon_{m}-\epsilon_{i}-E}\bigg{\{}\delta_{i,j}\delta_{m,n}&+\sum_{\mu_{3},\mu_{4}}\bigg{[}\langle i\mu_{3}|\hat{V}|m\mu_{4}\rangle\,\tilde{G}^{\mathrm{RPA}}(\mu_{3},\mu_{4},j,n,E)\\ &-\langle im|\hat{V}|\mu_{4}\mu_{3}\rangle\,\tilde{G}^{\mathrm{RPA}}(\mu_{3},\mu_{4},j,n,E)\bigg{]}\bigg{\}}.\end{aligned}\]
Making explicit the sum over \(\mu_{3}\) and \(\mu_{4}\), and considering that, for the conservation of the number of particles, one of the indices must indicate a particle state and the other one a hole state, we can rewrite the above expression as:
\[\begin{aligned}\hbar(\epsilon_{m}-\epsilon_{i}-E)\tilde{G}^{\mathrm{RPA}}(m,i,j,n,E)-\sum_{lq}\bigg{[}&\langle il|\hat{V}|mq\rangle\,\tilde{G}^{\mathrm{RPA}}(l,q,j,n,E)\\ +&\langle iq|\hat{V}|ml\rangle\,\tilde{G}^{\mathrm{RPA}}(q,l,j,n,E)\\ -&\langle im|\hat{V}|lq\rangle\,\tilde{G}^{\mathrm{RPA}}(q,l,j,n,E)\\ -&\langle im|\hat{V}|ql\rangle\,\tilde{G}^{\mathrm{RPA}}(l,q,j,n,E)\bigg{]}=\delta_{i,j}\delta_{m,n}.\end{aligned}\]
By considering the antisymmetrized matrix element (13) we can express the above equation as
\[\begin{aligned}\sum_{lq}\bigg{\{}\Big{[}\hbar(\epsilon_{m}-\epsilon_{i}-E)\,\delta_{i,l}\,\delta_{m,q}&+\overline{V}_{iqml}\Big{]}\,\tilde{G}^{\mathrm{RPA}}(q,l,j,n,E)\\ &+\overline{V}_{ilmq}\,\tilde{G}^{\mathrm{RPA}}(l,q,j,n,E)\bigg{\}}=\delta_{i,j}\delta_{m,n},\end{aligned}\]
which is Equation (114). The evaluation of Equations (115)–(117) is carried out in an analogous manner.
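In practical calculations, equations of this type are usually recast as the standard RPA eigenvalue problem. The sketch below (not from the source) illustrates that step; the particle–hole energies and interaction matrix elements are made-up placeholders, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ph = 4                                       # number of particle-hole pairs (mi)
eps = np.sort(rng.uniform(1.0, 5.0, n_ph))     # placeholder ph energies eps_m - eps_i
V = 0.1 * rng.normal(size=(n_ph, n_ph))
A = np.diag(eps) + (V + V.T) / 2               # A: ph energies plus interaction
B = 0.1 * rng.normal(size=(n_ph, n_ph))
B = (B + B.T) / 2                              # B: coupling between excitations

rpa_matrix = np.block([[A, B], [-B, -A]])      # real case: conjugation is trivial
omega = np.linalg.eigvals(rpa_matrix)
print(np.sort(omega.real))
```

The eigenvalues appear in \((+\omega,-\omega)\) pairs, reflecting the pole structure of \(\tilde{G}^{\mathrm{RPA}}\) discussed above.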
## Appendix G Correlated TDHF
In this section, we obtain the explicit expression of the \(R_{mi}\) and \(S_{mi}\) factors defined in Equation (284) as
\[R_{mi}\equiv\frac{\delta\left\langle\Psi(t)\right|}{\delta C_{mi}^{*}(t)}\left( \hat{H}-i\hbar\frac{\partial}{\partial t}\right)\left|\Psi(t)\right\rangle, \tag{338}\]
and
\[S_{mi}\equiv\frac{\delta\left\langle\Psi(t)\right|}{\delta C_{mi}(t)}\left( \hat{H}-i\hbar\frac{\partial}{\partial t}\right)\left|\Psi(t)\right\rangle. \tag{339}\]
To simplify the writing we define
\[\mathbb{D}=\left\langle\Phi(t)|F^{+}F|\Phi(t)\right\rangle. \tag{340}\]
By considering Equation (282) for \(|\Phi(t)\rangle\) we have
\[\frac{\delta}{\delta C_{mi}^{*}(t)}\mathbb{D}^{\frac{1}{2}}=\frac{1}{2}\frac{ \left\langle\Phi(t)|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle}{ \mathbb{D}^{\frac{1}{2}}}, \tag{341}\]
and
\[\frac{\partial}{\partial t}\mathbb{D} = \sum_{nj}\frac{d}{dt}C_{nj}^{*}(t)\left\langle\Phi(t)|\hat{a}_{j} ^{+}\hat{a}_{n}F^{+}F|\Phi(t)\right\rangle \tag{342}\] \[+ \sum_{nj}\frac{d}{dt}C_{nj}(t)\left\langle\Phi(t)|F^{+}F\hat{a}_ {n}^{+}\hat{a}_{j}|\Phi(t)\right\rangle.\]
From the time dependence (282) and (283) of \(|\Phi(t)\rangle\) we have
\[i\hbar\frac{\partial}{\partial t}\left|\Phi(t)\right\rangle=H_{00}\left|\Phi( t)\right\rangle+i\hbar\sum_{nj}\frac{d}{dt}C_{nj}(t)\hat{a}_{n}^{+}\hat{a}_{j} \left|\Phi(t)\right\rangle. \tag{343}\]
We use the above expressions and Equation (281) of \(|\Psi(t)\rangle\) and obtain the following expressions
\[\frac{\delta\left\langle\Psi(t)\right|}{\delta C_{mi}^{*}(t)} = \frac{\left\langle\Phi(t)\right|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}}{\mathbb{D}^{\frac{1}{2}}}-\frac{1}{2}\frac{\left\langle\Phi(t)|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle}{\mathbb{D}}\left\langle\Psi(t)\right|, \tag{344}\]

\[\frac{\delta\left\langle\Psi(t)\right|}{\delta C_{mi}(t)} = -\frac{1}{2}\frac{\left\langle\Phi(t)|F^{+}F\hat{a}_{m}^{+}\hat{a}_{i}|\Phi(t)\right\rangle}{\mathbb{D}}\left\langle\Psi(t)\right|, \tag{345}\]
\[\left(\hat{H}-i\hbar\frac{\partial}{\partial t}\right)\left|\Psi(t)\right\rangle =\frac{\left(\hat{H}-i\hbar\frac{\partial}{\partial t}\right)F\left|\Phi(t) \right\rangle}{\mathbb{D}^{\frac{1}{2}}}+\frac{i\hbar}{2}\frac{F\left|\Phi(t) \right\rangle}{\mathbb{D}^{3/2}}\frac{\partial}{\partial t}\mathbb{D}. \tag{346}\]
Putting together the above equations, we obtain
\[0=R_{mi} = \frac{1}{\mathbb{D}}\left\langle\Phi(t)|\hat{a}_{i}^{+}\hat{a}_{ m}F^{+}\hat{H}F|\Phi(t)\right\rangle \tag{347}\] \[- \frac{1}{2}\frac{1}{\mathbb{D}}\left\langle\Phi(t)|\hat{a}_{i}^{ +}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle H_{00}\] \[- \frac{1}{2}\frac{1}{\mathbb{D}^{2}}\left\langle\Phi(t)|\hat{a}_{i }^{+}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle\left\langle\Phi(t)|F^{+}\hat{H}F| \Phi(t)\right\rangle\] \[- \frac{i\hbar}{\mathbb{D}}\sum_{nj}\frac{d}{dt}C_{nj}(t)\left\langle \Phi(t)|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}|\Phi(t)\right\rangle\] \[+ \frac{3}{4}\frac{i\hbar}{\mathbb{D}^{2}}\sum_{nj}\frac{d}{dt}C_{nj }(t)\left\langle\Phi(t)|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle \left\langle\Phi(t)|F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}|\Phi(t)\right\rangle\] \[+ \frac{1}{4}\frac{i\hbar}{\mathbb{D}^{2}}\sum_{nj}\frac{d}{dt}C_{nj }^{*}(t)\left\langle\Phi(t)|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F|\Phi(t)\right\rangle \left\langle\Phi(t)|\hat{a}_{j}^{+}\hat{a}_{n}F^{+}F|\Phi(t)\right\rangle,\]
\[0=-2S_{mi} = \frac{1}{\mathbb{D}^{2}}\left\langle\Phi(t)|F^{+}F\hat{a}^{+}_{m} \hat{a}_{i}|\Phi(t)\right\rangle\left\langle\Phi(t)|F^{+}\hat{H}F|\Phi(t)\right\rangle \tag{348}\] \[- \frac{1}{\mathbb{D}}\left\langle\Phi(t)|F^{+}F\hat{a}^{+}_{m} \hat{a}_{i}|\Phi(t)\right\rangle H_{00}\] \[- \frac{i\hbar}{\mathbb{D}^{2}}\left\langle\Phi(t)|F^{+}F\hat{a}^{ +}_{m}\hat{a}_{i}|\Phi(t)\right\rangle\sum_{nj}\frac{d}{dt}C_{nj}(t)\left\langle \Phi(t)|F^{+}F\hat{a}^{+}_{n}\hat{a}_{j}|\Phi(t)\right\rangle\] \[+ \frac{i\hbar}{2}\frac{1}{\mathbb{D}^{2}}\left\langle\Phi(t)|F^{+ }F\hat{a}^{+}_{m}\hat{a}_{i}|\Phi(t)\right\rangle\] \[\bigg{[}\sum_{nj}\frac{d}{dt}C^{*}_{nj}(t)\left\langle\Phi(t)| \hat{a}^{+}_{j}\hat{a}_{n}F^{+}F|\Phi(t)\right\rangle\] \[+\sum_{nj}\frac{d}{dt}C_{nj}(t)\left\langle\Phi(t)|F^{+}F\hat{a} ^{+}_{n}\hat{a}_{j}|\Phi(t)\right\rangle\bigg{]}.\]
We carry out a power expansion of Equation (282)
\[\left\langle\Phi(t)\right|=\left\langle\Phi_{0}(t)\right|\left[1+\sum_{mi}C^{ *}_{mi}(t)\hat{a}^{+}_{i}\hat{a}_{m}+\cdots\right], \tag{349}\]
and
\[\left|\Phi(t)\right\rangle=\left[1+\sum_{mi}C_{mi}(t)\hat{a}^{+}_{m}\hat{a}_{i }+\cdots\right]\left|\Phi_{0}(t)\right\rangle. \tag{350}\]
Since \(H_{00}\) is a number, the expectation values of an operator between \(|\Phi_{0}(t)\rangle\) states are identical to those calculated between the time-independent \(|\Phi_{0}\rangle\) states. This means that the expression (273) is also valid in the form
\[\frac{\left\langle\Phi_{0}(t)|\hat{a}^{+}_{i}\hat{a}_{m}F^{+}\hat{H}F|\Phi_{ 0}(t)\right\rangle}{\left\langle\Phi_{0}(t)|F^{+}F|\Phi_{0}(t)\right\rangle}= H_{00}\frac{\left\langle\Phi_{0}(t)|\hat{a}^{+}_{i}\hat{a}_{m}F^{+}F|\Phi_{0}(t) \right\rangle}{\left\langle\Phi_{0}(t)|F^{+}F|\Phi_{0}(t)\right\rangle}. \tag{351}\]
We use the approximation
\[\frac{\left\langle\Phi(t)|F^{+}\hat{H}F|\Phi(t)\right\rangle}{\left\langle\Phi (t)|F^{+}F|\Phi(t)\right\rangle}\simeq\frac{\left\langle\Phi_{0}(t)|F^{+}\hat {H}F|\Phi_{0}(t)\right\rangle}{\left\langle\Phi_{0}(t)|F^{+}F|\Phi_{0}(t) \right\rangle}=H_{00}. \tag{352}\]
This expression has been obtained by using the expansions (349) and (350) in both numerator and denominator and by neglecting all the terms containing \(C\)'s.
By using the expansions (349) and (350) and the approximation (352) in Equation (348) we obtain the relation
\[0 = \sum_{nj}\frac{d}{dt}C_{nj}(t)\frac{\left\langle\Phi_{0}|F^{+}F \hat{a}^{+}_{n}\hat{a}_{j}|\Phi_{0}\right\rangle}{\left\langle\Phi_{0}|F^{+}F |\Phi_{0}\right\rangle} \tag{353}\] \[- \sum_{nj}\frac{d}{dt}C^{*}_{nj}(t)\frac{\left\langle\Phi_{0}| \hat{a}^{+}_{j}\hat{a}_{n}F^{+}F|\Phi_{0}\right\rangle}{\left\langle\Phi_{0}|F^ {+}F|\Phi_{0}\right\rangle}.\]
We consider the expansions (349) and (350) and the approximation (352) in Equation (347) and obtain
\[\begin{aligned}R_{mi}=\;&\frac{1}{\mathbb{D}}\left\langle\Phi_{0}(t)\right|\left[1+\sum_{nj}C_{nj}^{*}(t)\hat{a}_{j}^{+}\hat{a}_{n}+\cdots\right]\hat{a}_{i}^{+}\hat{a}_{m}F^{+}\hat{H}F\left[1+\sum_{nj}C_{nj}(t)\hat{a}_{n}^{+}\hat{a}_{j}+\cdots\right]\left|\Phi_{0}(t)\right\rangle\\ -\;&\frac{1}{2}\frac{1}{\mathbb{D}}\left\langle\Phi_{0}(t)\right|\left[1+\sum_{nj}C_{nj}^{*}(t)\hat{a}_{j}^{+}\hat{a}_{n}+\cdots\right]\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\left[1+\sum_{nj}C_{nj}(t)\hat{a}_{n}^{+}\hat{a}_{j}+\cdots\right]\left|\Phi_{0}(t)\right\rangle\\ &\left[H_{00}+\frac{\left\langle\Phi(t)|F^{+}\hat{H}F|\Phi(t)\right\rangle}{\left\langle\Phi(t)|F^{+}F|\Phi(t)\right\rangle}\right]\\ +\;&\mathscr{K}\left[\frac{dC}{dt}\right], \tag{354}\end{aligned}\]
where we have indicated with \(\mathscr{K}\) the terms depending on the derivatives of the \(C\)'s. We retain only the terms containing a single \(C\) and use the approximation (352) to obtain
\[R_{mi} = \frac{1}{\mathbb{D}}\left\langle\Phi_{0}(t)\right|\hat{a}_{i}^{+} \hat{a}_{m}F^{+}\hat{H}F\left|\Phi_{0}(t)\right\rangle \tag{355}\] \[+ \frac{1}{\mathbb{D}}\sum_{nj}C_{nj}^{*}(t)\left\langle\Phi_{0}(t )\right|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}F^{+}\hat{H}F\left| \Phi_{0}(t)\right\rangle\] \[+ \frac{1}{\mathbb{D}}\sum_{nj}C_{nj}(t)\left\langle\Phi_{0}(t) \right|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}\hat{H}F\hat{a}_{n}^{+}\hat{a}_{j}\left| \Phi_{0}(t)\right\rangle\] \[- \frac{1}{\mathbb{D}}\left\langle\Phi_{0}(t)\right|\hat{a}_{i}^{+ }\hat{a}_{m}F^{+}F\left|\Phi_{0}(t)\right\rangle H_{00}\] \[- \frac{1}{\mathbb{D}}\sum_{nj}C_{nj}^{*}(t)\left\langle\Phi_{0}(t )\right|\hat{a}_{j}^{+}\hat{a}_{n}\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\left|\Phi_ {0}(t)\right\rangle H_{00}\] \[- \frac{1}{\mathbb{D}}\sum_{nj}C_{nj}(t)\left\langle\Phi_{0}(t) \right|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}\left|\Phi_{ 0}(t)\right\rangle H_{00}\] \[+ \mathscr{K}\left[\frac{dC}{dt}\right].\]
By applying the condition (351), the terms without \(C\)'s cancel and we obtain
\[R_{mi}=\sum_{nj}A_{minj}\,C_{nj}(t)+\sum_{nj}B_{minj}\,C_{nj}^{*}(t)+\mathscr{K}\left[\frac{dC}{dt}\right]=0, \tag{356}\]

where we used the definitions (275) and (276).
Using the relation (353), we can write the term dependent on the derivatives of the \(C\)'s as:
\[\begin{aligned}\mathscr{K}\left[\frac{dC}{dt}\right] &= -i\hbar\sum_{nj}\frac{dC_{nj}(t)}{dt}\left[\frac{\left\langle\Phi_{0}(t)\right|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}\left|\Phi_{0}(t)\right\rangle}{\left\langle\Phi_{0}(t)\right|F^{+}F\left|\Phi_{0}(t)\right\rangle}\right.\\ &\qquad\left.-\,\frac{\left\langle\Phi_{0}(t)\right|\hat{a}_{i}^{+}\hat{a}_{m}F^{+}F\left|\Phi_{0}(t)\right\rangle\left\langle\Phi_{0}(t)\right|F^{+}F\hat{a}_{n}^{+}\hat{a}_{j}\left|\Phi_{0}(t)\right\rangle}{\left\langle\Phi_{0}(t)\right|F^{+}F\left|\Phi_{0}(t)\right\rangle^{2}}\right]\\ &\equiv -i\hbar\sum_{nj}\frac{dC_{nj}(t)}{dt}M_{minj}. \tag{357}\end{aligned}\]
| Abbreviation | Meaning |
| --- | --- |
| CCRPA | Core coupling RPA |
| DFT | Density Functional Theory |
| EOM | Equation of Motion |
| HF | Hartree–Fock |
| \(hp\) | Hole–particle |
| IPM | Independent Particle Model |
| KS | Kohn–Sham |
| MF | Mean-Field |
| ONR | Occupation Number Representation |
| \(ph\) | Particle–hole |
| PVCRPA | Particle-vibration coupling RPA |
| QRPA | Quasi-particle RPA |
| RPA | Random Phase Approximation |
| r-RPA | Renormalized RPA |
| s.p. | Single particle |
| SRPA | Second RPA |
| TDHF | Time-dependent Hartree–Fock |

Table 1: Abbreviations |
2310.05939 | Learning Cyber Defence Tactics from Scratch with Multi-Agent
Reinforcement Learning | Recent advancements in deep learning techniques have opened new possibilities
for designing solutions for autonomous cyber defence. Teams of intelligent
agents in computer network defence roles may reveal promising avenues to
safeguard cyber and kinetic assets. In a simulated game environment, agents are
evaluated on their ability to jointly mitigate attacker activity in host-based
defence scenarios. Defender systems are evaluated against heuristic attackers
with the goals of compromising network confidentiality, integrity, and
availability. Value-based Independent Learning and Centralized Training
Decentralized Execution (CTDE) cooperative Multi-Agent Reinforcement Learning
(MARL) methods are compared revealing that both approaches outperform a simple
multi-agent heuristic defender. This work demonstrates the ability of
cooperative MARL to learn effective cyber defence tactics against varied
threats. | Jacob Wiebe, Ranwa Al Mallah, Li Li | 2023-08-25T14:07:50Z | http://arxiv.org/abs/2310.05939v1 | # Learning Cyber Defence Tactics from Scratch with Multi-Agent Reinforcement Learning
###### Abstract
Recent advancements in deep learning techniques have opened new possibilities for designing solutions for autonomous cyber defence. Teams of intelligent agents in computer network defence roles may reveal promising avenues to safeguard cyber and kinetic assets. In a simulated game environment, agents are evaluated on their ability to jointly mitigate attacker activity in host-based defence scenarios. Defender systems are evaluated against heuristic attackers with the goals of compromising network confidentiality, integrity, and availability. Value-based Independent Learning and Centralized Training Decentralized Execution (CTDE) cooperative Multi-Agent Reinforcement Learning (MARL) methods are compared revealing that both approaches outperform a simple multi-agent heuristic defender. This work demonstrates the ability of cooperative MARL to learn effective cyber defence tactics against varied threats.
## 1 Introduction
The pace of technological advancements in modern network infrastructure has driven demand for highly skilled and experienced cybersecurity professionals in roles such as penetration testing, threat hunting, and incident response. Meanwhile, machine learning and deep learning have risen as dominant technologies in the space of complex problem-solving. The emerging field of autonomous cyber operations has sought to harness the potential of Reinforcement Learning (RL), deep learning, and multi-agent systems to model the complexity of the cyber battlespace. Autonomous agents operating in cyber defence environments enable rapid tactical-level response to threats given complex observational inputs [21].
Cyber defence analysts typically employ a chain of decisions leading them from the first discovery of a threat to incident response and mitigation. Human experts are the best tool available for this role. However, human ability in cyber defence tasks faces the challenges of, among others, attention allocation, cognitive load, lack of measurable impact, and reaction time [17]. Furthermore, cyber defence is characterized by a large search space of computer network features with which vulnerabilities may be exposed and exploited. To provide coverage over this space, teams of analysts typically coordinate actions. Autonomous RL agents can derive tactical policies in concert to achieve a coordinated effect, mitigating some of the limitations of human actors. This multi-agent approach has yet to be shown in a complex network defence setting.
This work demonstrates the applicability of cooperative Multi-Agent Reinforcement Learning (MARL) for tactical-level decision-making in cyber defence. The results of this work show that cooperative MARL can learn to defend a simulated network environment against three heuristic attack patterns with an emphasis on host-based identification and mitigation of lateral movement.
There are numerous advantages to a cooperative MARL approach over single-agent RL for ACD: (1) input-output spaces can be constrained, avoiding the curse of dimensionality while enabling a high resolution representation of the environment, (2) agents can learn specialized roles relating to a specific function or area of a network, and (3) segregating agents into local areas of control can allow for improved robustness (e.g., if a particular agent is compromised an attacker cannot leak observations from other agents/network segments).
This work makes the following contributions:
1. To the best of our knowledge, CyMARL (Cyber MARL) is the first cooperative MARL training environment for enterprise network defence tasks. It extends the CybORG simulator with additional game types, actions, and network topologies and it provides a PyMARL environment interface with which it is possible to train tens of open-source algorithms.
2. A comparison of cooperative MARL approaches that learn independent and centrally-learned policies in an ACD context.
3. A demonstration of the adaptability of cooperative MARL training when encountering multiple attacker types and action sets.
The remainder of this paper is organized as follows. Section 2 reviews related work in autonomous and multi-agent applications for cyber defence and advancements in cooperative MARL. Section 3 discusses the background of RL, cooperative MARL, and autonomous cyber defence. Section 4
describes the experimental design used to evaluate the two cooperative MARL approaches tested, and Section 5 presents the experiments. Section 6 discusses the results. Section 7 suggests future work and concludes.
## 2 Related Work
RL for specific cybersecurity tasks has been studied in the context of DDoS protection [21], anomaly-based intrusion detection [23], and penetration testing [1, 15], among other specific use cases [20]. RL has been applied to a variety of decision-making tasks relating to cyber defence [22]. Although ACD techniques have been evaluated using multiple game environments and solving methods [20], to the best of our knowledge, this work is the first to consider multi-agent coordination of tactical decision-making using state-of-the-art RL techniques.
### ACD Training Environments
Game environments are commonly modelled as graph-based abstractions of computer networks. Simple defensive decision-making games can be learned using tabular RL methods [14, 1]. In more complex game environments, deep RL is preferred for its generalizability and representational capacity. Methods such as DDQN [20], A3C [15], and PPOL [16] have been shown to learn to minimize attacker propagation in graph-based network games modelled as Partially-Observable Markov Decision Processes (POMDP) [13, 20].
More realistic behaviours can be trained using more detailed simulated data. Simulators such as CybORG [23], CyberBattleSim [22], and FARLAND [16] emphasize host-based features from which agents learn. Host-based simulations improve task realism by providing more possible states, represented by host processes, vulnerabilities, sessions, and opportunities for Red and Blue agents to interact with these features. In contrast, Yawning Titan [1] enables greater detail for agents to observe and affect network traffic in simulation.
Foley _et al._ demonstrate how a hierarchical PPO algorithm can learn to adapt to different attacker types to defend a high value server from a heuristic attacker in a simulated environment [24], winning CAGE Challenge 2 [25]. Building from this foundation using the CybORG simulator, Wolk _et al._ analyze the generalizability of various RL methods used in CAGE Challenge 2 [26].
Emulation has been used to bridge the sim-to-real gap between RL systems that can be trained in defence games and real network operations [11, 16]. Emulated environments are impractical for training agents in many research settings due to higher requirements for compute and clock-time than comparable simulations, instead providing a valuable validation component for agents trained in simulation. Red versus Blue asymmetric attack-defence games have used self-play to train defenders against RL-trained attackers [1, 16]. RL self-play and adversarial attacks on RL cyber defenders [23] are outside the scope of this research.
### Multi-Agent Systems for ACD
Communication between heuristic defender agents has been shown to improve their ability to protect a simulated network from DDoS attacks [11]. Heuristic defender agents programmed to communicate filter configurations between teams had improved reaction time and effectiveness than those that did not share information. MARL systems have defended a simulated network against DDoS attacks by selectively throttling network traffic without the use of communication [13]. MARL systems have successfully performed anomaly-based intrusion detection when framed as a classification task [27, 28].
One of the most commonly used cooperative MARL environments for research is the StarCraft Multi-Agent Challenge (SMAC) [2]. SMAC provides a variety of scenarios ranging from easy to very hard in which virtual units learn to defeat each other in a battle video game. A comparative study of cooperative MARL algorithms across a variety of environments, including SMAC, showed that CTDE methods generally perform better than independent learning [29]. SMAC tasks are assumed to have transferability for general tactical-level decision-making as a result of agents needing to target actions in a coordinated way toward individual enemies in order to achieve success. The cyber defence problem is modelled similarly, using actions to selectively affect host characteristics.
In practice, the QMIX MARL algorithm [28] has outperformed state-of-the-art value-based and policy-based methods at a range of tasks [29, 20]. Most notably, QMIX has been shown, with the help of specific implementation tricks, to discover optimal or near-optimal policies on all SMAC [2] scenarios [14].
## 3 Background
A useful ACD agent must be able to interpret its environment and derive a logical chain of decisions. The RL learner, an autonomous agent, takes sequential actions given inputs from its environment. The agent's goal is to choose actions that maximize its cumulative reward, referred to as _return_, received from the environment. To learn which series of actions will produce favourable results, an agent must explore the environment, receive rewards, and update its understanding. This process occurs iteratively, allowing the agent to refine its knowledge over many steps and episodes of play. To provide an analysis of how learning architectures compare when used to learn cyber defence tasks, this work constrains the approaches studied to model-free, value-based algorithms.
### The RL Framework
In a POMDP, an agent will take action \(a_{t}\) causing the environment to step to the next state \(s_{t+1}\) based on a transition probability \(p\), where \(p(s_{t+1}|s_{t},a_{t})\) is the transition function.
The environment generates two outputs from the state transition: the reward \(r(s,a)\) as a function of the state and action at time \(t-1\), and the observation \(o(s)\) as a function of the state at time \(t\). The observation is some subset of state information determined by the observation function. A _game_ in this context generalizes the POMDP and refers to an environment with a consistent set of rules in which one or more agents interact.
An agent's _policy_ \(\pi(a|o)\) is a mapping of actions to the observation space. The policy, therefore, determines a series of actions for an agent to take. An agent adjusts its policy over many episodes of a game to optimize for its cumulative reward in an episode, its _return_ (Sutton and Barto, 2018).
A value-based agent will predict the value of states or state-action pairs using its value function and decide on an action that will maximize its expected future return \(\pi(o)=\arg\max_{a\in A}Q(o,a)\). The value function is calculated from the expected discounted future return:
\[Q(o,a)=\alpha[r+\gamma\max_{a_{t+1}}Q(o_{t+1},a_{t+1})]+(1-\alpha)Q(o,a) \tag{1}\]
where \(\alpha\) is the learning rate and the discount rate \(\gamma\in(0,1)\) weights rewards less as the agent looks \(k\) states further into the future. To encourage exploration, rather than continuing to revisit known high-value states, an \(\epsilon\)-greedy strategy is commonly employed, which sets a probability \(\epsilon\) that an agent will randomly select an action rather than taking the action with the highest expected reward.
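As an illustration of the update in Equation (1) combined with \(\epsilon\)-greedy exploration, the following minimal tabular sketch (not taken from the paper; all constants and sizes are arbitrary) shows how action selection and the value update fit together:

```python
import numpy as np

n_obs, n_actions = 16, 4                      # arbitrary sizes for illustration
Q = np.zeros((n_obs, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def select_action(obs):
    if rng.random() < epsilon:                # explore with probability epsilon
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[obs]))             # otherwise act greedily

def update(obs, action, reward, next_obs):
    # Eq. (1): blend the bootstrapped target with the current estimate.
    target = reward + gamma * np.max(Q[next_obs])
    Q[obs, action] = alpha * target + (1 - alpha) * Q[obs, action]
```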
With deep value-based methods, sampling from experience replay (Lin, 1992) often produces more favourable results, as demonstrated by Mnih _et al._ with Deep Q-Networks (2013). A replay buffer can improve sample efficiency by exposing the agent to more learning experience without generating each sample through actions. Recurrent Neural Networks (RNN) allow RL models to learn long-term dependencies between data samples, which is useful for accurate predictions in partially observable environments (Bakker, 2001).
### Multi-Agent Reinforcement Learning
The cooperative multi-agent game builds upon the POMDP defined previously by allowing multiple agents to take actions and receive rewards simultaneously within a single timestep (Littman, 1994; Busoniu _et al._, 2010). Although each agent, denoted by \(i\), receives individual observations \(o^{i}\) from the environment, a joint reward \(r\) is determined based on the state and actions of all agents. Agents collectively seek to optimize the joint reward. Figure 1 provides a cooperative MARL representation of agent-environment interaction. The game's global state depends on the vector of actions \(\mathbf{a}_{t}=\{a_{t}^{1},...,a_{t}^{n}\}\), where \(i=1,2,...n\) for \(n\) agents. At each timestep, the environment will output a joint reward as a scalar and an observation vector \(\mathbf{o}_{t+1}=\{o_{t+1}^{1},...,o_{t+1}^{n}\}\) that includes individual observations for each agent.
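A sketch of this interaction loop is given below. The environment API (reset/step returning one observation per agent and a single shared scalar reward) is an assumption for illustration, not the actual CyMARL interface:

```python
def run_episode(env, agents, max_steps):
    observations = env.reset()                 # one local observation per agent
    episode_return = 0.0
    for _ in range(max_steps):
        # all agents act simultaneously, each on its own observation
        joint_action = [agent.act(obs)
                        for agent, obs in zip(agents, observations)]
        next_obs, joint_reward, done = env.step(joint_action)
        for agent, obs, act, nxt in zip(agents, observations,
                                        joint_action, next_obs):
            agent.observe(obs, act, joint_reward, nxt)  # shared scalar reward
        episode_return += joint_reward
        observations = next_obs
        if done:
            break
    return episode_return
```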
Simultaneous actions in a cooperative MARL setting allow for greater exploration. Cooperative MARL also has the advantage of dividing large tasks into manageable goals, thus handling greater task complexity. However, since the game's current state depends on the joint actions of all agents, an agent's actions can affect other agents' observations. Since all agents are learning and updating their policies, this interaction creates _multi-agent non-stationarity_ in the environment; the optimal policy is a moving target. In a single-agent game, the fixed probability of a stochastic transition function can be incorporated into the learned policy, for example, with the use of a replay buffer. However, in the multi-agent case, actions of "external agents" affect observations in a way that cannot be explained by changes in the observing agent's policy (Lowe _et al._, 2017). It can therefore be challenging for an agent to learn a stable policy when the actions of its peers are silently affecting its reward and observations.
### Independent Q-Learning
A naive approach to cooperative multi-agent learning systems is to decentralize prediction and control, referred to as _independent learning_. This approach employs self-contained RL agents that act out separate policies based on independent observations while seeking to maximize a joint reward. Each agent is responsible for learning a policy from their individual observations. A decentralized approach allows for greater scalability, at the cost of high variance due to the unmitigated non-stationarity. Independent Q-Learning (IQL), as implemented in this work, is equivalent to training independent DQN-style agents1 to learn to take simultaneous actions to maximize their joint reward.
Footnote 1: This work utilizes an implementation of IQL from PyMARL2(Hu _et al._, 2021) in which each DQN-style agent uses an RNN in addition to other implementation tricks to optimize for cooperative play.
**QMIX**
QMIX (Rashid _et al._, 2018) is a Centralized Training with Decentralized Execution (CTDE) (Lowe _et al._, 2017) algorithm that uses a central mixing architecture to perform _value decomposition_ (Sunehag _et al._, 2017). With CTDE, semi-independent agents follow separate policies and receive updates periodically from a central learner. The central learner trains on information from all agents and provides a learning update, allowing each agent's policy to condition on the policies of their peers, mitigating the non-stationarity problem. The foundational difference between QMIX and IQL implementations in this work is that QMIX performs learning updates centrally.

Figure 1: Multi-Agent RL Agent-Environment Framework.
### Autonomous Cyber Defence
ACD does not seek to replace tools that are successful in the field, such as anomaly-based intrusion detection. It instead aims to build a tactical-level decision-making framework to integrate with existing technologies. In this research, the decision space is modelled as a simulation on top of a tool-based abstraction of the network state. An RL simulation for ACD must model the elements of the environment to the level of detail that will allow the agent to learn tactical-level action chains when presented with a series of alerts about the underlying network status. A multi-agent framework further allows for the decentralization of the decision-making processes while maintaining an input-output space that is constrained to a level that can be learned effectively by current RL methods.
The high-level actions of the defender in this game represent the choices for a defender to selectively understand or act against the threat. The tactical decision-making competency of agents can then be inferred from their ability to minimize the impact of the attack. Many thousands or millions of iterations are often required for RL to learn reasonable policies. This problem is exacerbated by the lower sample efficiency of partially observable environments [11]. By simulating ACD, RL systems can be trained over many iterations in an abstraction of the underlying computer network environment. Simulation requires less compute than the alternative emulated approach while still offering enough complexity to challenge current state-of-the-art designs.
Cyber Operations Research Gym (CybORG) is a framework that provides a simulated cyber operations environment for training and evaluating RL agents [2]. CybORG simulates information technology systems related to network security that a cybersecurity professional may use in network defence or penetration testing. The simulated environment presented in this work, CyMARL, adapts CybORG to create a multi-agent host-based monitoring game for training tactical cyber defence. CyMARL includes a PyMARL2 environment allowing for integration with the tens of open-source cooperative MARL algorithms built on the PyMARL framework.
Footnote 2: PyMARL is an open-source MARL research project that allows algorithms to be built from existing deep RL components such as agents and trainers. It is included as part of the SMAC environment and paper [1]. The repository for PyMARL can be found at [https://github.com/oxwhirl/pymarl](https://github.com/oxwhirl/pymarl).
## 4 Experimental Design
To demonstrate the applicability of cooperative MARL for tactical-level decision-making for ACD, three game scenarios are evaluated against three attacker types. CAGE Challenge 2 [1] is a central contributor to many aspects of this experiment, including the attacker models, network topology, and agent actions. CyMARL extends the complexity of CAGE Challenge 2 by employing new scenarios (with diverse objectives, attacker behaviours, and reward functions), modified observation and action spaces, and a cooperative multi-agent framework.
### Simulated Network Topology
The simulated game environment is composed of nine hosts, five in a user subnet and four in an operational subnet. Hosts are a mix of Linux and Windows, with unique exploits for each OS. Hosts simulate processes, users, active sessions, and network interfaces. Rewards are calculated from the importance score of hosts. Hosts in the user subnet have a score of 0.1, operational subnet hosts, 1, and the operational server (located in the operational subnet), 10. Each host in a _compromised_ state incurs a negative reward for the defender team equal to its importance score.
A heuristic attacker agent begins each episode with an initial foothold on a user host and moves between hosts by scanning for and exploiting vulnerable services. The attacker can move laterally to a target host if one of its sessions' hosts has visibility of the target host through its network interface. The attacker's actions follow a hierarchy: it must first discover a host, then a port, before attempting to move laterally on the network using an exploit. Escalating session privilege can then be performed. The attacker's session cannot be removed from the initially compromised host, allowing for episodic gameplay where both sides have opportunities to influence the reward. RL defender agents are segregated by subnet, one agent responsible for the user subnet and the other controlling the operational subnet. An example of the agent-network interface is shown in Figure 2.
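The attacker's action hierarchy described above can be summarized with the following illustrative sketch; the function and attribute names are hypothetical and are not taken from the CybORG codebase:

```python
def attacker_policy(state):
    """Pick the next action following the discover -> scan -> exploit ->
    escalate hierarchy; `state` and its attributes are hypothetical."""
    for host in state.visible_hosts:
        if not host.discovered:
            return ("discover_host", host)     # find reachable hosts
        if not host.ports_scanned:
            return ("scan_ports", host)        # look for vulnerable services
        if host.vulnerable and not host.session:
            return ("exploit", host)           # establish a user-level session
        if host.session and not host.root:
            return ("escalate", host)          # gain root/admin privileges
    return ("sleep", None)                     # nothing left to do this turn
```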
### Attack Patterns
A real-world attacker is often not solely motivated to establish communication channels on a network. The heuristic attacker's behaviour in this game is differentiated by its goal, which can be _confidentiality_, _integrity_, or _availability_. In each case, the attacker takes different actions upon establishing a session on a host, though the heuristic for lateral movement is the same across all scenarios. In the confidentiality scenario, the attacker's intent is to spy on privileged information within the network but not take actions to modify it directly. The confidentiality attacker achieves "compromise" for each host it is connected to at each timestep. The integrity attacker seeks to modify files stored on a host. At each timestep that modified data exists, a host is considered compromised. The availability attacker will spawn a Denial of Service (DoS) malware process that represents a consumption of CPU cycles, affecting user availability. Similarly, each host in this scenario with a malware process running is compromised. This does not affect the defender's ability to monitor the victim host. Although the attacker's behaviour is a simplification of real attack patterns, the difference in objective affects how the threat appears to the defender and which actions it should take.
To compromise a host in the integrity and availability scenarios, an additional action must be taken once a session is established on a victim host using the exploit action. The tamper action is taken by the integrity attacker to simulate the modification of important files. The deny action is taken by the availability attacker to start the DoS process. The two-step scoring process in the integrity and availability scenarios decouples the observation of attacker presence on a host from the signal of a host compromise via the negative reward.
The cumulative reward for each strategy serves to model the risk to the network while malicious artifacts exist on network hosts. As the number of timesteps increases, there is a greater probability that the attacker will have collected privileged information or disrupted normal processes on the network. Likewise, the severity of the risk is proportional to the importance of the host that becomes compromised.
### Observations
Host-based monitoring systems are a common and effecti tool for cyber defence operations, particularly in situations where an attacker has made an initial breach and is attempting to gain Command and Control (C2) channels within a network. In CyMARL, each defender agent's observations are modelled as an abstraction of information that may be provided from a host-based monitoring service. The defender receives a set of six flags at each turn for each host (denoted by the variable \(x\)) within its area of control:
1. Host \(x\) was scanned last turn.
2. An exploit was run on a Host \(x\) service last turn.
3. There are no malicious sessions on Host \(x\).
4. Host \(x\) status is unknown.
5. A malicious session on Host \(x\) exists with user privileges.
6. A malicious session on Host \(x\) exists with root/admin privileges.
The flags are one-hot encoded into a vector that is assigned to an agent ID for interface with PyMARL. These observations do not specifically alert of malicious files or processes so the defender must learn patterns between the information encoded within its observation space and its rewards.
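A minimal sketch of such an encoding is shown below. The flag names and the exact layout are assumptions for illustration; the actual CyMARL encoding may differ:

```python
import numpy as np

FLAGS = ["scanned", "exploited", "no_session", "status_unknown",
         "user_session", "root_session"]       # one entry per flag in the list above

def encode_observation(host_flags):
    """host_flags: one dict of flag-name -> bool per host in the subnet."""
    vec = np.zeros(len(host_flags) * len(FLAGS), dtype=np.float32)
    for h, flags in enumerate(host_flags):
        for f, name in enumerate(FLAGS):
            if flags.get(name, False):
                vec[h * len(FLAGS) + f] = 1.0
    return vec
```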
### Defender Actions
To mitigate attacker activity, the defender agents select actions to monitor, remove suspicious processes or files, restore hosts, or gather file information. monitor returns the observation vector and checks all hosts within an agent's vision for new processes that are then added to a suspicious process list. An agent can then choose to remove suspected malicious processes from a specific host. As a more aggressive action, an agent can re-image a host using restore. Restoring is the only way to remove an attacker that has escalated its privilege on a host. It will also remove any malicious processes or files. However, the restore action has an associated penalty that scales with the importance score of the target host.
In the integrity scenario, agents have additional actions to manage the specific threat type. Agents can take an analyze action to determine the security characteristics of files on a host, allowing them to identify any malicious files. To remove the discovered files, the data_repair action is required. In the availability scenario, the remove action can be used to target DoS processes and simulated TCP sessions. In the second set of trials, the misinform action is introduced, which generates a service on a host that can be misinterpreted by an attacker as vulnerable. The attacker may choose to exploit this process, which will always fail. A defender agent may choose to perform any action on any of its connected hosts. However, some actions are invalid (e.g., if a host is targeted with the remove action but does not have a malicious process running) and are penalized with a reward of -0.1.
### Heuristic Defender
To validate the performance of MARL systems, a heuristic multi-agent defender is used as a benchmark for each game scenario. The heuristic defender's design assumes it has access to a tailored expert system that provides a precise signal of malicious process creation. It relies on selected information from the true state and does not utilize the observation space of the MARL systems. In the confidentiality scenarios, this policy monitors all hosts within each agent's subnet, and restores hosts indicated by the process creation alert. The restore action was chosen because it scored higher in testing than using combinations of other actions. In the availability scenario, detection occurs due to a DoS process being created, not an attacker session. In the integrity scenarios, the heuristic defender will attempt to remove malware on a host that it has discovered to have a tampered file with a 50% probability and restore otherwise. In trials using the misinform action, the defender system creates a decoy process on a random host for the first five turns and then proceeds with its standard policy for the remainder of the episode. Since using monitor to detect and restore to re-image affected hosts takes two turns, the attacker has the opportunity to stay one step ahead if it continues to exploit new hosts. Although the heuristic defender has perfect game information, its policy is restrictive enough to allow the attacker opportunities to overcome its defences.

Figure 2: Simple network diagram with defender sensors and targets.
### Evaluation Metrics
For each experiment, two algorithms are evaluated: IQL and QMIX, implementing fully-decentralized and CTDE architectures, respectively. Agents do not directly share information with each other, but in the case of QMIX, a joint policy representation is learned centrally before being decomposed into separate behaviour policies. The architecture of cooperative MARL systems has been shown to significantly affect the learning ability of models at a range of tasks [15, 16]. Learning ability is the capacity of an RL model to generate a policy from its initial randomized state. A trained model that achieves a higher evaluation score (i.e., mean return) at a task has a higher learning ability than a lower-performing model given the same training opportunity.
MARL systems are trained for two million timesteps. A timestep is represented in the game as a turn in which each agent takes one action. Each trained model is evaluated over 1000 timesteps of play without learning. The evaluation score is the average of the mean return of five evaluation runs using separately trained static policies for each trial. Evaluation scores are compared using percent difference. The variance of models is expressed in the standard deviation of the mean score. Learning speed is an implicit evaluation criterion to mitigate the impracticality of long training times. Although there may be greater performance gains outside of the bounds of the training time for the model, training time is constrained to conserve computational resources. This allows for greater ease of reproducibility of these results and a reasonable expectation of performance given a fixed amount of model experience.
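The scoring and comparison just described amount to the following simple computations (a sketch; the function names are ours, not from the paper):

```python
import numpy as np

def evaluation_score(mean_returns):
    """Average (and spread) of mean returns over five trained policies."""
    return float(np.mean(mean_returns)), float(np.std(mean_returns))

def percent_difference(score_a, score_b):
    return 100.0 * abs(score_a - score_b) / ((abs(score_a) + abs(score_b)) / 2.0)
```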
## 5 Experiments
Two architectures are trained and compared at controlling two defender agents in three scenario types each with different attacker behaviour (confidentiality, integrity, and availability) in two sets of experiments. The first set uses the standard action set described in Section 4 and the second adds the misinform action to each scenario. The addition of a proactive action to the set of reactive actions sets the misinform experiment apart and allows for an evaluation of how the two MARL architectures perform under varied conditions.
To set initial conditions, a selection of hyperparameters was varied in a grid search in which MARL systems were trained to 500,000 timesteps on the confidentiality scenario, evaluating four parameters: batch size (128, 256), buffer size (5,000, 10,000), learning rate (0.005, 0.01), and TD-\(\lambda\) trace decay (None, 0.6). For each set of hyperparameter values, three random seeds were trained to provide a representative result. The confidentiality scenario is the simplest, since the attacker's exploit and the reward for compromise occur simultaneously. It is assumed that these hyperparameter values transfer reasonably well to the other scenarios. The hyperparameter values chosen were based on initial trials with the environment and the findings of Hu _et al._ and Hessel _et al._, who evaluated the effect of RL implementation designs in QMIX and DQN, respectively [12, 13].
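For concreteness, the grid search described above corresponds to the following enumeration of configurations (a sketch; the seed values shown are illustrative):

```python
from itertools import product

grid = {
    "batch_size": [128, 256],
    "buffer_size": [5_000, 10_000],
    "learning_rate": [0.005, 0.01],
    "td_lambda": [None, 0.6],
}
seeds = [0, 1, 2]                              # illustrative seed values

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
runs = [(cfg, seed) for cfg in configs for seed in seeds]
assert len(runs) == 2 ** 4 * 3                 # 16 configurations x 3 seeds = 48 runs
```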
The training curves of IQL and QMIX are shown in Figure 3 for the three baseline scenarios and Figure 4 for the misinform scenarios. The shaded area of each curve is the standard deviation of the average training return. The dashed line is the mean heuristic defender score for each scenario. Both IQL and QMIX outperform the heuristic multi-agent defender model in all scenarios. QMIX has a slight learning speed advantage in the confidentiality and integrity scenarios but otherwise, both architectures tend towards similar return scores. Table 1 presents the evaluation scores of IQL, QMIX, and the multi-agent heuristic defender at each scenario.
The option to generate decoy processes generally improved the performance of both architectures with a 19.8% and 8.3% average improvement in score for IQL and QMIX, respectively. However, QMIX had reduced performance in the confidentiality scenario and a negligible difference in the integrity scenario. The heuristic model using the misinform action at the start of each episode performed 2.7% better on average.
## 6 Discussion
The availability and integrity scenario types employ an attacker that must perform an additional action upon establishing a session to score points. The attacker in these scenario types is therefore slower and will tend to score less in a fixed-timestep game. On average, the MARL systems trained on the integrity and availability scenarios scored 19.1% and 36.7% higher, respectively, than on the confidentiality scenario. This effect is more pronounced in the heuristic, scoring 51.4% and 66.7% higher in the integrity and availability scenarios, respectively, than in the confidentiality scenario. Moreover, the addition of the misinform action narrowed the performance gap between IQL and QMIX from an 11.2% QMIX advantage to 0.2%. These results suggest that there is an attractive local minimum for each scenario that both architectures tend to settle in.
The multi-agent host-monitoring game presented does not require explicit coordination for a defender team to be successful. Moreover, the segregation of agents into subnets limits their ability to influence the state of other agents. As a result, IQL is able to discover policies that score closely to those of QMIX despite its disadvantages of lacking coordination and suffering from multi-agent non-stationarity. IQL also has the advantage of scalability. QMIX requires that Q-functions are learned centrally, thus limiting the number of possible agents since the central learner will eventually create a bottleneck. IQL does not require any central learning and therefore the number of agents in a game does not pose a constraint.
## 7 Conclusion and Future Work
Tactical decision-making in cyber defence contends with an expansive problem space necessitating expert knowledge and, in many cases, decentralization of effort to provide essential coverage. This work presented initial evidence of the success of cooperative MARL in generating decentralized, tactical-level control policies for a variety of host-based ACD scenarios. Independent and CTDE value-based cooperative MARL architectures can learn policies that outperform a basic heuristic model in this game. In particular, the performance of IQL at these tasks suggests that this approach may scale well to larger networks with more agents. Future gains in performance are expected to be possible both in terms of the learning ability of MARL systems and in the realism of training games. We suggest the following areas for the future development of cooperative MARL in the domain of ACD:
1. Techniques to approximate greater environmental realism, including the use of generative programs to increase the sample size of experimentation [8, 1],
2. Games offering more sophisticated and varied threat types using self-play [10] and adversarial machine learning attacks [11, 12, 13], and
3. Leveraging promising MARL techniques such as multi-agent policy-gradient methods [23] or hierarchical role assignment [24], transfer learning [25], attention mechanisms [26] and transformers for value-decomposition [15].
|
2310.13183 | Breaking through Deterministic Barriers: Randomized Pruning Mask
Generation and Selection | It is widely acknowledged that large and sparse models have higher accuracy
than small and dense models under the same model size constraints. This
motivates us to train a large model and then remove its redundant neurons or
weights by pruning. Most existing works pruned the networks in a deterministic
way, the performance of which solely depends on a single pruning criterion and
thus lacks variety. Instead, in this paper, we propose a model pruning strategy
that first generates several pruning masks in a designed random way.
Subsequently, along with an effective mask-selection rule, the optimal mask is
chosen from the pool of mask candidates. To further enhance efficiency, we
introduce an early mask evaluation strategy, mitigating the overhead associated
with training multiple masks. Our extensive experiments demonstrate that this
approach achieves state-of-the-art performance across eight datasets from GLUE,
particularly excelling at high levels of sparsity. | Jianwei Li, Weizhi Gao, Qi Lei, Dongkuan Xu | 2023-10-19T22:32:51Z | http://arxiv.org/abs/2310.13183v2 | # Breaking through Deterministic Barriers:
###### Abstract
It is widely acknowledged that large and sparse models have higher accuracy than small and dense models under the same model size constraints. This motivates us to train a large model and then remove its redundant neurons or weights by pruning. Most existing works pruned the networks in a deterministic way, the performance of which solely depends on a single pruning criterion and thus lacks variety. Instead, in this paper, we propose a model pruning strategy that first generates several pruning masks in a designed random way. Subsequently, along with an effective mask-selection rule, the optimal mask is chosen from the pool of mask candidates. To further enhance efficiency, we introduce an early mask evaluation strategy, mitigating the overhead associated with training multiple masks. Our extensive experiments demonstrate that this approach achieves state-of-the-art performance across eight datasets from GLUE, particularly excelling at high levels of sparsity.
## 1 Introduction
One of the main challenges in deploying large neural networks (such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020)) in production is the huge memory footprint and computational costs. Meanwhile, studies show that large and sparse models often yield higher accuracy than small but dense models (Gomez et al., 2019). As a result, pruning has been popularized to dramatically reduce memory size and computational power consumption with little to no performance degradation. (Hoefler et al., 2021; Glorot et al., 2011; Kaplan et al., 2020; Li et al., 2020; Mhaskar and Poggio, 2016; Brutzkus et al., 2017; Du et al., 2018).
Pruning aims to eliminate redundant weights, neurons, and even layers in models. Many works focus on magnitude-based pruning (Hagiwara, 1993; Gale et al., 2019; Thimm and Fiesler; Han et al., 2015; Zhu and Gupta, 2017; Cuadros et al., 2020), namely removing the elements with the smallest magnitude. Here the magnitude refers not only to the weights but also to the output sensitivity, gradients, or Hessian matrices of the training loss (Luo et al., 2017; Yu et al., 2018; He et al., 2019; Lis et al., 2019; Molchanov et al., 2019; Singh and Alistarh, 2020; Dong et al., 2017). While magnitude-based pruning can generate state-of-the-art results in a wide range of tasks, its pruning strategy is deterministic and depends solely on a single criterion, which lacks variety (we demonstrate this more thoroughly in the next paragraph). Furthermore, magnitude-based pruning has been shown to be sub-optimal at high levels of sparsity (Sanh et al., 2020). To further improve pruning performance, Zhuang et al. (2020); Ge et al. (2011); Savarese et al. (2020); Verdenius et al. (2020); Azarian et al. (2020) try to enlarge the search space of sparse architectures with regularization-based methods, which are non-deterministic. They add carefully designed \(L_{0}\) or \(L_{1}\) penalty terms to the loss function; in this way, the model proactively shrinks some of the weights until they no longer contribute to the final loss. Regularization-based methods can achieve noticeably better results than magnitude-based methods, especially at high levels of sparsity (Sanh et al., 2020). However, this line of work often suffers from a non-convex landscape and is challenging to optimize, with extra hyper-parameters. In parallel, Su et al. (2020); Liu et al. (2022) adopt a more aggressive strategy and prune elements in a completely random way. Their methods demonstrate competitive results on small datasets (such as CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009)) but fail on large datasets (such as ImageNet (Deng et al., 2009)). Different from these works, this paper introduces a mildly random pruning method that brings a controllable degree of randomness into the pruning mask generation procedure.

Figure 1: Weight Distribution in a Feedforward Layer of BERT\({}_{Base}\) at Various Sparsity Levels (0.52 and 0.83), Corresponding to Pruning Thresholds \(\tau=0.027\) and \(\tau=0.055\). Notably, around 29% of the weights lie within the range \([\frac{2}{3}\tau,\frac{4}{3}\tau]\). This observation puts into question the efficacy of magnitude-based pruning, as these weights, despite their proximity to the threshold, might play a crucial role in maintaining the model’s accuracy. This suggests that directly eliminating weights with smaller magnitudes could potentially lead to a sub-optimal pruning strategy.
We demonstrate the weakness of magnitude-based pruning in Figure 1, which presents the weight distribution of a feedforward layer of BERT (Devlin et al., 2019). Define \(\tau\) as the pruning boundary. We consider two scenarios, \(\tau=0.027\) and \(\tau=0.055\), leading to sparsity levels of 0.52 and 0.83, respectively. As shown in Figure 1, a large portion of weights (\(\approx 29\%\)) falls into the range \([\frac{2}{3}\tau,\frac{4}{3}\tau]\). This cannot be overlooked, because it is unclear whether the pruned weights close to the threshold contribute less to the final accuracy than the kept weights. Weights with smaller magnitudes can still be crucial, especially when dealing with edge cases or infrequent situations, and the proximity between weight magnitudes intensifies the keep-or-prune decision. This is why the direct removal of weights with smaller magnitudes is sub-optimal, as also demonstrated in Gomez et al. (2019). Based on the above observations, we investigate the following questions in this paper:
**Question 1.** _Which is better for pruning? a deterministic way or a randomized way_?
Previous literature has not reached a consistent conclusion. While Su et al. (2020) and Liu et al. (2022) have provided evidence that random pruning can yield competitive or better results compared to deterministic methods, this finding does not consistently hold for larger datasets, and these results have not been extended to language models. We conjecture that their methods introduce unbridled randomness without providing any effective negative feedback. Moreover, the extent of the introduced randomness has not been studied in a principled way in the previous literature. In this paper, we study and extend the above question systematically.
**Question 2.** _Can we design a consistently effective randomized pruning method_?
This paper answers the above question with the following contributions. **First**, we propose a randomized pruning mask generation strategy that introduces controllable randomness in a principled way. **Second**, we design the Mask Candidate Selection Strategy (MCSS) to choose the optimal mask from the pool of mask candidates, ensuring that the introduced randomness always guides pruning in a beneficial direction. **Third**, to further enhance efficiency, we introduce the Early Mask Evaluation Pipeline (EMEP) to mitigate the overhead associated with training under multiple pruning masks. **Last**, we offer empirical guidance for randomized pruning on BERT\({}_{base}\) and BERT\({}_{large}\). Our results show a consistent accuracy boost (**0.1%\(\sim\)2.6%**) on the GLUE benchmark, outperforming other state-of-the-art pruning techniques at a 16x compression rate. Notably, our approach shows even more significant gains (**2%\(\sim\)4%**) at extreme sparsity levels such as the 100x compression rate.
## 2 Preliminaries
### Pruning
**Iterative Magnitude Pruning.** Iterative Magnitude Pruning (IMP) is the most well-known strategy because it yields state-of-the-art results compared to alternatives such as Single-shot Network Pruning (SNIP) (Lee et al., 2018) (Frankle and Carbin, 2019; Frankle et al., 2020). Specifically, the pruning process is divided into multiple stages by gradually increasing the sparsity. Each stage finds and eliminates the parameters or neurons that are redundant at that point. The most intuitive approach is to assign an importance score to each element and keep only the top-k elements. The score used to rank elements can be the absolute value of the weights, output sensitivity, gradients, or other carefully designed metrics (Hagiwara, 1993; Gale et al., 2019; Thimm and Fiesler; Han et al., 2015; Zhu and Gupta, 2017; Cuadros et al., 2020; Luo et al., 2017). In this work, different from the traditional deterministic way, we extend IMP in a random way.
### Knowledge Distillation
Knowledge Distillation (KD) (Hinton et al., 2015) is another compression technique that transfers the knowledge from a well-trained large model T to a small model S. Many previous works have shown that pruning with KD can significantly reduce accuracy loss for Transformer-based models (Xu et al., 2021; Xia et al., 2022). Our experiments evaluate the pruning methods based on BERT (Devlin et al., 2019), and we apply the KD method to both the baseline and our strategy. Specifically, we distill the knowledge from the hidden state of each transformer block and the attention score of each self-attention layer. Figure 6 illustrates the distillation strategy in our setting.
### Multinomial Distribution
In probability theory, a multinomial distribution describes the outcome counts of \(n\) \((n>2)\) sampling trials over \(k\) \((k>2)\) categories, and is a generalization of the binomial distribution (Ross, 2010). In our setting, the number of categories equals the number of elements, and the target sparsity together with the total number of elements determines the number of trials. Note that the sampling process in this paper is done without replacement; this kind of sampling is also referred to as sampling from a multivariate hypergeometric distribution (Berkopec, 2007).
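For reference, the standard multinomial probability mass function for \(n\) trials over \(k\) categories with probabilities \(p_{1},\dots,p_{k}\) is

\[P(N_{1}=n_{1},\dots,N_{k}=n_{k})=\frac{n!}{n_{1}!\cdots n_{k}!}\,p_{1}^{n_{1}}\cdots p_{k}^{n_{k}},\qquad\sum_{j=1}^{k}n_{j}=n;\]

sampling without replacement, as done here, replaces this law by its multivariate hypergeometric analogue.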
## 3 Methodology
In this section, we first rethink the traditional deterministic pruning method and introduce our basic idea and method. Following that, we elaborate on the details of our randomized pruning mask generation and selection strategy. The architecture of our strategy is depicted in Figure 2, and the detailed procedure is outlined step-by-step in Algorithm 1.
### Rethink Iterative Magnitude Pruning
Traditional IMP divides pruning into multiple stages and generates a deterministic pruning mask by retaining the top-k elements at each stage. This process is based on the assumption that the top-k elements contribute more than the removed part. However, given the complex topology of model architecture and observations from Figure 1, it is difficult to draw such a conclusion. In this paper, we aim to introduce a certain degree of randomness into the process of pruning mask generation, thereby expanding the search space for locally optimal pruning masks at each stage. Specifically, we propose a strategy for generating and selecting randomized pruning masks at each pruning stage.
### Randomized Pruning Mask Generation
#### 3.2.1 Mask Sampling
Different from deterministic mask generation, we seek to infuse controllable randomness into this process. In essence, our approach samples the retained elements from a multinomial distribution without replacement. Specifically, the first step is to derive a probability distribution by normalizing the magnitudes of the elements; within our framework, magnitude is defined as the absolute value of the weight. Subsequently, we sample \(k\) indices from this distribution, where \(k\) is the number of elements retained. Finally, we generate a binary mask, with locations corresponding to these indices set to one, effectively outlining the sparse architecture of the model. This approach departs from the deterministic scheme and creates a larger optimization space for model pruning.
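A minimal numpy sketch of this sampling step is given below. It is an illustration of the described procedure, not the authors' code; the function and variable names are ours.

```python
import numpy as np

def sample_mask(weights, sparsity, rng):
    """Sample a binary pruning mask: keep k indices drawn (without
    replacement) with probability proportional to |weight|."""
    flat = np.abs(weights).ravel()
    k = int(round(flat.size * (1.0 - sparsity)))   # number of kept weights
    probs = flat / flat.sum()                      # normalized magnitudes
    kept = rng.choice(flat.size, size=k, replace=False, p=probs)
    mask = np.zeros(flat.size, dtype=np.uint8)
    mask[kept] = 1                                 # 1 = kept, 0 = pruned
    return mask.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
print(sample_mask(w, sparsity=0.75, rng=rng).sum())  # 16 kept weights
```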
#### 3.2.2 Controllable Randomness
We have proposed a random method to generate pruning masks. However, for current models with several million parameters per layer, a single iteration of sampling introduces considerable randomness, because each element's probability is minute after normalization. To quantify this randomness, we propose \(ir\) (introduced randomness) in Equation 1:
\[ir=\frac{C\cdot\mathrm{sparsity}-C_{s}}{C_{s}} \tag{1}\]
Here, \(C\) and \(C_{s}\) represent the total count of weights and the count of weights pruned by both deterministic and random approaches, respectively. A small value of \(ir\) indicates that the sampled mask resembles the deterministic one. Conversely, a larger value suggests a noticeable departure from the deterministic method.
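As a sketch, Equation (1) can be computed from a deterministic mask and a sampled mask of equal sparsity as follows (0 marks a pruned weight; the function name is ours):

```python
import numpy as np

def introduced_randomness(det_mask, rand_mask, sparsity):
    """Equation (1): ir = (C * sparsity - C_s) / C_s, where C_s counts
    the weights pruned by BOTH the deterministic and the random mask."""
    C = det_mask.size
    C_s = np.logical_and(det_mask == 0, rand_mask == 0).sum()
    return (C * sparsity - C_s) / C_s
```

When the two masks agree exactly, \(C_{s}=C\cdot\mathrm{sparsity}\) and \(ir=0\); disagreement drives \(ir\) upward.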
We assess the introduced randomness with \(ir\) and simultaneously strive to regulate its quantity. Drawing inspiration from the concept of model soup (Wortsman et al., 2022), we manage the randomness by sampling \(M\) masks and adding them element-wise to craft an ensemble mask. This mask has its top-k values set to 1 and the remainder set to 0 (k is the number of kept elements), yielding a mask with controllable randomness. Importantly, the degree of introduced randomness is negatively correlated with the value of \(M\): aggregating more masks averages out the sampling noise and pushes the ensemble toward the deterministic mask (consistent with the schedules in Section 4.5.1).
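A sketch of this ensemble step, reusing `sample_mask` from the sketch above (again our own illustration, not the authors' code):

```python
import numpy as np

def ensemble_mask(weights, sparsity, M, rng):
    """Sum M sampled masks element-wise and keep the top-k voted positions.
    Larger M averages out sampling noise, moving the ensemble toward the
    deterministic magnitude mask (less introduced randomness)."""
    votes = sum(sample_mask(weights, sparsity, rng).astype(np.int64)
                for _ in range(M))
    k = int(round(weights.size * (1.0 - sparsity)))
    mask = np.zeros(weights.size, dtype=np.uint8)
    mask[np.argsort(votes.ravel())[-k:]] = 1
    return mask.reshape(weights.shape)
```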
#### 3.2.3 Accelerated Mask Sampling
Controlling randomness solely by increasing the number of sampled masks can be time-intensive. To address this, we suggest deriving the sampling probability distribution from \(|w|^{T}\), where \(w\) is the weight of the corresponding layer. In this scenario, the power \(T\) in the exponential term controls the variance of the sampling probability. As \(T\) increases, the sampling probabilities for larger and smaller magnitudes diverge more, allowing Mask Sampling to curtail the introduced randomness swiftly. Moreover, in line with our motivation for introducing randomness into mask generation, we only sample weights whose magnitudes are close to the pruning boundary \(\tau\). More details are given in the Appendix.
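In code, the tempered distribution might look like the following sketch (ours; the restriction to weights near the boundary \(\tau\) is omitted for brevity):

```python
import numpy as np

def tempered_probs(weights, T):
    """Sampling probabilities proportional to |w|**T: a larger T widens the
    gap between large- and small-magnitude weights, curbing randomness."""
    mag = np.abs(weights).ravel() ** T
    return mag / mag.sum()
```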
### Randomized Pruning Mask Selection
#### 3.3.1 Mask Candidate Selection Strategy
Our sampling approach expands the search space for locally optimal masks compared to the deterministic way. However, this inadvertently introduces undesired noise, leading to poor model accuracy because we introduce randomness without providing any effective negative feedback to the model optimization. To address this, we propose Mask Candidate Selection Strategy (MCSS) to ensure the introduced randomness always guides the model optimization in a beneficial direction. Specifically, at each pruning stage, we generate \(N\) candidate masks and select the best one for the next pruning stage. To ensure robustness in our approach, we adopt a deterministic mask as one of our mask candidates. By doing so, we are not solely relying on random or heuristic methods but also have a reliable fallback.
#### 3.3.2 Early Mask Evaluation Pipeline
To accelerate the mask selection, we design the Early Mask Evaluation Pipeline (EMEP) to reduce computational costs. Specifically, we fine-tune the model for only one epoch with a large learning rate for each candidate mask. The candidate that achieves the best early-stop evaluation metric on the validation dataset is deemed the winner. We crafted this strategy based on findings by Li et al. (2019) and You et al. (2019), which suggest that using a high learning rate during the earlier optimization iterations can yield a good approximation of the sparse network structure. Once the winner has been chosen, we revert the weights and learning rate to their state before the last pruning step. Subsequently, the winning candidate mask is employed for continued regular training.

Figure 2: Main Architecture of Our Strategy.
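Putting MCSS and EMEP together, a schematic selection loop might look like the sketch below. The names are ours, and `finetune_one_epoch` and `validate` stand in for the task-specific high-learning-rate training and early-stop evaluation routines; this is an illustration, not the paper's implementation.

```python
def select_mask(weights, candidates, finetune_one_epoch, validate):
    """For each candidate mask (the deterministic mask included), fine-tune
    briefly with a large learning rate, score on validation, keep the best;
    weights and learning rate are then reverted before regular training."""
    best_mask, best_score = None, float("-inf")
    for mask in candidates:
        tuned = finetune_one_epoch(weights * mask, lr=1e-3)  # one high-LR epoch (EMEP)
        score = validate(tuned)                              # early-stop metric
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask
```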
## 4 Experiments
We evaluate the effectiveness of our pruning strategy in a wide range of natural language understanding tasks. Following previous work, we use **BERT** as our backbone and then apply different pruning methods to compare their performance.
### Baselines
**BERT\({}_{base}\)** and **BERT\({}_{large}\)** are first chosen as our baseline models. Based on them, we apply **IMP** to generate 16x sparse models. These sparse models are used as our main baselines. In addition, we compare our strategy with previous works that have reported results on the same datasets, which including: **BERT-PKD**Sun et al. (2019), **Stru-Pruning\({}_{Roberta}\)**Wang et al. (2020), **SNIP**Lin et al. (2020), **EBERT**Liu et al. (2021), **BERT-of-Theseus**Xu et al. (2020), **EfficientBERT**Dong et al. (2021), **Sparse-BERT**Xu et al. (2021), **RPP**Guo et al. (2019), **Pretrained Ticket**Chen et al. (2020), **Lottery Ticket**Prasanna et al. (2020), **Prune at Pretraining**Gordon et al. (2020), **Movement Pruning**Sanh et al. (2020), **DistillBert\({}_{6}\)**Sanh et al. (2019), and **TinyBert\({}_{6}\)**Jiao et al. (2020).
### Datasets and Data Augmentation
Following previous works, we select eight tasks from the **GLUE** dataset (excluding **WNLI**) to evaluate the effectiveness of our pruning strategy Wang et al. (2018). We also follow the data augmentation method from **TinyBert**Jiao et al. (2020). More details can be found in Appendix.
### Setup
We follow the strategy from SparseBERT (Xu et al., 2021) to perform pruning and knowledge distillation simultaneously on downstream tasks. We also follow the setting of Frankle and Carbin (2019) and adopt a simple pruning schedule in our experiments. Specifically, we only use 4-9 pruning stages to increase the sparsity gradually (such as 0.54, 0.83, 0.91, 0.9375). Furthermore, after choosing a decision mask at each pruning stage, the number of epochs in the fine-tuning phase is no longer limited until the model converges. We apply exactly the same setting to the IMP baseline and our approach. For more details about hyperparameters, please refer to the Appendix.
### Main Results and Analysis
We report the results on the _dev_ sets of 8 datasets from GLUE and summarize them in Tables 1-2. We also compare our results with distillation-based methods and report those results in the Appendix.
From the above results, we observe that our strategy consistently outperforms the main baseline IMP under identical settings. Moreover, our method also achieves the best results among pruning techniques under comparable settings. These findings demonstrate the superiority of our random mask generation process over the deterministic approach and confirm that our mask selection rule can effectively navigate the optimization after randomness is introduced. In other words, our method successfully increases the probability of finding better pruning masks by introducing randomness in a principled way.
We also note that the potential improvement in performance may be limited by the magnitude used to derive the sampling probability. In our setting, we use the absolute value of the weights to decide the importance of each neuron connection. Thus our pruning results cannot surpass the theoretically optimal result (upper bound) of pruning with the absolute value of weights. This suggests that our method can be easily transplanted to other magnitude-based pruning methods, such as gradient-based methods, and may produce the same effect, helping to find better pruning masks.
Furthermore, the effect of our strategy is not uniform across datasets: we obtain a more noticeable improvement in accuracy on small datasets. We argue that small datasets have more local minima in the loss surface, and therefore our strategy can more easily help find better pruning masks.
### Ablation Study
We try different ablation settings to figure out the functionality of each part of our strategy and analyze why they are effective for model pruning.
#### 4.5.1 Impact of Randomness and Schedule
Prior studies have demonstrated that there is no loss in accuracy at lower levels of sparsity, particularly when sparsity is less than 50%. This suggests that the model remains robust under the architectures identified in the early stages of pruning, and we conjecture that there is a high level of redundancy in the weights at the early pruning stages. As such, introducing more randomness early on could expand the search space for sparse architectures without hurting accuracy. However, previous works also highlight early model adaptation and emphasize the importance of the architecture found early on to the final pruning targets. Thus, introducing too much randomness at the beginning may not be a good idea.
In the last pruning stage, the closely-matched magnitudes of the remaining weights significantly impact accuracy. Careful selection is required for the elements to be pruned. Too much randomness could corrupt the model, hindering recovery from the last pruning step, while too little might restrict our search for potentially optimal masks. Hence, deciding the best schedule of randomness at each pruning stage is crucial.
| Methods | #params | MNLI-m (Acc) | QNLI (Acc) | QQP (F1/Acc) | MRPC (F1) | SST-2 (Acc) | CoLA (Mcc) | RTE (Acc) | STS-B (Spear) |
|---|---|---|---|---|---|---|---|---|---|
| BERT\({}_{Base}\) | 110M | 84.5 | 91.4 | 89.59/91.0 | 90.1 | 92.5 | 56.3 | 69.3 | 89.0 |
| _left #params \(\geq\) 50%_ | | | | | | | | | |
| BERT-PKD | 50% | 81.3 | 88.4 | -/88.4 | 85.7 | 91.3 | 45.5 | 66.5 | 86.2 |
| Stru Pruning | 73% | - | 89.05 | - | 88.61 | 92.09 | - | - | 88.18 |
| SNIP | 50% | 82.4 | 89.5 | - | 88.1 | 91.8 | - | - | - |
| EBERT | 60% | 83.1 | 90.2 | 87.5/90.8 | - | 92.2 | - | - | - |
| BERT-of-Theseus | 50% | 82.3 | 89.5 | -/89.6 | 89.0 | 91.5 | 51.1 | 68.2 | 88.7 |
| Pretrained Ticket | 50%-90% | 82.6 | 88.9 | -/90.0 | 84.9 | 91.9 | 53.8 | 66.0 | 88.2 |
| Lottery Ticket | 38%-51% | 84.0 | 91.0 | -/91.0 | 84.0 | 92.0 | 54.0 | 61.0 | 88.0 |
| IMP | 50% | 84.6 | 91.3 | 88.0/91.0 | 90.8 | 92.8 | 53.1 | 72.0 | 89.4 |
| **Ours** | 50% | **84.7** | **91.5** | **88.1/91.1** | **91.5** | **93.0** | **54.3** | **72.3** | **89.5** |
| _left #params \(\leq\) 10%_ | | | | | | | | | |
| RPP | 10% | 78 | 87 | 88.0/- | 80.0 | 89 | - | - | - |
| Movement Pruning | 10% | 80.7 | - | 87.1/90.5 | - | - | - | - | - |
| EfficientBERT | 9% | 81.7 | 89.3 | 86.7/- | 90.1 | 90.1 | 39.1 | 63.2 | 79.9 |
| SparseBERT | 5% | - | 90.6 | - | 88.5 | - | 52.1 | 69.1 | - |
| IMP | 6% | 83.3 | 90.5 | 87.6/90.8 | 90.2 | 92.2 | 53.1 | 66.7 | 87.0 |
| **Ours** | 6% | **83.4** | **90.9** | **87.9/90.9** | **91.5** | **92.7** | **53.4** | **69.3** | **87.5** |

Table 1: Main comparison results between our strategy and other pruning baselines with BERT\({}_{Base}\) on the _dev_ sets of 8 datasets from the GLUE benchmark. Note that the results of IMP and **Ours** were obtained by us under identical settings, while the others are taken from the corresponding literature.
To investigate the impact of randomness, we introduce a hyper-parameter \(sr\) that controls the number of sampled masks \(M\), and thereby controls the total introduced randomness. We also propose two simple randomness schedules at varying pruning stages: (1) _Decrease_, where we reduce the introduced randomness by increasing the number of sampled masks as the count of pruned weights increases (\(M=sr*C_{pruned}\)), and (2) _Increase_, where we enhance the introduced randomness by decreasing the number of sampled masks as the count of pruned weights increases (\(M=sr*(C-C_{pruned})\)).
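In code, the two schedules amount to the following sketch (names ours):

```python
def num_masks(sr, C, C_pruned, schedule):
    """Number of sampled masks M at a pruning stage; more masks means
    less introduced randomness in the resulting ensemble mask."""
    if schedule == "decrease":   # randomness shrinks as pruning proceeds
        return max(1, int(sr * C_pruned))
    return max(1, int(sr * (C - C_pruned)))   # "increase" schedule
```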
We conduct experiments comparing these two schedules against our primary baseline (IMP) under different \(sr\) values. The results are displayed in Figure 3, leading us to the following observations: 1) Excessive randomness results in our strategy performing even worse than the deterministic method (the region where the blue and orange lines fall below the green line); in this region, the _Decrease_ strategy outperforms the _Increase_ strategy. 2) There exists a threshold below which both randomness schedules outperform IMP, highlighting the superiority of our random approach over the deterministic way. 3) There exists another threshold above which the performances of the two randomness schedules become virtually identical. 4) The _Decrease_ strategy consistently equals or outperforms the _Increase_ strategy. This indicates that model accuracy is insensitive to randomness in the early pruning stages and gradually becomes sensitive as the model approaches the target sparsity.
#### 4.5.2 Impact of MCSS
We assessed the role of MCSS by removing it from our strategy and comparing the results with our primary findings; the results are summarized in Figure 4. We make the following observations: 1) In the setting with MCSS, there is a certain threshold of randomness below which our strategy significantly outperforms the deterministic way. 2) In contrast, in the setting without MCSS, the model's performance against the deterministic approach is inconsistent and lacks a clear trend or pattern. This demonstrates that MCSS ensures the introduced randomness consistently guides the model optimization in a beneficial direction; in other words, MCSS effectively raises the lower bound on accuracy in our experiments.
#### 4.5.3 Impact of Sparsity
We have examined our strategy across various levels of sparsity, with the findings summarized in Figure 5(a). Our random pruning strategy consistently outperforms the baseline (IMP) across all levels of compression, and the advantage is particularly pronounced at higher levels of sparsity, such as those at or above 16x compression.

| Methods | #params | MNLI-m (Acc) | QNLI (Acc) | QQP (F1) | MRPC (F1) | SST-2 (Acc) | CoLA (Mcc) | RTE (Acc) | STS-B (Spear) |
|---|---|---|---|---|---|---|---|---|---|
| BERT\({}_{Large}\) | 330M | 86.6 | 92.3 | 91.3 | 89.1 | 93.2 | 60.6 | 74.8 | 90.0 |
| IMP | 20% | 85.2 | 91.6 | 90.8 | 90.9 | 92.8 | 59.0 | 73.2 | 89.1 |
| **Ours** | 20% | **86.2** | **91.8** | **91.1** | **91.9** | **93.7** | **60.9** | **75.5** | **89.9** |

Table 2: Comparison between our strategy and IMP with BERT\({}_{Large}\) on GLUE _dev_ sets.

Figure 3: Comparing the impact of randomness under the two schedules with the deterministic approach (IMP), which has zero randomness. The horizontal axis presents the logarithm of \(ir\), with larger \(ir\) indicating a greater amount of total introduced randomness. The vertical axis shows the model’s accuracy.
#### 4.5.4 Impact of Mask Candidates
To verify the relationship between the number of mask candidates in MCSS and the final performance, we increase the number of candidate masks at each pruning stage from 2 to 10; the results are depicted in Figure 5(b). We observe a positive correlation between the quality of the searched mask and the number of candidate masks at each stage. As the number of mask candidates approaches 10, the performance gain gradually vanishes. Additionally, the variance of the performance gain is also gradually minimized, which shows that MCSS can effectively steer the optimization of pruning in a beneficial direction.
### Discussion
#### 4.6.1 Efficiency Analysis
We analyze the efficiency of our method through the training and inference phases.
Training Phase: In the training phase, we compare the required computation of our method with that of the traditional IMP method. The additional computation comes mainly from randomized mask generation and selection. For the generation process, it is crucial to emphasize that each mask is created independently of the others. This independence allows for parallel processing, so the time consumption does not grow linearly with the number of mask candidates. We also measured the GFLOPs required to generate a single mask and compared it with the GFLOPs needed for one forward pass of BERT\({}_{base}\); the former is roughly 1 percent of the latter. However, due to implementation challenges, we could not concurrently sample \(k\) positions multiple times from the weight matrix, leading to an overall increase in processing time for single randomized mask generation. For the selection process, we only require one epoch to identify the optimal mask, so the overhead is minimal compared with the entire pruning process.
Inference Phase: In real-world applications, although there might be overheads during training, the benefits reaped during inference make it worthwhile. Our method stands out with a 16x compression rate and can sustain performance even at higher sparsity levels, achieving up to a 100x compression rate. This ensures that our pruned neural networks, once deployed, bring about significant improvements in performance and efficiency.

Figure 4: Mask Sampling 4(a) vs. Mask Sampling + MCSS 4(b). Note that the green line in 4(a) and 4(b) represents the same accuracy value from IMP. The horizontal axis represents the amount of introduced randomness; the vertical axis indicates model accuracy.

Figure 5: Impact of Sparsity 5(a) and Impact of #Mask Candidates in MCSS 5(b). The horizontal axes represent the sparsity level and the number of mask candidates for Figures 5(a) and 5(b), respectively, while the vertical axes in both figures denote model accuracy.
#### 4.6.2 Extending to Billion Parameters
In the current age of large language models, achieving effective pruning is a formidable challenge, particularly when striving to preserve high sparsity without sacrificing performance. While initiatives like SparseGPT have ventured into pruning for these colossal models, they have only managed a 2x compression rate (Frantar and Alistarh, 2023). The computational complexity of our method is primarily determined by the number of parameters involved. Consequently, our random pruning technique is not yet adaptable to models with billions of parameters. Nevertheless, we are diligently working on refining methods that incorporate controllable randomness more efficiently.
## 5 Related Work
A number of researchers have explored pruning in BERT. Prasanna et al. (2020) prune the model with Michel et al. (2019)'s first-order importance metric and show that unstructured magnitude-based pruning always produces sparser and higher-accuracy models than structured pruning. Gordon et al. (2020) and Chen et al. (2020) both argue that pruning at the pre-training stage is better than at the fine-tuning stage because there is no need to prune for each downstream task; they also conclude that knowledge acquired in sparse training transfers as well as in dense models. In contrast, Xu et al. (2021) find that pruning at the pre-training stage has huge computational costs, while pruning at the fine-tuning stage can save computational effort and preserve accuracy simultaneously. These pruning methods are based on magnitude and prune weights in a deterministic way. In parallel, a line of work outperforms magnitude-based methods at high sparsity by pruning in a non-deterministic way: regularization-based pruning. Specifically, carefully designed \(L_{0}\) or \(L_{1}\) penalty terms are added to the loss function, and the model proactively shrinks some of the weights until they no longer contribute to the final loss. Regularization-based methods can generally achieve significantly better results than magnitude-based methods, especially at high levels of sparsity (Sanh et al., 2020). However, penalty terms can introduce additional local minima to the loss function and make the optimization difficult to navigate. On the other hand, random pruning applied to Transformer-based models (such as BERT) is under-explored in previous studies. Our paper therefore fills the gap in this area.
## 6 Conclusion
This paper presents a non-deterministic model pruning method, introducing controllable randomness by generating binary masks in a specific random fashion. Coupled with our specific mask candidate selection rule, our method exhibits significant effectiveness in enhancing model accuracy in pruning, particularly at high levels of sparsity.
### Limitations
Previous random pruning techniques do not scale to large datasets, probably because the search space is too large to find a winning pruning mask. There is a considerable likelihood that our method is not flawless either. For example, a more carefully designed randomness schedule could potentially yield more substantial benefits. In addition, our method might not be appropriate for models with billions of parameters due to the cost of training under multiple pruning masks. A potential solution could be to execute this process in parallel, since pruning masks are independent of each other.
## Acknowledgements
The authors wish to thank the anonymous reviewers for their helpful comments.
## Ethics Statement
This research is conducted in compliance with the ACL Ethics Policy. In the pursuit of advancing the efficiency of model pruning strategies, our methods raise several ethical considerations. First, our research proposes a new model pruning strategy that significantly enhances the efficiency of neural networks, which could potentially contribute to the democratization of artificial intelligence by making it accessible to systems with lower computational resources.
However, despite these potential benefits, the increased efficiency and performance of neural networks might lead to misuse if not properly regulated. For instance, such improvements could potentially contribute to the spread of deepfake technology, an area of AI that has been used to spread disinformation and cause harm. It is important to note that our research is conducted with the intent of improving efficiency and accessibility in AI, and we explicitly denounce any misuse of our work for harmful purposes. We encourage further discussions on the implementation of safeguards to prevent misuse and to guide the ethical use of such technology.
In addition, we must consider the implications of our method on fairness and bias. While our work is focused on model efficiency, the datasets used in the process may contain inherent biases, which can result in the propagation of these biases in pruned models. We believe it is essential to ensure that data used in our processes is as unbiased and representative as possible, and to strive for transparency in this regard.
Lastly, we acknowledge that the reduction of model size may result in the loss of interpretability, as smaller models often have more complex and less interpretable decision-making processes. We urge researchers and practitioners to maintain a balance between model efficiency and transparency to ensure fair and explainable AI practices.
We hope that our work will stimulate further discussions around these considerations, and that it will encourage future research to continuously consider the ethical implications of their work.
|
2304.00706 | Large deviations of reflected weakly interacting particle systems | In this paper, we prove a large deviation principle for the empirical
measures of a system of weakly interacting diffusion with reflection. We adopt
the weak convergence approach. To make this approach work, we show that the
sequence of empirical measures of the controlled reflected system will converge
to the weak solution of an associated reflected McKean--Vlasov equation. | Ping Cheng, Rong Wei, Tusheng Zhang | 2023-04-03T03:46:59Z | http://arxiv.org/abs/2304.00706v1 | # Large deviations of reflected weakly interacting particle systems
###### Abstract.
In this paper, we prove a large deviation principle for the empirical measures of a system of weakly interacting diffusions with reflection. We adopt the weak convergence approach. To make this approach work, we show that the sequence of empirical measures of the controlled reflected system converges to the weak solution of an associated reflected McKean-Vlasov equation.
Key words and phrases: large deviation; interacting particle systems; McKean-Vlasov equation; stochastic differential equation with reflection; weak convergence; sub-martingale problem.
## 1. Introduction
Let \(\overline{\mathcal{D}}\) be the closure of \(\mathcal{D}\). Let \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in[0,T]},\mathbb{P})\) be a filterted probability space where \((\mathcal{F}_{t})_{t\in[0,T]}\) satisfies the usual conditions. Let \(\mathcal{X}:=C([0,T];\overline{\mathcal{D}})\) and \(\mathcal{W}:=C([0,T];\mathbb{R}^{d_{1}})\), equipped with maximum norm, which is denoted by \(\|\cdot\|_{\infty}\). Let \(|\cdot|\) be the Euclidean norm and \(\|\cdot\|\) stand for the Hilbert-Schmidt norm, i.e., \(\|\sigma\|^{2}:=\sum_{i=1}^{d}\sum_{j=1}^{d_{1}}\sigma_{ij}^{2}\) for any \(d\times d_{1}\)-matrix \(\sigma=(\sigma_{ij})\in\mathbb{R}^{d\times d_{1}}\). Given a Polish space \((E,d_{E})\), define \(\mathcal{B}(E)\) as the Borel \(\sigma\)-field, denote by \(\mathcal{P}(E)\) the space of probability measures on \(\mathcal{B}(E)\), which is equipped with the topology of weak convergence. A convenient metric on this space is the bounded Lipschitz metric, which is denoted by
\[\Pi_{E}(\mu,\nu):=\sup_{f\in\operatorname{Lip}_{1}(E)}\int_{E}f\mathrm{d}(\mu -\nu),\ \ \mu,\nu\in\mathcal{P}(E),\]
where \(\operatorname{Lip}_{1}(E):=\{f\in C(E):\sup_{x\in E}|f(x)|\leq 1,\sup_{x\neq y \in E}\frac{|f(x)-f(y)|}{d_{E}(x,y)}\leq 1\}\). Let \(C_{b}(E)\) be the set of real-valued bounded continuous functions on \(E\). Let \(C^{1,2}([0,T]\times\overline{\mathcal{D}})\) denote the set of real-valued functions, whose elements are continuously differentiable once with respect to the time variable and twice with respect to the space variable. For \(f\in C^{1,2}([0,T]\times\overline{\mathcal{D}})\), \(\nabla_{x}f\) denotes the gradient of \(f\) with respect to the spatial variable \(x\). Analogously, we can define \(C^{1,2,2}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\), \(C^{1,2,2}_{b}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\), \(C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\).
## 2. Main result
Let \(\mathcal{R}\) denote the set of all positive measures on \(\mathcal{B}(\mathbb{R}^{d_{1}}\times[0,T])\), say \(r\), such that \(r(\mathbb{R}^{d_{1}}\times[0,t])=t\) for all \(t\in[0,T]\), which will be the space of all deterministic relaxed controls on \(\mathbb{R}^{d_{1}}\times[0,T]\). We also denote
\[\mathcal{R}_{1} := \Big{\{}r\in\mathcal{R}:\int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|r( \mathrm{d}y\times\mathrm{d}t)<\infty\Big{\}},\]
as the space of all deterministic relaxed controls on \(\mathbb{R}^{d_{1}}\times[0,T]\) with finite first moments. We equip \(\mathcal{R}\) and \(\mathcal{R}_{1}\) with the topology of weak convergence of measures and weak convergence of measures plus convergence of the first moments respectively, which turns \(\mathcal{R}\) and \(\mathcal{R}_{1}\) into Polish spaces. If \(r\in\mathcal{R}\) and \(B\in\mathcal{B}(\mathbb{R}^{d_{1}})\), the mapping \([0,T]\ni t\mapsto r(B\times[0,t])\) will be absolutely continuous, hence differentiable almost everywhere. Since \(\mathcal{B}(\mathbb{R}^{d_{1}})\) is countably generated, the time derivative of \(r\), denoted by \(r_{t}\), exists almost everywhere and is a measurable mapping from \([0,T]\) to \(\mathcal{P}(\mathbb{R}^{d_{1}})\), such that \(r(\mathrm{d}y\times\mathrm{d}t)=r_{t}(\mathrm{d}y)\mathrm{d}t\).
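For orientation (this identification is standard in the relaxed-control literature, not a statement specific to this paper): an ordinary control \(u:[0,T]\to\mathbb{R}^{d_{1}}\) with \(\int_{0}^{T}|u(t)|\mathrm{d}t<\infty\) corresponds to the relaxed control

\[r(\mathrm{d}y\times\mathrm{d}t)=\delta_{u(t)}(\mathrm{d}y)\,\mathrm{d}t,\qquad r_{t}=\delta_{u(t)},\]

so relaxed controls generalize ordinary controls by allowing a distribution over control values at each time.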
Let \(b:[0,T]\times\overline{\mathcal{D}}\times\mathcal{P}(\overline{\mathcal{D}})\rightarrow\mathbb{R}^{d}\) and \(\sigma:[0,T]\times\overline{\mathcal{D}}\times\mathcal{P}(\overline{\mathcal{D}})\rightarrow\mathbb{R}^{d\times d_{1}}\) be measurable mappings. Given a Borel measurable mapping \(\nu:[0,T]\rightarrow\mathcal{P}(\overline{\mathcal{D}})\) and an adapted \(\mathcal{R}_{1}\)-valued random variable \(\rho\), we consider the controlled RSDEs:
\[\begin{cases}\mathrm{d}\bar{X}(t)=b(t,\bar{X}(t),\nu(t))\mathrm{d}t+\int_{ \mathbb{R}^{d_{1}}}\sigma(t,\bar{X}(t),\nu(t))y\rho_{t}(\mathrm{d}y)\mathrm{d}t \\ \qquad\qquad+\sigma(t,\bar{X}(t),\nu(t))\mathrm{d}W(t)-\mathrm{d}\bar{K}(t),\\ |\bar{K}|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\bar{X}(s)) \mathrm{d}|\bar{K}|(s),\quad\bar{K}(t)=\int_{0}^{t}\mathbf{n}(\bar{X}(s)) \mathrm{d}|\bar{K}|(s),\\ \nu(0)=\mathrm{Law}(\bar{X}(0)),\quad\bar{X}(t)\in\overline{\mathcal{D}},\quad t \in[0,T],\end{cases} \tag{2.1}\]
where \(W\) is a \(d_{1}\)-dimensional \(\mathcal{F}_{t}\)-adapted standard Wiener process. The above equation is the controlled analogue of the following reflected McKean-Vlasov SDEs:
\[\begin{cases}\mathrm{d}X(t)=b(t,X(t),\mathrm{Law}(X(t)))\mathrm{d}t+\sigma(t,X(t),\mathrm{Law}(X(t)))\mathrm{d}W(t)-\mathrm{d}K(t),\\ |K|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(X(s))\mathrm{d}|K|(s),\quad K(t)=\int_{0}^{t}\mathbf{n}(X(s))\mathrm{d}|K|(s),\\ \mathrm{Law}(X(0))=\nu_{0},\quad X(t)\in\overline{\mathcal{D}},\quad t\in[0,T],\end{cases} \tag{2.2}\]
where \(W\) is a \(d_{1}\)-dimensional \((\mathcal{F}_{t})\)-adapted standard Wiener process.
Recall the system given by (1.1): for fixed \(N\in\mathbb{N}\) and finite interval \([0,T]\), \(i\in\{1,\dots,N\}\),
\[\begin{cases}\mathrm{d}X^{i,N}(t)=b(t,X^{i,N}(t),\mu^{N}(t)) \mathrm{d}t+\sigma(t,X^{i,N}(t),\mu^{N}(t))\mathrm{d}W^{i}(t)\\ \qquad\quad-\mathrm{d}K^{i,N}(t),\\ |K^{i,N}|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(X^{i,N}(s)) \mathrm{d}|K^{i,N}|(s),\\ K^{i,N}(t)=\int_{0}^{t}\mathbf{n}(X^{i,N}(s))\mathrm{d}|K^{i,N}|(s),\\ X^{i,N}(0)=x^{i,N},\quad X^{i,N}(t)\in\overline{\mathcal{D}},\quad t\in[0,T]. \end{cases}\]
Define
\[\mu^{N}(\cdot,\omega):=\frac{1}{N}\sum_{i=1}^{N}\delta_{X^{i,N}(\cdot,\omega)},\quad\omega\in\Omega\]
as the empirical measure of \((X^{1,N},\dots,X^{N,N})\) over the path space \(\mathcal{X}:=C([0,T];\overline{\mathcal{D}})\). For any \(t\in[0,T]\), \(\mu^{N}\) and \(\mu^{N}(t)\) are related by
\[\mu^{N}(t)=\mu^{N}\circ\pi_{t}^{-1},\]
where \(\pi_{t}:\mathcal{X}\to\overline{\mathcal{D}}\) is the projection map at time \(t\). It is known that under mild conditions on the coefficients, the empirical measures \(\mu^{N}\) converge to the law of the solution to the reflected McKean-Vlasov SDE (2.2).
Let \(\mathcal{Z}:=\mathcal{X}\times\mathcal{R}_{1}\times\mathcal{W}\). For any \(z\in\mathcal{Z}\), we write it as \((\phi,r,w)\) with the understanding that \(\phi\in\mathcal{X}\), \(r\in\mathcal{R}_{1}\), and \(w\in\mathcal{W}\). If the triple \((\bar{X},\rho,W)\) defined on some filtered probability space \((\tilde{\Omega},\tilde{\mathcal{F}},(\tilde{\mathcal{F}}_{t}),\tilde{\mathbb{ P}})\) solves equation (2.1) for some measurable \(\nu:[0,T]\to\mathcal{P}(\overline{\mathcal{D}})\), then the distribution of \((\bar{X},\rho,W)\) under \(\tilde{\mathbb{P}}\) is an element of \(\mathcal{P}(\mathcal{Z})\) and is called a weak solution of equation (2.1). For any \(\Theta\in\mathcal{P}(\mathcal{Z})\), define \(\nu_{\Theta}:[0,T]\to\mathcal{P}(\overline{\mathcal{D}})\) as
\[\nu_{\Theta}(t)(B):=\Theta\big{(}\{(\phi,\rho,w)\in\mathcal{Z}:\phi(t)\in B\} \big{)},\quad B\in\mathcal{B}(\overline{\mathcal{D}}),\quad t\in[0,T], \tag{2.3}\]
which is the distribution under \(\Theta\) of the first component of \(Z\) at time \(t\). Note that if \(\Theta\) is a weak solution of equation (2.1) with \(\nu_{\Theta}(t)=\nu(t),\ \forall t\in[0,T]\), then \(\Theta\) is a weak solution of the controlled RSDEs
\[\begin{cases}\mathrm{d}\bar{X}(t)=b(t,\bar{X}(t),\nu_{\Theta}(t))\mathrm{d}t+\int_{\mathbb{R}^{d_{1}}}\sigma(t,\bar{X}(t),\nu_{\Theta}(t))y\rho_{t}(\mathrm{d}y)\mathrm{d}t\\ \qquad\qquad+\sigma(t,\bar{X}(t),\nu_{\Theta}(t))\mathrm{d}W(t)-\mathrm{d}\bar{K}(t),\\ |\bar{K}|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\bar{X}(s))\mathrm{d}|\bar{K}|(s),\quad\bar{K}(t)=\int_{0}^{t}\mathbf{n}(\bar{X}(s))\mathrm{d}|\bar{K}|(s),\\ \nu_{\Theta}(t)=\mathrm{Law}(\bar{X}(t)),\quad\bar{X}(t)\in\overline{\mathcal{D}},\quad t\in[0,T].\end{cases} \tag{2.4}\]
For a probability measure \(\Theta\in\mathcal{P}(\mathcal{Z})\), let \(\Theta_{\mathcal{X}}\), \(\Theta_{\mathcal{R}}\) and \(\Theta_{\mathcal{W}}\) denote the first, second and the third marginal measures, respectively. Let \(\mathcal{P}_{\infty}\) be the set of all probability measures \(\Theta\in\mathcal{P}(\mathcal{Z})\) such that
1. \[\int_{\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}r(\mathrm{d}y \times\mathrm{d}t)\Theta_{\mathcal{R}}(\mathrm{d}r)<\infty.\]
2. \(\Theta\) is a weak solution to equation (2.4).
3. \(\nu_{\Theta}(0)=\nu_{0}\), where \(\nu_{0}\in\mathcal{P}(\overline{\mathcal{D}})\) is the initial distribution.
Define the map \(\bar{v}:\mathcal{Z}\to\mathcal{Z}^{0}:=\overline{\mathcal{D}}\times\mathcal{R} _{1}\times\mathcal{W}\) as \(\bar{v}(\phi,r,w)=(\phi(0),r,w)\).
**Definition 2.1** (Weak Uniqueness).: We say that the solution of equation (2.4) is weakly unique if for any \(\Theta\) and \(\tilde{\Theta}\in\mathcal{P}_{\infty}\) such that \(\Theta\circ\bar{v}^{-1}=\tilde{\Theta}\circ\bar{v}^{-1}\), we have \(\Theta=\tilde{\Theta}\).
Now we are going to state the precise assumption and the main result.
**Definition 2.2** (Rate Function [12, 13]).: A function \(I:\mathcal{P}(\mathcal{X})\to[0,\infty]\) is called a rate function on \(\mathcal{P}(\mathcal{X})\), if for any \(C<\infty\), the level set \(\{\theta\in\mathcal{P}(\mathcal{X}):I(\theta)\leq C\}\) is compact.
**Definition 2.3** (Large Deviation Principle [12, 13]).: Let \(I\) be a rate function on \(\mathcal{P}(\mathcal{X})\). The sequence \(\{\mu^{N},N\in\mathbb{N}\}\) is said to satisfy large deviation principle on \(\mathcal{P}(\mathcal{X})\) with the rate function \(I\) if, for all Borel subset \(\Gamma\) of \(\mathcal{P}(\mathcal{X})\),
\[-\inf_{\theta\in\Gamma^{o}}I(\theta)\leq\liminf_{N\to\infty}\frac{1}{N}\log \mathbb{P}\{\mu^{N}\in\Gamma\}\leq\limsup_{N\to\infty}\frac{1}{N}\log\mathbb{P }\{\mu^{N}\in\Gamma\}\leq-\inf_{\theta\in\overline{\Gamma}}I(\theta),\]
where \(\Gamma^{o}\) is the interior of \(\Gamma\).
**Definition 2.4** (Laplace Principle [13]).: Let \(I\) be a rate function on \(\mathcal{P}(\mathcal{X})\). The sequence \(\{\mu^{N},N\in\mathbb{N}\}\) is said to satisfy the Laplace principle on \(\mathcal{P}(\mathcal{X})\) with rate function \(I\) if for all bounded continuous function \(F:\mathcal{P}(\mathcal{X})\to\mathbb{R}\),
\[\lim_{N\to\infty}-\frac{1}{N}\log\mathbb{E}\big{\{}\exp[-N\cdot F(\mu^{N})] \big{\}}=\inf_{\theta\in\mathcal{P}(\mathcal{X})}\{F(\theta)+I(\theta)\}.\]
It is known that the Laplace principle holds in the above setting if and only if \(\{\mu^{N},N\in\mathbb{N}\}\) satisfies a large deviation principle with rate function \(I\) (see [13, Section 1.2]).
Let us make the following assumptions with respect to the domain, coefficients \(b\), \(\sigma\), and the family \(\{x^{i,N}\}\subset\overline{\mathcal{D}}\) of initial conditions:
**Assumption 2.1**.:
* (A1) _\(\mathcal{D}\) is a bounded, convex, smooth domain in \(\mathbb{R}^{d}\)._
* (A2) _There exists_ \(\nu_{0}\in\mathcal{P}(\overline{\mathcal{D}})\) _such that for all_ \(\nu_{0}\)_-integrable_ \(f:\overline{\mathcal{D}}\to\mathbb{R}\)_,_ \[\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}f(x^{i,N})=\int_{\overline{\mathcal{D}}}f(x)\mathrm{d}\nu_{0}(x).\]
* (A3) _Let_ \(b:[0,T]\times\overline{\mathcal{D}}\times\mathcal{P}(\overline{\mathcal{D}})\to\mathbb{R}^{d}\) _and_ \(\sigma:[0,T]\times\overline{\mathcal{D}}\times\mathcal{P}(\overline{\mathcal{D}})\to\mathbb{R}^{d\times d_{1}}\) _be measurable and there exist constants_ \(L\) _and_ \(K\in(0,\infty)\) _such that for each_ \(t\in[0,T]\)_,_ \(x,y\in\overline{\mathcal{D}}\) _and_ \(\mu,\nu\in\mathcal{P}(\overline{\mathcal{D}})\)_,_ \[|b(t,x,\mu)|+\|\sigma(t,x,\mu)\|\leq L\] _and_ \[|b(t,x,\mu)-b(t,y,\nu)|+\|\sigma(t,x,\mu)-\sigma(t,y,\nu)\|\leq K(|x-y|+\Pi_{\overline{\mathcal{D}}}(\mu,\nu)).\]
* (A4) _Weak uniqueness of solutions holds for equation (2.4)._
_Remark 2.5_.:
* Under Assumption 2.1(A3), it follows by standard arguments that for each \(N\), equation (1.1) has a unique solution. Moreover, according to Theorem 3.2 in [2], equation (2.2) admits a unique solution.
* The assumption (A4) is satisfied if the diffusion coefficient depends only on the state variable, i.e., \(\sigma(t,x,\mu)=\sigma(t,x)\), or only on the distribution, i.e., \(\sigma(t,x,\mu)=\sigma(t,\mu)\) (cf. [3, Lemma 3.1]).
* Since the domain \(\mathcal{D}\) is convex, we have for any \(y\in\mathcal{D}\) and \(x\in\partial\mathcal{D}\), \[\langle\mathbf{n}(x),y-x\rangle\leq 0.\]
The main result is stated as follows.
**Theorem 2.6**.: _Let Assumption 2.1 hold. Then the family of empirical measures \(\{\mu^{N},N\in\mathbb{N}\}\) of the solutions of the interacting reflected system (1.1) satisfies the Laplace Principle with rate function_
\[I(\theta)=\inf_{\Theta\in\mathcal{P}_{\infty}:\Theta_{\mathcal{X}}=\theta} \frac{1}{2}\int_{\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}r (\mathrm{d}y\times\mathrm{d}t)\Theta_{\mathcal{R}}(\mathrm{d}r)\]
_where \(\theta\in\mathcal{P}(\mathcal{X})\) and \(\inf\emptyset:=\infty\) by convention._
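For orientation, the zero set of the rate function identifies the typical behaviour. Since the cost vanishes only for the null control, one has
\[I(\theta)=0\quad\Longleftrightarrow\quad\theta=\Theta_{\mathcal{X}}\ \text{for some }\Theta\in\mathcal{P}_{\infty}\ \text{with }r(\mathrm{d}y\times\mathrm{d}t)=\delta_{0}(\mathrm{d}y)\mathrm{d}t\ \ \Theta_{\mathcal{R}}\text{-a.s.}\]
In that case the controlled equation (2.4) reduces to its uncontrolled counterpart, and by the weak uniqueness assumption (A4) the zero of the rate function is unique; it is the law of large numbers limit of \(\mu^{N}\), as one expects.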
To explain the steps of the proof, we need a controlled version of the system of RSDEs (1.1). For \(N\in\mathbb{N}\), let \(\mathcal{H}_{N}\) be the space of all \((\mathcal{F}_{t})\)-progressively measurable functions \(h:[0,T]\times\Omega\to\mathbb{R}^{N\times d_{1}}\) such that
\[\mathbb{E}\left[\int_{0}^{T}|h(t)|^{2}\mathrm{d}t\right]<\infty,\]
where \(h\) is written as \(h=(h_{1},\ldots,h_{N})\), and \(h_{i}\) is its \(i\)-th entry, which is \(d_{1}\)-dimensional.
Given \(h\in\mathcal{H}_{N}\), we consider the following controlled system of RSDEs
\[\begin{cases}\mathrm{d}\bar{X}^{i,N}(t)=b(t,\bar{X}^{i,N}(t),\bar{\mu}^{N}(t)) \mathrm{d}t+\sigma(t,\bar{X}^{i,N}(t),\bar{\mu}^{N}(t))h_{i}(t)\mathrm{d}t\\ \qquad\qquad+\sigma(t,\bar{X}^{i,N}(t),\bar{\mu}^{N}(t))\mathrm{d}W^{i}(t)- \mathrm{d}\bar{K}^{i,N}(t),\\ |\bar{K}^{i,N}|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\bar{X}^{i,N} (s))\mathrm{d}|\bar{K}^{i,N}|(s),\\ \bar{K}^{i,N}(t)=\int_{0}^{t}\mathbf{n}(\bar{X}^{i,N}(s))\mathrm{d}|\bar{K}^{i,N}|(s),\\ \bar{X}^{i,N}(0)=x^{i,N},\quad\bar{X}^{i,N}(t)\in\overline{\mathcal{D}},\quad t \in[0,T],\end{cases} \tag{2.5}\]
where \(\bar{\mu}^{N}(t)\) is empirical measure of \((\bar{X}^{1,N}(t),\ldots,\bar{X}^{N,N}(t))\) defined by
\[\bar{\mu}^{N}(t,\omega):=\frac{1}{N}\sum_{i=1}^{N}\delta_{\bar{X}^{i,N}(t, \omega)},\quad\omega\in\Omega.\]
We also use \(\bar{\mu}^{N}\) to denote the process version of \(\bar{\mu}^{N}(t)\).
The proof of Theorem 2.6 is based on the following variational representation given by Theorem 3.6 in [4]: for any \(F\in C_{b}(\mathcal{P}(\mathcal{X}))\),
\[-\frac{1}{N}\log\mathbb{E}\left\{\exp[-N\cdot F(\mu^{N})]\right\}=\inf_{h^{N} \in\mathcal{H}_{N}}\Big{\{}\frac{1}{2}\mathbb{E}\big{[}\frac{1}{N}\sum_{i=1}^ {N}\int_{0}^{T}|h^{N}_{i}|^{2}\mathrm{d}t\big{]}+\mathbb{E}\left[F(\bar{\mu}^ {N})\right]\Big{\}},\]
where \(\bar{\mu}^{N}\) is the empirical measure of the solution to equation (2.5) for \(h^{N}\in\mathcal{H}_{N}\).
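As a quick consistency check (not part of the proof), taking \(h^{N}\equiv 0\) on the right-hand side, so that \(\bar{\mu}^{N}=\mu^{N}\) and the control cost vanishes, yields
\[-\frac{1}{N}\log\mathbb{E}\left\{\exp[-N\cdot F(\mu^{N})]\right\}\leq\mathbb{E}\left[F(\mu^{N})\right],\]
which is exactly Jensen's inequality for the convex function \(-\log\).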
According to the arguments in [5, Section 3], the Laplace principle (and hence the large deviation principle) can be established through the following two steps.
**Step 1:**: We establish the Laplace principle upper bound by showing that for any sequence \(\{h^{N},N\in\mathbb{N}\}\) with \(h^{N}\in\mathcal{H}_{N}\),
\[\begin{split}&\liminf_{N\to\infty}\Big{\{}\frac{1}{2}\mathbb{E} \Big{[}\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T}|h^{N}_{i}|^{2}\mathrm{d}t\Big{]}+ \mathbb{E}\big{[}F(\bar{\mu}^{N})\big{]}\Big{\}}\\ &\geq\inf_{\Theta\in\mathcal{P}_{\infty}}\Big{\{}\frac{1}{2}\int_ {\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}r(\mathrm{d}y\times \mathrm{d}t)\Theta_{\mathcal{R}}(\mathrm{d}r)+F(\Theta_{\mathcal{X}}) \Big{\}}.\end{split} \tag{2.6}\]
This will be done in Section 4.1.
**Step 2:**: We verify the Laplace principle lower bound in Section 4.2, by showing that for any \(\Theta\in\mathcal{P}_{\infty}\) there exists a sequence \(\{h^{N},N\in\mathbb{N}\}\) with \(h^{N}\in\mathcal{H}_{N}\) such that
\[\begin{split}&\limsup_{N\to\infty}\Big{\{}\frac{1}{2}\mathbb{E} \big{[}\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T}|h^{N}_{i}|^{2}\mathrm{d}t\big{]}+ \mathbb{E}[F(\bar{\mu}^{N})]\Big{\}}\\ &\leq\frac{1}{2}\int_{\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}} \times[0,T]}|y|^{2}r(\mathrm{d}y\times\mathrm{d}t)\Theta_{\mathcal{R}}( \mathrm{d}r)+F(\Theta_{\mathcal{X}}).\end{split} \tag{2.7}\]
## 3. Sub-martingale Problem
As a preparation for the proof of the main result, in this section we introduce some sub-martingale problems and provide their relation to the weak solutions of the RSDEs (2.4).
To begin with, we introduce the definition of the sub-martingale problem described in [19]. Let \(\bar{b}:[0,T]\times\overline{\mathcal{D}}\to\mathbb{R}^{d}\) be a bounded measurable function and \(\bar{\sigma}:[0,T]\times\overline{\mathcal{D}}\to\mathbb{R}^{d\times d_{1}}\) a bounded continuous function.
Let \(\tilde{X}\) be the canonical process on \(\mathcal{X}\). Set \(\mathcal{F}_{t}^{0}=\sigma(\tilde{X}(s),\ s\leq t)\) and define the operator
\[\mathcal{L}_{t}:=\frac{1}{2}\sum_{i,j=1}^{d}(\bar{\sigma}\bar{\sigma}^{T})_{ij }(t,x)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{d}\bar{b} _{i}(t,x)\frac{\partial}{\partial x_{i}}.\]
**Definition 3.1** (Sub-martingale Problem).: We say that a probability measure \(\hat{\mathbb{P}}\) on \((\mathcal{X},\mathcal{B}(\mathcal{X}))\) solves a sub-martingale problem for coefficients \(\bar{b}\), \(\bar{\sigma}\) and \(\mathbf{n}(x)\) if
\[f(t,\tilde{X}(t))-f(0,\tilde{X}(0))-\int_{0}^{t}(f_{s}+\mathcal{L}_{s}f)(s, \tilde{X}(s))\mathrm{d}s\]
is a \(\hat{\mathbb{P}}\)-sub-martingale with respect to the canonical filtration \((\mathcal{F}_{t}^{0})\) for any \(f\in C_{0}^{1,2}([0,T]\times\overline{\mathcal{D}})\) satisfying
\[\langle\nabla_{x}f(t,x),\mathbf{n}(x)\rangle\leq 0\quad\text{on}\quad[0,T] \times\partial\mathcal{D}.\]
Consider the following RSDEs
\[\begin{cases}\mathrm{d}Y(t)=\bar{b}(t,Y(t))\mathrm{d}t+\bar{\sigma}(t,Y(t)) \mathrm{d}W(t)-\mathrm{d}L(t),\\ |L(t)|=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(Y(s))\mathrm{d}|L|(s), \quad L(t)=\int_{0}^{t}\mathbf{n}(Y(s))\mathrm{d}|L|(s),\\ Y(t)\in\overline{\mathcal{D}},\quad t\in[0,T],\end{cases} \tag{3.1}\]
where \(W\) is a \(d_{1}\)-dimensional \(\mathcal{F}_{t}\)-adapted Wiener process.
For any \(f\in C^{1,2}([0,T]\times\overline{\mathcal{D}})\), we define a real-valued process \((M_{f}(t))_{t\in[0,T]}\) on probability space \((\mathcal{X},\mathcal{B}(\mathcal{X}),\hat{\mathbb{P}})\) by:
\[M_{f}(t,\tilde{X}):=f(t,\tilde{X}(t))-f(0,\tilde{X}(0))-\int_{0}^{t}(f_{s}+ \mathcal{L}_{s}f)(s,\tilde{X}(s))\mathrm{d}s.\]
The next result provides the relationship between a sub-martingale problem and the weak solutions of RSDEs (3.1).
**Lemma 3.2**.: _Assume that the measure \(\hat{\mathbb{P}}\in\mathcal{P}(\mathcal{X})\), then \(\hat{\mathbb{P}}\) is a weak solution of equation (3.1) if and only if \(\hat{\mathbb{P}}\) solves a sub-martingale problem for coefficients \(\bar{b}\), \(\bar{\sigma}\) and \(\mathbf{n}\), or equivalently, \(M_{f}\) is a \(\hat{\mathbb{P}}\)-sub-martingale with respect to the canonical filtration \((\mathcal{F}_{t}^{0})\) for any \(f\in C^{1,2}([0,T]\times\overline{\mathcal{D}})\) satisfying \(\langle\nabla_{x}f(t,x),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\)._
Proof.: The "\(\Rightarrow\)" part is obvious by Ito's formula.
Now we show the "\(\Leftarrow\)" part. Assume \(\hat{\mathbb{P}}\) is a solution to the sub-martingale problem. By Theorem 2.4 in [19], we know that there exists a unique, continuous,
non-decreasing, adapted function \(L:[0,T]\times\mathcal{X}\mapsto[0,\infty)\) such that \(L(0)=0\), \(L(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\tilde{X}(s))\mathrm{d}L(s)\) and for any \(f\in C^{1,2}([0,T]\times\overline{\mathcal{D}})\)
\[\tilde{M}_{f}(t,\tilde{X}):=M_{f}(t,\tilde{X})+\int_{0}^{t}\langle\nabla_{x}f(s,\tilde{X}(s)),\mathbf{n}(\tilde{X}(s))\rangle\mathrm{d}L(s)\]
is a \(\hat{\mathbb{P}}\)-martingale.
In particular, for each \(i\in\{1,2,\dots,d\}\), choose \(f^{i}(t,x)=x_{i}\) to obtain that
\[\tilde{M}_{f^{i}}(t,\tilde{X}):= \tilde{X}_{i}(t)-\tilde{X}_{i}(0)-\int_{0}^{t}\bar{b}_{i}(s, \tilde{X}(s))\mathrm{d}s\] \[+\int_{0}^{t}n_{i}(\tilde{X}(s))\mathrm{d}L(s)\]
is a \(\hat{\mathbb{P}}\)-martingale.
Similarly, for each \(i,j\in\{1,2,\dots,d\}\), letting \(f^{i,j}(t,x)=x_{i}x_{j}\), we see that
\[\begin{split}\tilde{M}_{f^{i,j}}(t,\tilde{X}):=& \tilde{X}_{i}(t)\tilde{X}_{j}(t)-\tilde{X}_{i}(0)\tilde{X}_{j}(0)-\int_{0}^{t} \bar{b}_{i}(s,\tilde{X}(s))\tilde{X}_{j}(s)\mathrm{d}s\\ &-\int_{0}^{t}\bar{b}_{j}(s,\tilde{X}(s))\tilde{X}_{i}(s) \mathrm{d}s+\int_{0}^{t}\tilde{X}_{i}(s)n_{j}(\tilde{X}(s))\mathrm{d}L(s)\\ &+\int_{0}^{t}\tilde{X}_{j}(s)n_{i}(\tilde{X}(s))\mathrm{d}L(s)- \int_{0}^{t}(\bar{\sigma}\bar{\sigma}^{T})_{ij}(s,\tilde{X}(s))\mathrm{d}s\end{split} \tag{3.2}\]
is also a \(\hat{\mathbb{P}}\)-martingale. Applying Ito's formula to \(\tilde{X}_{i}(t)\tilde{X}_{j}(t)\) and comparing with (3.2), we deduce that
\[\langle\tilde{M}_{f^{i}},\tilde{M}_{f^{j}}\rangle(t)=\int_{0}^{t}\sum_{k=1}^{ d_{1}}\bar{\sigma}_{ik}\bar{\sigma}_{kj}(s,\tilde{X}(s))\mathrm{d}s.\]
Then, according to Theorem II.7.1' in [15], there exists a \(d_{1}\)-dimensional \((\tilde{\mathcal{F}}_{t})\)-Wiener process \(\tilde{W}=(\tilde{W}(t))\) on an extension \((\tilde{\Omega},\tilde{\mathcal{F}},(\tilde{\mathcal{F}}_{t}),\tilde{\mathbb{P}})\) of the probability space \((\mathcal{X},\mathcal{B}(\mathcal{X}),(\mathcal{F}_{t}^{0}),\hat{\mathbb{P}})\) such that
\[\tilde{M}_{f^{i}}(t)=\sum_{k=1}^{d_{1}}\int_{0}^{t}\bar{\sigma}_{ik}(s,\tilde{X}(s))\mathrm{d}\tilde{W}^{k}(s),\quad i=1,2,\dots,d.\]
Therefore, \((\tilde{X},\tilde{W},L)\) is a weak solution to the reflected stochastic differential equation (3.1).
In the remaining part of this section, we will show that the measure \(\Theta\in\mathcal{P}(\mathcal{Z})\) is a weak solution of equation (2.4) if and only if it solves a sub-martingale problem.
Recall that \(\mathcal{Z}=\mathcal{X}\times\mathcal{R}_{1}\times\mathcal{W}\) and \(\nu_{\Theta}\) is given by (2.3). Let \((\bar{X},\rho,W)\) be the canonical process on \(\mathcal{Z}\), namely
\[\bar{X}(t,(\phi,r,w)):=\phi(t),\ \rho(t,(\phi,r,w)):=r_{|\mathcal{B}( \mathbb{R}^{d_{1}}\times[0,t])},\ W(t,(\phi,r,w)):=w(t),\]
and \((\mathcal{G}_{t})\) the canonical filtration in \(\mathcal{B}(\mathcal{Z})\) defined as
\[\mathcal{G}_{t}:=\sigma((\bar{X}(s),\rho(s),W(s)):0\leq s\leq t),\quad t\in[0,T].\]
Let \(\Theta\in\mathcal{P}(\mathcal{Z})\). Given \(f\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\), we define a real-valued process \((M^{\Theta}_{f}(t))_{t\in[0,T]}\) on probability space \((\mathcal{Z},\mathcal{B}(\mathcal{Z}),\Theta)\) as
\[M^{\Theta}_{f}(t,(\phi,r,w)) := f(t,\phi(t),w(t))-f(0,\phi(0),w(0))-\int_{0}^{t}\frac{\partial f }{\partial s}(s,\phi(s),w(s))\mathrm{d}s \tag{3.3}\] \[-\int_{0}^{t}\int_{\mathbb{R}^{d_{1}}}\mathcal{A}^{\Theta}_{s}(f )(s,\phi(s),y,w(s))r_{s}(\mathrm{d}y)\mathrm{d}s,\]
where for any \(s\in[0,T]\), \(x\in\overline{\mathcal{D}}\), \(y,z\in\mathbb{R}^{d_{1}}\), \(\mathcal{A}^{\Theta}_{s}(f)\) is defined as
\[\mathcal{A}^{\Theta}_{s}(f)(s,x,y,z):= \ \langle b(s,x,\nu_{\Theta}(s))+\sigma(s,x,\nu_{\Theta}(s))y, \nabla_{x}f(s,x,z)\rangle \tag{3.4}\] \[+\frac{1}{2}\sum_{i,j=1}^{d}(\sigma\sigma^{T})_{ij}(s,x,\nu_{ \Theta}(s))\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}(s,x,z)\] \[+\sum_{i=1}^{d}\sum_{j=1}^{d_{1}}\sigma_{ij}(s,x,\nu_{\Theta}(s) )\frac{\partial^{2}f}{\partial x_{i}\partial z_{j}}(s,x,z)\] \[+\frac{1}{2}\sum_{i=1}^{d_{1}}\frac{\partial^{2}f}{\partial z_{i }\partial z_{i}}(s,x,z).\]
The last result of this section is stated as follows.
**Theorem 3.3**.: _Let measure \(\Theta\in\mathcal{P}(\mathcal{Z})\) satisfy \(\Theta(\{(\phi,r,w)\in\mathcal{Z}:w(0)=0\})=1\). Then \(\Theta\) is a weak solution of equation (2.4) if and only if \(M^{\Theta}_{f}\) is a sub-martingale under \(\Theta\) with respect to the canonical filtration \((\mathcal{G}_{t})\) for all \(f\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) with \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\)._
Before proving Theorem 3.3, we state the following lemmas, whose proofs are similar to those of Lemma 2.2 and Theorem 2.4 in [19].
**Lemma 3.4**.: _Suppose that \(M^{\Theta}_{f}\) is a \(\Theta\)-sub-martingale for any \(f\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) satisfying \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\). Then, for each \(f\in C^{1,2,2}_{b}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) with \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\), \(M^{\Theta}_{f}(t)\) is a \(\Theta\)-local-sub-martingale._
Proof.: Assume that \(f\in C^{1,2,2}_{b}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) with \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\). For each \(n\geq 1\), choose \(\eta_{n}\in C^{\infty}_{0}(\mathbb{R}^{d_{1}})\) such that \(0\leq\eta_{n}\leq 1\), \(\eta_{n}=1\) on \(\{z\in\mathbb{R}^{d_{1}}:|z|\leq n\}\) and all derivatives of \(\eta_{n}\) up to the second order are uniformly bounded in \(n\). Let
\[f_{n}=\eta_{n}\cdot f.\]
Then \(f_{n}\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) and \(\langle\nabla_{x}f_{n}(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\). So \(M^{\Theta}_{f_{n}}(t)\) is a \(\Theta\)-sub-martingale. For each \(M\in\mathbb{N}\), define a stopping time
\[\tau_{M}((\phi,r,w)):=\inf\{t\in[0,T]:\int_{\mathbb{R}^{d_{1}}\times[0,t]}|y|r (\mathrm{d}y\times\mathrm{d}s)\geq M\}.\]
Then \(M^{\Theta}_{f_{n}}(t\wedge\tau_{M})\) is a \(\Theta\)-sub-martingale. Obviously, \(M^{\Theta}_{f_{n}}(t\wedge\tau_{M})\to M^{\Theta}_{f}(t\wedge\tau_{M})\) boundedly and so \(M^{\Theta}_{f}(t\wedge\tau_{M})\) is a \(\Theta\)-sub-martingale. It follows that \(M^{\Theta}_{f}(t)\) is a \(\Theta\)-local-sub-martingale.
**Lemma 3.5**.: _There exists a continuous, non-decreasing and adapted function \(\xi:[0,T]\times\mathcal{Z}\mapsto[0,\infty)\) such that \(\xi(0)=0\),_
\[\xi(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\phi(s))\mathrm{d}\xi(s),\]
_and for all \(f\in C^{1,2,2}_{b}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\),_
\[M^{\Theta}_{f}+\int_{0}^{t}\langle\nabla_{x}f(s,\phi(s),w(s)),\mathbf{n}(\phi (s))\rangle\mathrm{d}\xi(s)\]
_is a \(\Theta\)-local-martingale._
The proof of this lemma is a minor modification of the proof of Theorem 2.4 in [19]. We omit the details.
Now we come back to the proof of Theorem 3.3.
Proof.: The "\(\Rightarrow\)" part is obvious by Ito's formula.
Now we show the "\(\Leftarrow\)" part. From Lemma 3.5, we know that there exists a continuous, non-decreasing and adapted function \(\xi:[0,T]\times\mathcal{Z}\mapsto[0,\infty)\) such that \(\xi(0)=0\), \(\xi(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\phi(s))\mathrm{d}\xi(s)\), and for all \(f\in C^{1,2,2}_{b}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\),
\[\tilde{M}^{\Theta}_{f}(t,(\phi,r,w)):=M^{\Theta}_{f}(t,(\phi,r,w))+\int_{0}^{ t}\langle\nabla_{x}f(s,\phi(s),w(s)),\mathbf{n}(\phi(s))\rangle\mathrm{d}\xi(s)\]
is a \(\Theta\)-local-martingale.
In particular, for each \(i\in\{1,2,\ldots,d\}\), choose \(f^{i}(t,x,z)=x_{i}\) to obtain that
\[\tilde{M}^{\Theta}_{f^{i}}(t,(\phi,r,w)) :=\phi_{i}(t)-\phi_{i}(0)-\int_{0}^{t}b_{i}(s,\phi(s),\nu_{\Theta} (s))\mathrm{d}s\] \[\quad-\int_{0}^{t}\int_{\mathbb{R}^{d_{1}}}\sum_{k=1}^{d_{1}} \sigma_{ik}(s,\phi(s),\nu_{\Theta}(s))y_{k}r_{s}(\mathrm{d}y)\mathrm{d}s\] \[\quad+\int_{0}^{t}n_{i}(\phi(s))\mathrm{d}\xi(s)\]
is a \(\Theta\)-local-martingale.
For each \(i,j\in\{1,2,\ldots,d\}\), choosing \(f^{i,j}(t,x,z)=x_{i}x_{j}\), we see that
\[\begin{split}\tilde{M}^{\Theta}_{f^{i,j}}(t,(\phi,r,w))& :=\phi_{i}(t)\phi_{j}(t)-\phi_{i}(0)\phi_{j}(0)\\ &\quad-\int_{0}^{t}b_{i}(s,\phi(s),\nu_{\Theta}(s))\phi_{j}(s) \mathrm{d}s-\int_{0}^{t}b_{j}(s,\phi(s),\nu_{\Theta}(s))\phi_{i}(s)\mathrm{d}s \\ &\quad-\int_{0}^{t}\int_{\mathbb{R}^{d_{1}}}\sum_{k=1}^{d_{1}} \sigma_{ik}(s,\phi(s),\nu_{\Theta}(s))y_{k}\phi_{j}(s)r_{s}(\mathrm{d}y)\mathrm{ d}s\\ &\quad-\int_{0}^{t}\int_{\mathbb{R}^{d_{1}}}\sum_{k=1}^{d_{1}} \sigma_{jk}(s,\phi(s),\nu_{\Theta}(s))y_{k}\phi_{i}(s)r_{s}(\mathrm{d}y)\mathrm{ d}s\\ &\quad-\int_{0}^{t}\sum_{k=1}^{d_{1}}\sigma_{ik}\sigma_{kj}(s, \phi(s),\nu_{\Theta}(s))\mathrm{d}s+\int_{0}^{t}\phi_{i}(s)n_{j}(\phi(s)) \mathrm{d}\xi(s)\\ &\quad+\int_{0}^{t}\phi_{j}(s)n_{i}(\phi(s))\mathrm{d}\xi(s)\end{split} \tag{3.5}\]
is also a \(\Theta\)-local-martingale. Applying Ito's formula to \(\bar{X}_{i}(t)\bar{X}_{j}(t)\) and comparing with (3.5), we deduce that
\[\langle\tilde{M}^{\Theta}_{f^{i}},\tilde{M}^{\Theta}_{f^{j}}\rangle(t)=\int_{ 0}^{t}\sum_{k=1}^{d_{1}}\sigma_{ik}\sigma_{kj}(s,\phi(s),\nu_{\Theta}(s)) \mathrm{d}s. \tag{3.6}\]
For each \(i\in\{1,2,\ldots,d_{1}\}\), choose \(g^{i}\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) such that \(g^{i}(t,x,z)=z_{i}\) on \([0,T]\times\overline{\mathcal{D}}\times\{z\in\mathbb{R}^{d_{1}}:|z|\leq n\}\) and for each \(n\in\mathbb{N}\), define the stopping time \(\tau_{n}=\inf\{t\in[0,T]:|w(t)|>n\}\) to obtain that
\[\tilde{M}^{\Theta}_{g^{i}}(t\wedge\tau_{n},(\phi,r,w)):=w_{i}(t\wedge\tau_{n} )-w_{i}(0)\]
is a \(\Theta\)-martingale. It follows that \(\{\tilde{M}^{\Theta}_{g^{i}}(t),t\geq 0\}\) is a \(\Theta\)-local-martingale.
Similarly, for each \(i,j\in\{1,2,\ldots,d_{1}\}\), choosing \(g^{i,j}(t,x,z)\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times \mathbb{R}^{d_{1}})\) such that \(g^{i,j}(t,x,z)=z_{i}z_{j}\) on \([0,T]\times\overline{\mathcal{D}}\times\{z\in\mathbb{R}^{d_{1}}:|z|\leq n\}\), we have
\[\tilde{M}^{\Theta}_{g^{i,j}}(t,(\phi,r,w)):=w_{i}(t)w_{j}(t)-w_{i}(0)w_{j}(0)-\delta_{ij}t \tag{3.7}\]
is a \(\Theta\)-local-martingale. Applying Ito's formula to \(W_{i}(t)W_{j}(t)\) and comparing with (3.7), we deduce that
\[\langle\tilde{M}^{\Theta}_{g^{i}},\tilde{M}^{\Theta}_{g^{j}}\rangle(t)=\langle W _{i},W_{j}\rangle(t)=\delta_{ij}t. \tag{3.8}\]
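Here the compensator \(\delta_{ij}t\) in (3.7) is produced by the last term of the generator (3.4): for \(g^{i,j}\) and \(|z|\leq n\) only this term is nonzero, and
\[\frac{1}{2}\sum_{k=1}^{d_{1}}\frac{\partial^{2}(z_{i}z_{j})}{\partial z_{k}\partial z_{k}}=\delta_{ij}.\]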
Therefore, \(W(t)\) is a \((\mathcal{G}_{t})\)-Wiener process on \((\mathcal{Z},\mathcal{B}(\mathcal{Z}),(\mathcal{G}_{t}),\Theta)\).
For each \(i\in\{1,2,\ldots,d\}\), \(j\in\{1,2,\ldots,d_{1}\}\), by choosing \(h^{i,j}(t,x,z)\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times \mathbb{R}^{d_{1}})\) such that \(h^{i,j}(t,x,z)=x_{i}z_{j}\) on \([0,T]\times\overline{\mathcal{D}}\times\{z\in\mathbb{R}^{d_{1}}:|z|\leq n\}\), using a similar argument as (3.8), we see that
\[\langle\tilde{M}^{\Theta}_{f^{i}},\tilde{M}^{\Theta}_{g^{j}}\rangle(t)=\int_{ 0}^{t}\sigma_{ij}(s,\phi(s),\nu_{\Theta}(s))\mathrm{d}s. \tag{3.9}\]
Then, by (3.6), (3.8) and (3.9) and using the arguments in the proof of Theorem II.7.1' in [15], we obtain that
\[\tilde{M}^{\Theta}_{f^{i}}(t,(\phi,r,w))=\sum_{k=1}^{d_{1}}\int_{0}^{t}\sigma_{ik}(s,\phi(s),\nu_{\Theta}(s))\mathrm{d}W_{k}(s),\quad i=1,2,\ldots,d.\]
Therefore, the canonical process \((\bar{X},\rho,W)\), with the reflection term determined by \(\xi\), is a weak solution of equation (2.4).
## 4. Laplace Principle
In this section, we will prove the upper and the lower bound of the Laplace principle. We start by presenting an auxiliary lemma, which will be used in the proofs of the Laplace principle. Recall the controlled RSDEs (2.5), the empirical measure \(\bar{\mu}^{N}\) and the path space \(\mathcal{X}:=C([0,T];\overline{\mathcal{D}})\).
**Lemma 4.1**.: _Suppose Assumption 2.1 holds. Let \(\{h^{N},N\in\mathbb{N}\}\) be a sequence of elements in \(\mathcal{H}_{N}\) that satisfies_
\[\sup_{N\in\mathbb{N}}\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T}|h^{ N}_{i}(t)|^{2}\mathrm{d}t\right]<\infty. \tag{4.1}\]
_Then the family of the laws of the \(\mathcal{P}(\mathcal{X})\)-valued random variables \(\bar{\mu}^{N}\) is tight._
Proof.: For positive constants \(\alpha\) and \(M\), we define a mapping \(G_{\alpha}\) by
\[\mathcal{X}\ni f\mapsto G_{\alpha}(f):=\sup_{0\leq s<t\leq T}\frac{|f(t)-f(s) |}{|t-s|^{\alpha}}\]
and set
\[H_{M}:=\Big{\{}\mu\in\mathcal{P}\big{(}\mathcal{X}):\mu(G_{\alpha}(f))\leq M \Big{\}}.\]
We claim that \(H_{M}\) is tight (relatively compact) in \(\mathcal{P}\big{(}\mathcal{X}\big{)}\). Indeed, for each \(\tilde{M}\in(0,+\infty)\), since the domain \(\mathcal{D}\) is bounded, we see that the set \(B_{\tilde{M}}:=\big{\{}f\in\mathcal{X},G_{\alpha}(f)\leq\tilde{M}\big{\}}\) is relatively compact in \(\mathcal{X}\) according to the Arzela-Ascoli theorem. On the other hand, by Chebyshev's inequality, we have
\[\sup_{\mu\in H_{M}}\mu(B^{c}_{\tilde{M}}) =\sup_{\mu\in H_{M}}\mu\big{(}\{f\in\mathcal{X},G_{\alpha}(f)> \tilde{M}\}\big{)}\] \[\leq\sup_{\mu\in H_{M}}\frac{\mu(G_{\alpha}(f))}{\tilde{M}}\] \[\leq\frac{M}{\tilde{M}}.\]
Then for any \(\varepsilon>0\), there exists a constant \(\tilde{M}\) depending on \(\varepsilon\) such that
\[\sup_{\mu\in H_{M}}\mu(B^{c}_{\tilde{M}})\leq\varepsilon.\]
This shows that \(H_{M}\) is relatively compact in \(\mathcal{P}\big{(}\mathcal{X}\big{)}\). On the other hand, we have
\[\sup_{N\in\mathbb{N}}\mathbb{P}(\bar{\mu}^{N}\in H^{c}_{M})=\sup_{N\in \mathbb{N}}\mathbb{P}(\bar{\mu}^{N}(G_{\alpha}(f))>M)\leq\frac{\sup_{N\in \mathbb{N}}\mathbb{E}[\bar{\mu}^{N}(G_{\alpha}(f))]}{M}. \tag{4.2}\]
Next, we show that \(\mathbb{E}[\bar{\mu}^{N}(G_{\alpha}(f))]\) is uniformly bounded with respect to \(N\). From the controlled RSDE (2.5), for any \(0\leq s\leq t\leq T\), we have
\[\bar{X}^{i,N}(t)-\bar{X}^{i,N}(s) = \int_{s}^{t}b(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\mathrm{d}r\] \[+\int_{s}^{t}\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))h_{i}^{N }(r)\mathrm{d}r\] \[+\int_{s}^{t}\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\mathrm{d }W^{i}(r)\] \[-\int_{s}^{t}\mathbf{n}(\bar{X}^{i,N}(r))\mathrm{d}|\bar{K}^{i,N} |(r). \tag{4.3}\]
Applying Itô's formula to (4.3), we have
\[|\bar{X}^{i,N}(t)-\bar{X}^{i,N}(s)|^{2}\] \[= 2\int_{s}^{t}\langle\bar{X}^{i,N}(r)-\bar{X}^{i,N}(s),b(r,\bar{ X}^{i,N}(r),\bar{\mu}^{N}(r))\rangle\mathrm{d}r\] \[+2\int_{s}^{t}\langle\bar{X}^{i,N}(r)-\bar{X}^{i,N}(s),\sigma(r, \bar{X}^{i,N}(r),\bar{\mu}^{N}(r))h_{i}^{N}(r)\rangle\mathrm{d}r\] \[+2\int_{s}^{t}\langle\bar{X}^{i,N}(r)-\bar{X}^{i,N}(s),\sigma(r, \bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\mathrm{d}W^{i}(r)\rangle\] \[+\int_{s}^{t}\|\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\|^{2} \mathrm{d}r\] \[+2\int_{s}^{t}\langle\bar{X}^{i,N}(s)-\bar{X}^{i,N}(r),\mathbf{n} (\bar{X}^{i,N}(r))\rangle\mathrm{d}|\bar{K}^{i,N}|(r)\] \[=: I_{1}^{s,t}+I_{2}^{s,t}+I_{3}^{s,t}+I_{4}^{s,t}+I_{5}^{s,t}.\]
Now, it follows from the definition of \(G_{\alpha}\) that
\[\mathbb{E}[G_{\alpha}^{2}(\bar{X}^{i,N}(\cdot))] =\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{|\bar{X}^{i,N}(t)- \bar{X}^{i,N}(s)|^{2}}{|t-s|^{2\alpha}}\Big{]}\] \[\leq\sum_{i=1}^{5}\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_ {i}^{s,t}}{|t-s|^{2\alpha}}\Big{]}.\]
Note that there exists a positive constant \(\tilde{L}\) such that \(\mathcal{D}\subset B(0,\tilde{L})\).
Therefore, using Assumption 2.1(A3), we obtain that
\[\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_{1}^{s,t}}{|t-s|^{2\alpha}} \Big{]}\leq 4L\tilde{L}\mathbb{E}[\sup_{0\leq s<t\leq T}|t-s|^{1-2\alpha}], \tag{4.4}\]
and
\[\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_{4}^{s,t}}{|t-s|^{2\alpha}} \Big{]}\leq L^{2}\mathbb{E}[\sup_{0\leq s<t\leq T}|t-s|^{1-2\alpha}]. \tag{4.5}\]
By Hölder's inequality, we have
\[\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_{2}^{s,t}}{|t-s|^{2\alpha}} \Big{]} \tag{4.6}\]
\[\leq 2\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{\int_{s}^{t}|\bar{X}^{i,N}(r)-\bar{X}^{i,N}(s)|\cdot|\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))h_{i}^ {N}(r)|\mathrm{d}r}{|t-s|^{2\alpha}}\Big{]}\] \[\leq 4\tilde{L}\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{(\int_{s} ^{t}\|\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\|^{2}\mathrm{d}r)^{\frac{1}{ 2}}\cdot(\int_{s}^{t}|h_{i}^{N}(r)|^{2}\mathrm{d}r)^{\frac{1}{2}}}{|t-s|^{2 \alpha}}\Big{]}\] \[\leq 4L\tilde{L}\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}|t-s|^{\frac{ 1}{2}-2\alpha}\cdot(\int_{s}^{t}|h_{i}^{N}(r)|^{2}\mathrm{d}r)^{\frac{1}{2}} \Big{]}.\]
Since the domain \(\mathcal{D}\) is convex, we have \(I_{5}^{s,t}\leq 0\) and hence
\[\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_{5}^{s,t}}{|t-s|^{2\alpha}} \Big{]}\leq 0. \tag{4.7}\]
Set
\[Z^{i}(t):=\int_{0}^{t}\langle\bar{X}^{i,N}(r),\sigma(r,\bar{X}^{i,N}(r),\bar{ \mu}^{N}(r))\mathrm{d}W^{i}(r)\rangle,\]
and
\[Y^{i}(t):=\int_{0}^{t}\sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\mathrm{d}W^{i}(r).\]
Then, we have
\[I_{3}^{s,t}=2(Z^{i}(t)-Z^{i}(s))-2\langle\bar{X}^{i,N}(s),Y^{i}(t)-Y^{i}(s)\rangle. \tag{4.8}\]
We now proceed to give an estimate for \(|Z^{i}(t)-Z^{i}(s)|\). By the Burkholder-Davis-Gundy inequality, for each \(p>2\) and \(0\leq s\leq t\leq T\),
\[\mathbb{E}[|Z^{i}(t)-Z^{i}(s)|^{2p}]\] \[= \mathbb{E}\Big{[}\Big{|}\int_{s}^{t}\langle\bar{X}^{i,N}(r), \sigma(r,\bar{X}^{i,N}(r),\bar{\mu}^{N}(r))\mathrm{d}W^{i}(r)\rangle\Big{|}^{ 2p}\Big{]}\] \[\leq C\tilde{L}^{2p}L^{2p}|t-s|^{p},\]
where \(C\) is a positive constant depending on \(d_{1}\) and \(p\). Applying the Garsia-Rodemich-Rumsey lemma (see Corollary 1.2 in [20]), there exists a random variable \(A_{i}\) such that with probability one, for all \(0\leq s\leq t\leq T\),
\[|Z^{i}(t,\omega)-Z^{i}(s,\omega)|\leq A_{i}(\omega)|t-s|^{\frac{p-2}{2p}},\]
where \(\mathbb{E}[A_{i}^{2p}]\leq C\tilde{L}^{2p}L^{2p}.\) By similar arguments, there exists also a random variable \(B_{i}\) such that with probability one, for all \(0\leq s\leq t\leq T\),
\[|Y^{i}(t,\omega)-Y^{i}(s,\omega)|\leq B_{i}(\omega)|t-s|^{\frac{p-2}{2p}},\]
where \(\mathbb{E}[B_{i}^{2p}]\leq CL^{2p}.\) It follows from (4.8) that
\[\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{I_{3}^{s,t}}{|t-s|^ {2\alpha}}\Big{]} \leq C\mathbb{E}\Big{[}\sup_{0\leq s<t\leq T}\frac{|Z^{i}(t)-Z^{ i}(s)|+|Y^{i}(t)-Y^{i}(s)|}{|t-s|^{2\alpha}}\Big{]}\] \[\leq C\mathbb{E}[(A_{i}+B_{i})\sup_{0\leq s<t\leq T}|t-s|^{\frac{ p-2}{2p}-2\alpha}]. \tag{4.9}\]
Let \(p=4\) and \(\alpha=\frac{1}{8}\). It follows from (4.4)-(4.9) that
\[\mathbb{E}[G_{\frac{1}{8}}^{2}(\bar{X}^{i,N}(\cdot))] \leq\Big{\{}(4L\tilde{L}+L^{2})T^{\frac{3}{4}}+C\mathbb{E}[A_{i}+B _{i}]+4L\tilde{L}T^{\frac{1}{4}}\mathbb{E}[(\int_{0}^{T}|h_{i}^{N}(r)|^{2} \mathrm{d}r)^{\frac{1}{2}}]\Big{\}}\] \[\leq C\big{(}1+\mathbb{E}[\int_{0}^{T}|h_{i}^{N}(r)|^{2}\mathrm{d }r]\big{)},\]
for some positive constant \(C\) independent of \(N\).
Therefore, using the condition (4.1), for each \(N\in\mathbb{N}\),
\[\mathbb{E}[\bar{\mu}^{N}(G_{\frac{1}{8}}(\cdot))] =\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[G_{\frac{1}{8}}(\bar{X}^{i,N }(\cdot))]\] \[\leq\frac{1}{N}\sum_{i=1}^{N}(1+\mathbb{E}[G_{\frac{1}{8}}^{2}( \bar{X}^{i,N}(\cdot))])\] \[\leq\tilde{C},\]
where \(\tilde{C}\) is independent of \(N\). In combination with (4.2), it follows that for any \(\varepsilon>0\), there exists a positive constant \(M\) depending on \(\varepsilon\) such that
\[\sup_{N\in\mathbb{N}}\mathbb{P}(\bar{\mu}^{N}\in H_{M}^{c})\leq\varepsilon.\]
This implies that the family of the laws of \(\{\bar{\mu}^{N},N\in\mathbb{N}\}\) is tight in \(\mathcal{P}\big{(}\mathcal{P}\big{(}\mathcal{X}\big{)}\big{)}\).
### Laplace Upper Bound
In this subsection, we establish the Laplace upper bound (2.6) in **Step 1**, namely, we will show that for any sequence \(\{h^{N},N\in\mathbb{N}\}\) with \(h^{N}\in\mathcal{H}_{N}\),
\[\liminf_{N\to\infty}\Big{\{}\frac{1}{2}\mathbb{E}\Big{[} \frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T}|h_{i}^{N}|^{2}\mathrm{d}t\Big{]}+\mathbb{ E}\big{[}F(\bar{\mu}^{N})\big{]}\Big{\}}\] \[\geq\inf_{\Theta\in\mathcal{P}_{\infty}}\Big{\{}\frac{1}{2}\int_ {\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}r(\mathrm{d}y \times\mathrm{d}t)\Theta_{\mathcal{R}}(\mathrm{d}r)+F(\Theta_{\mathcal{X}}) \Big{\}}. \tag{4.10}\]
For a sequence \(\{h^{N}\in\mathcal{H}_{N},N\in\mathbb{N}\}\), we may assume that
\[\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T}|h_{i}^{N}(t)|^{2} \mathrm{d}t\right]\leq 4\|F\|_{\infty},\]
since otherwise the inequality (4.10) is automatic.
For each \(N\in\mathbb{N}\), define a \(\mathcal{P}(\mathcal{Z})\)-valued random variable by
\[Q_{\omega}^{N}(A\times R\times B):=\frac{1}{N}\sum_{i=1}^{N}\delta_{\bar{X}^{i,N}(\cdot,\omega)}(A)\cdot\delta_{\rho_{\omega}^{i,N}}(R)\cdot\delta_{W^{i}( \cdot,\omega)}(B) \tag{4.11}\]
for any \(A\times R\times B\in\mathcal{B}(\mathcal{Z})\) and \(\omega\in\Omega\), where \(\bar{X}^{i,N}\) is the solution of equation (2.5) with \(h^{N}=(h_{1}^{N},...,h_{N}^{N})\) and
\[\rho_{\omega}^{i,N}(B\times I):=\int_{I}\delta_{h_{i}^{N}(t,\omega)}(B) \mathrm{d}t,\quad B\in\mathcal{B}(\mathbb{R}^{d_{1}}),\quad I\in\mathcal{B}( [0,T]),\quad\omega\in\Omega.\]
**Lemma 4.2**.: _The family \(\{Q^{N},N\in\mathbb{N}\}\) of \(\mathcal{P}(\mathcal{Z})\)-valued random variables is tight._
Proof.: By Lemma 4.1, we know that the family of the first marginals of \(\{Q^{N},N\in\mathbb{N}\}\) is tight. The rest of the proof is the same as that of Lemma 5.1 in [5]; we omit the details.
The following result is crucial for the proof of the Laplace upper bound.
**Theorem 4.3**.: _Let \(\{Q^{N_{j}},j\in\mathbb{N}\}\) be a weakly convergent subsequence of \(\{Q^{N},N\in\mathbb{N}\}\), with \(Q\) being a \(\mathcal{P}(\mathcal{Z})\)-valued random variable defined on some probability space \((\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})\) such that \(Q^{N_{j}}\overset{j\to\infty}{\longrightarrow}Q\) in distribution. Then, \(Q_{\omega}\in\mathcal{P}_{\infty}\) for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\), where \(\mathcal{P}_{\infty}\) is the family (defined in Section 2) of weak solutions of the controlled reflected McKean-Vlasov equations:_
\[\begin{cases}\mathrm{d}\bar{X}(t)=b(t,\bar{X}(t),\nu_{\Theta}(t))\mathrm{d}t+ \int_{\mathbb{R}^{d_{1}}}\sigma(t,\bar{X}(t),\nu_{\Theta}(t))y\rho_{t}( \mathrm{d}y)\mathrm{d}t\\ \qquad\qquad+\sigma(t,\bar{X}(t),\nu_{\Theta}(t))\mathrm{d}W(t)-\mathrm{d}\bar {K}(t),\\ |\bar{K}|(t)=\int_{0}^{t}\mathbf{1}_{\partial\mathcal{D}}(\bar{X}(s)) \mathrm{d}|\bar{K}|(s),\quad\bar{K}(t)=\int_{0}^{t}\mathbf{n}(\bar{X}(s)) \mathrm{d}|\bar{K}|(s),\\ \nu_{\Theta}(t)=\mathit{Law}(\bar{X}(t)),\quad\bar{X}(t)\in\overline{ \mathcal{D}},\quad t\in[0,T].\end{cases} \tag{4.12}\]
Proof.: Set \(I:=\{N_{j},j\in\mathbb{N}\}\) and write \((Q^{n})_{n\in I}\) for \((Q^{N_{j}})_{j\in\mathbb{N}}\), so that \(Q^{n}\to Q\) in distribution. Note that by Fatou's Lemma (see Theorem A.3.12 in [13]) and Assumption 2.1(A2), it is easy to see that for \(\tilde{\mathbb{P}}\)-a.s. \(\omega\in\tilde{\Omega}\), \(Q_{\omega}\) satisfies conditions (i) and (iii) in the definition of \(\mathcal{P}_{\infty}\). To complete the proof, we need to prove that for \(\tilde{\mathbb{P}}\)-a.s. \(\omega\in\tilde{\Omega}\), \(Q_{\omega}\) is a weak solution to equation (4.12).
According to Theorem 3.3, a probability measure \(\Theta\in\mathcal{P}(\mathcal{Z})\) with \(\Theta(\{(\phi,r,w)\in\mathcal{Z}:w(0)=0\})=1\) is a weak solution of equation (4.12) if for all \(f\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) with \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\), \(M^{\Theta}_{f}\) (defined in (3.3) and (3.4)) is a sub-martingale under \(\Theta\) with respect to the canonical filtration \((\mathcal{G}_{t})\), i.e., the following holds
\[\mathbb{E}_{\Theta}[\Psi\cdot(M^{\Theta}_{f}(t_{1})-M^{\Theta}_{f}(t_{0}))]\geq 0 \tag{4.13}\]
for all \(t_{0},t_{1}\in[0,T]\) with \(t_{0}\leq t_{1}\), and \(\mathcal{G}_{t_{0}}\)-measurable \(\Psi\in C^{+}_{b}(\mathcal{Z})\). Here \(C^{+}_{b}(\mathcal{Z})\) denotes the space of nonnegative functions in \(C_{b}(\mathcal{Z})\).
Note that it is enough to show that (4.13) holds for a countable collection of times \(t_{0}\) and \(t_{1}\) that is dense in \([0,T]\), a countable collection of \(\Psi\in C^{+}_{b}(\mathcal{Z})\) that generates the \(\sigma\)-algebras \(\mathcal{G}_{t_{0}}\), and a countable collection of test functions \(f\) that is dense in \(C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}})\) with \(\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\) on \([0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\). Thus, there is a countable collection \(\mathcal{I}\subset[0,T]^{2}\times C^{+}_{b}(\mathcal{Z})\times\{f\in C^{1,2,2}_{0}([0,T]\times\overline{\mathcal{D}}\times\mathbb{R}^{d_{1}}):\langle\nabla_{x}f(t,x,z),\mathbf{n}(x)\rangle\leq 0\text{ on }[0,T]\times\partial\mathcal{D}\times\mathbb{R}^{d_{1}}\}\) of test parameters such that if (4.13) holds for all \((t_{0},t_{1},\Psi,f)\in\mathcal{I}\), then \(\Theta\) is a weak solution of equation (4.12).
To verify the sub-martingale property of \(M^{\Theta}_{f}\) with \(\Theta=Q_{\omega}\), \(\omega\in\tilde{\Omega}\), we introduce a continuous function with compact support. For each \(B\in(0,\infty)\), let \(g_{B}:\mathbb{R}^{d_{1}}\to\mathbb{R}^{d_{1}}\) be a continuous function with compact support satisfying \(g_{B}(y)=y\) for \(|y|\leq B\) and \(|g_{B}(y)|\leq|y|+1\) for every \(y\in\mathbb{R}^{d_{1}}\). Also, define the operator \(\mathcal{A}^{\Theta,B}_{s}\) by replacing \(y\) on the right side of (3.4) with \(g_{B}(y)\) and define \(M^{\Theta,B}_{f}\) by replacing \(\mathcal{A}^{\Theta}_{s}\) in (3.3) with \(\mathcal{A}^{\Theta,B}_{s}\).
For each \((t_{0},t_{1},\Psi,f)\in\mathcal{I}\), define \(\Phi=\Phi_{(t_{0},t_{1},\Psi,f)}\) by
\[\mathcal{P}(\mathcal{Z})\ni\Theta\mapsto\Phi(\Theta):=\bigg{(}\mathbb{E}_{ \Theta}[\Psi\cdot(M_{f}^{\Theta}(t_{1})-M_{f}^{\Theta}(t_{0}))]\bigg{)}^{-}, \tag{4.14}\]
where for each \(a\in\mathbb{R}\), \(a^{-}:=\max\{-a,0\}\). Similarly, for each \(B\in(0,\infty)\), define \(\Phi_{B}\) by replacing \(M_{f}^{\Theta}\) in (4.14) with \(M_{f}^{\Theta,B}\).
Similarly to the proof of Lemma 3.2 in [3], we obtain that (i) \(\forall B\in(0,\infty)\), \(\Phi_{B}\) is continuous on \(\mathcal{P}(\mathcal{Z})\), (ii) \(\sup_{n}\mathbb{E}[\Phi(Q^{n})-\Phi_{B}(Q^{n})]\to 0\), and (iii) \(\tilde{\mathbb{E}}|\Phi(Q)-\Phi_{B}(Q)|\to 0\) as \(B\to\infty\). For fixed \(B>0\), \(\Phi_{B}(Q^{n})\to\Phi_{B}(Q)\) in distribution. The above properties yield that \(\Phi(Q^{n})\to\Phi(Q)\) in distribution as well. By the definition of \(Q^{n}\), for \(\omega\in\Omega\), we obtain
\[\Phi(Q^{n}_{\omega})\] \[= \Big{(}\mathbb{E}_{Q^{n}_{\omega}}\left[\Psi\cdot\left(M_{f}^{Q^{ n}_{\omega}}(t_{1})-M_{f}^{Q^{n}_{\omega}}(t_{0})\right)\right]\Big{)}^{-}\] \[= \Big{(}\frac{1}{n}\sum_{i=1}^{n}\Psi\big{(}(\bar{X}^{i,n}(., \omega),\rho^{i,n}_{\omega},W^{i}(.,\omega))\big{)}\] \[\cdot\big{(}f(t_{1},\bar{X}^{i,n}(t_{1},\omega),W^{i}(t_{1}, \omega))-f(t_{0},\bar{X}^{i,n}(t_{0},\omega),W^{i}(t_{0},\omega))\] \[-\int_{t_{0}}^{t_{1}}f_{s}(s,\bar{X}^{i,n}(s,\omega),W^{i}(s, \omega))\mathrm{d}s\] \[-\int_{t_{0}}^{t_{1}}\mathcal{A}_{s}^{Q^{n}_{\omega}}(f)(s,\bar{ X}^{i,n}(s,\omega),h^{n}_{i}(s,\omega),W^{i}(s,\omega))\mathrm{d}s\big{)} \Big{)}^{-},\]
where \(\mathcal{A}^{Q^{n}_{\omega}}\) is defined in (3.4) with \(\bar{\mu}^{n}_{\omega}\) in place of \(\nu_{\Theta}\).
Applying Itô's formula, for each \(i\), we have
\[f\big{(}t_{1},\bar{X}^{i,n}(t_{1}),W^{i}(t_{1})\big{)}-f\big{(} t_{0},\bar{X}^{i,n}(t_{0}),W^{i}(t_{0})\big{)}\] \[\quad-\int_{t_{0}}^{t_{1}}f_{s}(s,\bar{X}^{i,n}(s),W^{i}(s)) \mathrm{d}s\] \[\quad-\int_{t_{0}}^{t_{1}}\mathcal{A}_{s}^{Q^{n}}(f)\big{(}s,\bar {X}^{i,n}(s),h^{n}_{i}(s),W^{i}(s)\big{)}\mathrm{d}s\] \[= \int_{t_{0}}^{t_{1}}\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}( s)),\sigma(s,\bar{X}^{i,n}(s),\bar{\mu}^{n}(s))\mathrm{d}W^{i}(s)\rangle\] \[\quad+\int_{t_{0}}^{t_{1}}\langle\nabla_{z}f(s,\bar{X}^{i,n}(s),W ^{i}(s)),\mathrm{d}W^{i}(s)\rangle\] \[\quad-\int_{t_{0}}^{t_{1}}\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W ^{i}(s)),\mathrm{d}\bar{K}^{i,n}_{s}\rangle.\]
Keeping in mind that \(\Psi\) is \(\mathcal{G}_{t_{0}}\)-measurable and non-negative, we have
\[\mathbb{E}[\Phi^{2}(Q^{n})]\] \[= \mathbb{E}\bigg{[}\bigg{(}\Big{(}\mathbb{E}_{Q^{n}_{\omega}}\big{[} \Psi\cdot\left(M_{f}^{Q^{n}_{\omega}}(t_{1})-M_{f}^{Q^{n}_{\omega}}(t_{0}) \right)\big{]}\Big{)}^{-}\bigg{)}^{2}\bigg{]}\] \[= \mathbb{E}\bigg{[}\bigg{(}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{t _{0}}^{t_{1}}\Psi\cdot\big{(}\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}(s)), \sigma(s,\bar{X}^{i,n}(s),\bar{\mu}^{n}(s))\mathrm{d}W^{i}(s)\rangle\]
\[+\langle\nabla_{z}f(s,\bar{X}^{i,n}(s),W^{i}(s)),\mathrm{d}W^{i}(s)\rangle-\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}(s)),\mathrm{d}\bar{K}^{i,n}_{s}\rangle\big{)}\Big{)}^{-}\bigg{)}^{2}\bigg{]}\] \[\leq \mathbb{E}\bigg{[}\bigg{(}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{t_{0}}^{t_{1}}\Psi\cdot\big{(}\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}(s)),\sigma(s,\bar{X}^{i,n}(s),\bar{\mu}^{n}(s))\mathrm{d}W^{i}(s)\rangle\] \[+\langle\nabla_{z}f(s,\bar{X}^{i,n}(s),W^{i}(s)),\mathrm{d}W^{i}(s)\rangle\big{)}\Big{)}^{-}\] \[+\Big{(}-\frac{1}{n}\sum_{i=1}^{n}\int_{t_{0}}^{t_{1}}\Psi\cdot\langle\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}(s)),\mathrm{d}\bar{K}^{i,n}_{s}\rangle\Big{)}^{-}\bigg{)}^{2}\bigg{]}\] \[\leq \frac{1}{n^{2}}\sum_{i=1}^{n}\mathbb{E}\bigg{[}\int_{t_{0}}^{t_{1}}\Big{|}\Psi\cdot\Big{(}\nabla_{z}f(s,\bar{X}^{i,n}(s),W^{i}(s))\] \[+\nabla_{x}f(s,\bar{X}^{i,n}(s),W^{i}(s))\sigma(s,\bar{X}^{i,n}(s),\bar{\mu}^{n}(s))\Big{)}\Big{|}^{2}\mathrm{d}s\bigg{]}\] \[\stackrel{{ n\to\infty}}{{\longrightarrow}} 0,\]
where we have used the fact that
\[-\frac{1}{n}\sum_{i=1}^{n}\int_{t_{0}}^{t_{1}}\Psi\cdot\langle\nabla_{x}f(s, \bar{X}^{i,n}(s),W^{i}(s)),\mathrm{d}\bar{K}^{i,n}_{s}\rangle\geq 0.\]
Since \(\mathcal{I}\) is countable, it follows that for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\) and all \((t_{0},t_{1},\Psi,f)\in\mathcal{I}\),
\[\mathbb{E}_{Q_{\omega}}[\Psi\cdot(M_{f}^{Q_{\omega}}(t_{1})-M_{f}^{Q_{\omega}}(t_{0}))]\geq 0,\]
which implies that for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\), \(Q_{\omega}\) is a weak solution of equation (4.12) and we complete the proof of the theorem.
Now we are ready to complete the proof of the upper bound (2.6). By the definition of \(Q^{N}\), the left-hand side of (2.6) can be rewritten as
\[\frac{1}{2}\mathbb{E}\bigg{[}\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{T} |h^{N}_{i}(t)|^{2}\mathrm{d}t\bigg{]}+\mathbb{E}[F(\bar{\mu}^{N})]\] \[= \int_{\Omega}\bigg{[}\int_{\mathcal{R}_{1}}\bigg{(}\frac{1}{2} \int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}r(\mathrm{d}y\times\mathrm{d}t) \bigg{)}Q^{N}_{\omega,\mathcal{R}}(\mathrm{d}r)+F(Q^{N}_{\omega,\mathcal{X}}) \bigg{]}\mathbb{P}(\mathrm{d}\omega).\]
Since the function \(F\) is bounded and continuous, the Laplace upper bound now follows immediately from Lemma 4.2, Theorem 4.3 and Fatou's Lemma.
### Laplace Lower Bound
In this subsection, we prove the Laplace lower bound (2.7) in **Step 2**. The proof is very similar to the ones in [3] and [5]. For completeness, we give a sketch.
Let \(\Theta\in\mathcal{P}_{\infty}\). We will construct a sequence \(\{h^{N},N\in\mathbb{N}\}\) with \(h^{N}\in\mathcal{H}_{N}\) on a common stochastic basis satisfying (2.7):
\[\limsup_{N\to\infty}\Big{\{}\frac{1}{2}\mathbb{E}\big{[}\frac{1}{N }\sum_{i=1}^{N}\int_{0}^{T}|h^{N}_{i}|^{2}\mathrm{d}t\big{]}+\mathbb{E}[F(\bar{ \mu}^{N})]\Big{\}}\] \[\leq \frac{1}{2}\int_{\mathcal{R}_{1}}\int_{\mathbb{R}^{d_{1}}\times[0, T]}|y|^{2}r(\mathrm{d}y\times\mathrm{d}t)\Theta_{\mathcal{R}}(\mathrm{d}r)+F( \Theta_{\mathcal{X}}).\]
Recall \((\bar{X},\rho,W)\) from Section 3, which is the canonical process on \((\mathcal{Z},\mathcal{B}(\mathcal{Z}),(\mathcal{G}_{t}),\Theta)\) and \(\bar{v}\) from Section 2, which is the map from \(\mathcal{Z}\) to \(\mathcal{Z}^{0}\). Then we can disintegrate \(\Theta\circ\bar{v}^{-1}\) as
\[\Theta\circ\bar{v}^{-1}:=\nu_{0}(\mathrm{d}\phi_{0})\Theta_{\mathcal{W}}( \mathrm{d}w)\bar{\lambda}(\mathrm{d}r|\phi_{0},w).\]
Define a product space \((\Omega_{\infty},\mathcal{F}^{\infty})\) as the countably infinite product of \((\mathcal{R}_{1}\times\mathcal{W},\mathcal{B}(\mathcal{R}_{1}\times\mathcal{W }))\) and write \((r^{\prime},w^{\prime})\) as the element of \(\Omega_{\infty}\), where \(r^{\prime}=(r_{1},r_{2},\ldots)\) and \(w^{\prime}=(w_{1},w_{2},\ldots)\). For each \(i\in\mathbb{N}\), define
\[W^{i,\infty}(t,(r^{\prime},w^{\prime})):=w_{i}(t),\ \ \rho^{i,\infty}(r^{ \prime},w^{\prime}):=r_{i},\]
and
\[h_{i}^{\infty}(t,(r^{\prime},w^{\prime})):=\int_{\mathbb{R}^{d_{1}}}y\rho^{i, \infty}_{(r^{\prime},w^{\prime}),t}(\mathrm{d}y),\ (r^{\prime},w^{\prime})\in\Omega_{\infty},\ t\in[0,T],\]
where \(\rho^{i,\infty}_{(r^{\prime},w^{\prime}),t}\) is the derivative measure of \(\rho^{i,\infty}(r^{\prime},w^{\prime})\) at time \(t\). Furthermore, for each \(N\in\mathbb{N}\), define \(\mathbb{P}_{N}\in\mathcal{P}(\Omega_{\infty})\) as
\[\mathbb{P}_{N}(\mathrm{d}r,\mathrm{d}w):=\bigotimes_{i=1}^{N}\Theta_{\mathcal{ W}}(\mathrm{d}w_{i})\bar{\lambda}(\mathrm{d}r_{i}|x^{i,N},w_{i})\bigotimes_{i=N+1}^{ \infty}\Theta\circ(\rho,W)^{-1}(\mathrm{d}r_{i},\mathrm{d}w_{i}),\]
and define a \(\mathcal{P}(\overline{\mathcal{D}}\times\mathcal{R}_{1}\times\mathcal{W})\)-valued random variable by
\[\lambda^{N}(A\times R\times B):=\frac{1}{N}\sum_{i=1}^{N}\delta_{x^{i,N}}(A) \delta_{\rho^{i,\infty}}(R)\delta_{W^{i,\infty}}(B),\]
for any \(A\times R\times B\in\mathcal{B}(\overline{\mathcal{D}}\times\mathcal{R}_{1} \times\mathcal{W})\), where \(\{x^{i,N}\}\) are the initial values.
By construction, under \(\mathbb{P}_{N}\), \(\{W^{i,\infty},i\in\{1,\ldots,N\}\}\) are independent Wiener processes. By Assumption 2.1(A2), we obtain that
\[\mathbb{P}_{N}\circ(\lambda^{N})^{-1}\to\delta_{\Theta\circ\bar{v}^{-1}} \tag{4.15}\]
and
\[\limsup_{N\to\infty}\mathbb{E}_{\mathbb{P}_{N}}\bigg{[}\frac{1}{N }\sum_{i=1}^{N}\int_{0}^{T}|h_{i}^{\infty}(t)|^{2}\mathrm{d}t\bigg{]}\] \[= \limsup_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\int_{\mathcal{R}_{1 }\times\mathcal{W}}\int_{0}^{T}|\int_{\mathbb{R}^{d_{1}}}yr_{t}(\mathrm{d}y)|^ {2}\mathrm{d}t\bar{\lambda}(\mathrm{d}r|x^{i,N},w)\Theta_{\mathcal{W}}( \mathrm{d}w)\] \[= \mathbb{E}_{\Theta}\bigg{[}\int_{0}^{T}|\int_{\mathbb{R}^{d_{1}} }y\rho_{t}(\mathrm{d}y)|^{2}\mathrm{d}t\bigg{]}\leq\mathbb{E}_{\Theta}\bigg{[} \int_{\mathbb{R}^{d_{1}}\times[0,T]}|y|^{2}\rho(\mathrm{d}y\times\mathrm{d}t) \bigg{]}<\infty. \tag{4.16}\]
In analogy with (4.11), for each \(N\in\mathbb{N}\), define a \(\mathcal{P}(\mathcal{Z})\)-valued random variable by
\[\tilde{Q}^{N}(A\times R\times B):=\frac{1}{N}\sum_{i=1}^{N}\delta_{\tilde{X}^{ i,N}}(A)\cdot\delta_{\rho^{i,\infty}}(R)\cdot\delta_{W^{i,\infty}}(B),\]
for any \(A\times R\times B\in\mathcal{B}(\mathcal{Z})\), where \((\tilde{X}^{1,N},\ldots,\tilde{X}^{N,N})\) is the solution of the system (2.5) with \(h^{N}=(h_{1}^{\infty},\ldots,h_{N}^{\infty})\) and \(\tilde{\mu}^{N}(t)=\frac{1}{N}\sum_{i=1}^{N}\delta_{\tilde{X}^{i,N}(t)}\) for each \(t\in[0,T]\).
By (4.16) and Lemma 4.1, we know that \(\{\tilde{Q}^{N},N\in\mathbb{N}\}\) is tight. Let \(\tilde{Q}\) be a limit point of \(\{\tilde{Q}^{N},N\in\mathbb{N}\}\) defined on some probability space \((\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})\). According to Theorem 4.3 and its proof, \(\tilde{Q}_{\omega}\in\mathcal{P}_{\infty}\) for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\). Moreover, by (4.15), we derive that for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\), \(\tilde{Q}_{\omega}\circ\bar{v}^{-1}=\Theta\circ\bar{v}^{-1}\). Therefore, by
Assumption 2.1(A4), for \(\tilde{\mathbb{P}}\)-almost all \(\omega\in\tilde{\Omega}\), \(\tilde{Q}_{\omega}=\Theta\). In combination with (4.16) and the fact that \(F\) in (2.7) is bounded and continuous, the Laplace lower bound is immediate.
**Acknowledgement**. This work is partially supported by the National Key R&D Program of China (No. 2022YFA1006001) and the National Natural Science Foundation of China (Nos. 12131019, 11971456, 11721101).
# Dense Random Packing of Disks With a Power-Law Size Distribution in Thermodynamic Limit: Fractal-like Properties

Alexander Yu. Cherny, Eugen M. Anitas, Artem A. Vladimirov, Vladimir A. Osipov

arXiv:2308.01726v1, 3 August 2023, http://arxiv.org/abs/2308.01726v1
###### Abstract
The correlation properties of a dense random system of disks with a power-law size distribution are analyzed in momentum space in the thermodynamic limit. The limit assumes that the total number of the disks increases infinitely, while the mean density of the disk centers and the range of the size distribution are kept constant. The structure factor dependence on momentum transfer is investigated for various numbers of disks and extrapolated to the thermodynamic limit. The fractal power-law decay of the structure factor is recovered in momentum space within the fractal range, which corresponds to the range of the size distribution in real space. The fractal exponent coincides with the exponent of the power-law size distribution, as was shown previously by the authors [Cherny, Anitas, Osipov, _J. Chem. Phys._ **158**(4), 044114 (2023)]. The finite-size effects are also studied at very small momentum, of the order of the inverse system size. We show that the structure factor is parabolic in this region and calculate the prefactor analytically. The obtained results reveal fractal-like properties of the packing. Our findings can be used to analyze small-angle scattering from such systems.
dense random packing; power-law polydispersity; random sequential addition; Delaunay triangulation; fractals; structure factor.
## I Introduction
Dense random packings of granular particles or pores with sizes following a power-law distribution are highly relevant for various applications [1], including emulsion [2], soil [3], ceramics [4] and concrete technologies [5]. For hard materials this is because power-law size distributions typically give the highest packing densities, as required in applications where very high resistance is needed.
Besides the practical interest, there is a fundamental problem of understanding the correlation properties in such systems. They exhibit complex packings with fractal properties [6; 7; 8], which are controlled by two parameters: the range of the power-law size distribution and its exponent. Thus, this type of packing is characterized by a new kind of symmetry, termed self-similarity [9].
The dense packing limit used in the previous paper [8] assumes that the total number of the disks tends to infinity but the area occupied by the system remains finite. In this limit, the lower border of the size distribution tends to zero, which implies that the maximum-to-minimum particle size ratio increases infinitely. Here, we extend our results [8] to a more realistic case of a macroscopic number of disks and a fixed range of the size distribution. Then we study the structure factor as a function of the momentum transfer in the thermodynamic limit, where the area \(A\) and the number of disks \(N\) are increased while \(A/N={\rm const}\).
To this end, we propose a time-saving algorithm for dense random packing of disks with a given size distribution, based on the Delaunay triangulation (DT) [10]. The triangulation is commonly used for compact packing problems for disks in 2D and spheres in 3D [11] and for the analysis of pore space in porous media [12]. For dense packing, the most time-consuming task is to check for possible intersections between a newly placed disk and the set of \(N\) already placed disks. The DT helps us to optimize the computation time. This algorithm needs only \(O(\log N)\) repeated operations to select neighbours of the newly placed disk and calculate distances to them. By contrast, without the DT the same procedure requires \(O(N)\) operations. For validation of the DT-based algorithm, the results from a modified version of the random-sequential-addition (RSA) algorithm used in Ref. [8] are also included.
In addition, we find that \(S(q)\propto q^{2}\) for the small momentum transfer \(q\lesssim 2\pi/s\) (here \(S(q)\) and \(s\) are the structure factor and system size, respectively). As is shown in Sec. III.1 below, the prefactor is determined by the variance of the disk positions, which are randomly distributed over the area.
## II Algorithms of Disk Fractal Generation
We consider a set of \(N\) disks obeying the power-law distribution with the exponent \(D\) [6; 8]. The number of disks \(dN(r)\) whose radii fall within the range (\(r,r+dr\)) is proportional to \(dr/r^{D+1}\). The radii vary from \(a\) to \(R\). In two dimensions, the exponent satisfies the condition \(0<D<2\).
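As an illustration, radii with the density \(p(r)\propto r^{-D-1}\) truncated to \([a,R]\) can be generated by inverse-transform sampling. The following minimal sketch (the function and variable names are ours, not from the paper) inverts the cumulative distribution \(F(r)=(a^{-D}-r^{-D})/(a^{-D}-R^{-D})\):

```python
import numpy as np

def sample_radii(n, a, R, D, rng=None):
    """Draw n radii from p(r) ~ r^(-D-1) truncated to [a, R]."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    # invert F(r) = (a^-D - r^-D) / (a^-D - R^-D) at u ~ U(0, 1)
    return (a**(-D) - u * (a**(-D) - R**(-D)))**(-1.0 / D)

radii = sample_radii(100000, a=0.01, R=1.0, D=1.5)
# Well inside (a, R), the fraction of radii above x scales roughly as x^(-D).
```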
### Random-sequential-addition grid
The RSA-based algorithm was described in detail in Ref. [8]. For a large number of particles, it becomes too time-consuming, since the algorithm needs about \(O(N^{2})\) operations. For this reason, we use a simplified approach and build the entire system from \(k^{2}\) randomly generated cells. The system is a square grid of \(k\) by \(k\) cells. Each cell is a trial, obtained by RSA [8] and containing \(M=N/k^{2}\) disks of radii \(r_{i}=R\,i^{-1/D}\), \(i=1,...,M\). Then the thermodynamic limit is emulated by \(k\rightarrow\infty\) with \(M\) and \(R/a\) being constant.
This scheme enables us to reduce the number of operations to \(O(M^{2}k^{2})=O(N^{2}/k^{2})\). It changes the spatial long-range correlations of the disk positions at distances bigger than \(s/k\), but the fractal range remains intact (see Sec. III.2 below).
Figure 1 (**Left**) shows a grid of \(5\times 5\) cells with each cell containing \(M=5000\) disks with \(D=1.5\) and \(a/R\simeq 3.4\times 10^{-3}\). For these control parameters, the packing fraction is about 0.95 and \(s/R\simeq 15\). Disks of the same radius are displayed in the same color: the largest disks are red, and the smallest are purple.
### Delaunay triangulation algorithm
For the DT algorithm, we use a smoother realization of the same power-law distribution as for the RSA grid
\[r_{i}=R\left(\frac{m}{m+i-1}\right)^{1/D},\text{ for }i=1,...,N. \tag{1}\]
The smallest radius \(a=R\big{[}m/(N+m-1)\big{]}^{1/D}\) slightly deviates from \(a=R(m/N)^{1/D}\), which is used in the RSA-based calculations at \(m=k^{2}\) (Sec. II.1). The value of the biggest radius \(R\) should be chosen to provide the densest packing in the thermodynamic limit. It is realized when the area of the embedding square is equal to the total area of the disks:
\[s^{2}/m=\pi R^{2}m^{2/D-1}\sum_{i=m}^{\infty}i^{-2/D}=\pi R^{2}m^{2/D-1}\zeta(2 /D,m), \tag{2}\]
where \(s\) is the edge length of the square, and \(\zeta(t,a)=\sum_{k=0}^{\infty}(a+k)^{-t}\) is the Hurwitz zeta function. Then the thermodynamic limit is \(m\to\infty\) and \(N/m=\text{const}\) and \(s^{2}/m\to\pi R^{2}D/(2-D)=\text{const}\) in full analogy with Sec. II.1 at \(m=k^{2}\). Note that the limit \(\lim_{m\to\infty}s^{2}/m=\pi R^{2}D/(2-D)\) is also obtained with the continuous power-law distribution as it should be.
Figure 2 compares the distributions of the radii used in the RSA grid and given by Eq. (1). We observe that the latter distribution is smoother, and both decay with the power law \(r^{-D}\), as they should.
To provide the compact packing of the disks within a finite area, the exponent of the distribution should be restricted to \(D<2\), since for \(D\geqslant 2\) the total area of the infinite set of disks becomes infinite. Disks are arranged into a square by placing them inside the square in the sequence from largest to smallest. As the set of already placed disks grows, the computational costs of checking for their collisions increase. Thus, we use the DT to select neighbours of a newly placed disk and thus reduce the number of the required calculations. The DT covers a convex hull of a given set of points by a set of triangles in such a way that no point lies inside the circumcircle of any triangle. There are very effective implementations of the triangulation that need only \(O(\log N)\) operations for both adding a new point in the triangulation and finding neighbours of the point. We use the code from the CGAL software package [13].
The algorithm consists of the following steps (a schematic code sketch is given after the list):
1. Positions of the first six disks are searched randomly over the entire square.
2. To place a new disk, we choose a random point for its center. The function _vertices_on_conflict_zone_boundary_ of the CGAL package is used to obtain the subset of already placed disks that are neighbours of the new disk and are most likely to intersect with it. If the randomly selected point is too close to its neighbours and collisions appear, the algorithm searches for an empty space inside the polygon formed by the neighbours of the selected point. To find an empty space, the new disk is bounced back with a shift vector that amounts to the sum of the vectors of all found overlappings. If no collisions appear, then we go to step 3. If a collision with the neighbours still exists, a new random point is chosen once more and step 2 is repeated. When the maximum number of unsuccessful attempts is reached, the algorithm is restarted from step 1.
3. Before placing a new disk, the algorithm does a final check for collisions with all already placed disks. This is because the selection of the neighbours with the DT might not work perfectly, although the probability of failure is very small.
4. The center of the newly placed disk is added to the triangulation set and the algorithm proceeds from the step 2.
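A schematic Python version of steps 2-4 is given below. It only illustrates the bounce-back logic: instead of CGAL's incremental Delaunay triangulation it uses a k-d tree neighbour query rebuilt at every insertion and a brute-force final check, so the favourable \(O(\log N)\) scaling of the actual implementation is not reproduced; all names in the sketch are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def pack_disks(radii, s, max_attempts=200, rng=None):
    """Place disks (radii sorted from largest to smallest) in [0, s]^2."""
    rng = np.random.default_rng() if rng is None else rng
    centers, placed = [], []
    for r in radii:
        for _ in range(max_attempts):
            c = rng.uniform(r, s - r, size=2)    # keep the disk inside the square
            if centers:
                tree = cKDTree(centers)          # stand-in for the triangulation
                shift = np.zeros(2)
                for j in tree.query_ball_point(c, r + placed[0]):
                    d = c - np.asarray(centers[j])
                    dist = np.linalg.norm(d)
                    overlap = r + placed[j] - dist
                    if overlap > 0 and dist > 0:  # bounce back (step 2)
                        shift += overlap * d / dist
                c = np.clip(c + shift, r, s - r)
                # final brute-force check against all placed disks (step 3)
                if any(np.linalg.norm(c - np.asarray(q)) < r + rq
                       for q, rq in zip(centers, placed)):
                    continue
            centers.append(c.tolist())            # step 4: accept the disk
            placed.append(r)
            break
    return np.array(centers), np.array(placed)
```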
Note that this algorithm does not place disks in a completely random way. Although each new disk is initially dropped on the plane at a random point, after that the algorithm simulates the disk repelling from its neighbours and the boundary of the square. In the case of \(m=1\) we place the first disk at the corner of the square; for \(m>1\) the first disk is placed at a random point. The larger the number of the disks and the packing fraction, the more effective the suggested algorithm is in comparison to a simple random placement of the disks such as RSA. The algorithm allows us to limit the search for empty space to the neighborhood of any added disk, which makes it especially effective at high packing fractions. The increase in performance is approximately proportional to \(1/(1-F)\), where \(F\) is the current filling rate.
Figure 1 (**Right**) shows one trial of a set of \(N=125000\) disks generated using the DT-based algorithm with the distribution (1) at the same control parameters as described in Sec. II.1. The color coding is the same as in Figure 1 (**Left**). For the RSA grid, the spatial disk distribution is more homogeneous by its construction. However, the spatial distributions of RSA and DT are almost indistinguishable on scales of order \(r\lesssim s/k\).
## III The structure factor
We define the structure factor through the density _fluctuations_ over the background of the density mean value
\[S(q)\equiv\langle\delta\rho_{\mathbf{q}}\delta\rho_{-\mathbf{q}}\rangle_{\hat{q}}/N, \tag{3}\]
where the Fourier components of the density fluctuations are given by
\[\delta\rho_{\mathbf{q}}=\sum_{j=1}^{N}e^{-i\mathbf{q}\cdot\mathbf{r}_{j}}-\frac{N}{s^{2}}\int d\bm{r}\,e^{-i\mathbf{q}\cdot\mathbf{r}}. \tag{4}\]
Here the brackets \(\langle\cdots\rangle_{\hat{q}}\) stand for the average over all directions of the unit vector \(\hat{q}\) along \(\mathbf{q}\), and the integral is taken over the area of the embedding square. Note that in the previous papers [6; 8], the structure factor was defined without subtracting the scattering amplitude of the background. Such a definition leads to the appearance of a spike of order \(N\) in the range \(q\lesssim 2\pi/s\), which is inconvenient for studying the thermodynamic limit. The definition (3) enables us to exclude this spike.
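A direct numerical evaluation of the definition (3)-(4) can be sketched as follows. This Python snippet is ours (a brute-force \(O(N)\) sum per wave vector, not an optimized implementation) and assumes the disk centers lie in \([0,s]^{2}\):

```python
import numpy as np

def structure_factor(centers, s, q_mags, n_dirs=64):
    """S(q) of Eq. (3): density fluctuations with the square's scattering
    amplitude subtracted, averaged over n_dirs directions of q-hat.
    centers: (N, 2) array of disk centers inside [0, s] x [0, s]."""
    pos = centers - s / 2.0            # put the origin at the square's center
    N = len(pos)
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    S = np.empty(len(q_mags))
    for k, q in enumerate(q_mags):
        qv = q * dirs                  # (n_dirs, 2)
        rho_q = np.exp(-1j * (qv @ pos.T)).sum(axis=1)
        # background term: (N/s^2) * integral of exp(-i q.r) over the square
        bg = N * np.sinc(qv[:, 0] * s / (2.0 * np.pi)) \
               * np.sinc(qv[:, 1] * s / (2.0 * np.pi))
        S[k] = np.mean(np.abs(rho_q - bg) ** 2) / N
    return S
```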
Figure 3 shows the structure factor (3) for the dense packings depicted in Figure 1. Each curve is calculated numerically with 20 trials. We observe that the errors are almost negligible from the border of the fractal range \(q\gtrsim 2\pi/R\), see the discussion in Sec. III.2 below. This implies that at the given control parameters \(N\), \(D\), and \(R\), the structure factor is independent of a specific packing configuration in the limit of high packing fraction. However, when \(q\lesssim 2\pi/R\) the errors are quite pronounced, especially for the DT-based curves.
In order to illustrate graphically how the structure factor is obtained, Figure 3 also shows the structure factor of the densities, given by Eq. (3) but without subtracting the scattering amplitude of the background square in Eq. (4) (black curve). We observe that for \(q\lesssim 2\pi/s\) the dominant contribution comes from the embedding square (green curve), while for \(q\gtrsim 2\pi/R\), the main contribution arises from the structure factor of a single cell. The crossover point lies somewhere in between. The contribution of a single cell was studied in detail in the previous paper [8].
A quantitative analysis of the structure factor is performed after smearing the curves in Figure 3 with a log-normal distribution function, as described in Ref. [8]. This procedure smooths out the maxima and minima and provides a reliable way to estimate the power-law exponents by a
Figure 1: (**Left**) A grid of \(5\times 5\) cells; each one contains \(M=5000\) disks, placed with the RSA algorithm. The total number of disks is \(N=125000\). (**Right**) A set of \(N=125000\) disks placed by the DT algorithm. In both cases, the disks are compactly packed and randomly distributed, and their radii follow the power-law distribution with the exponent \(D=1.5\). Disks with the same radius are displayed in the same color.
Figure 2: The cumulative power-law distribution in units of the largest radius \(R\) for the two structures shown in Figure 1. The vertical dashed line (blue) denotes the minimum radius \(a\). The horizontal dashed line (green) denotes the number of cells in the RSA grid.
linear fit in a double-logarithmic plot. Moreover, the smoothing is a standard procedure in analyzing small-angle scattering data, since there are three contributions to the smearing of an ideal curve: the angular divergence of the beam, the finite resolution of the detector, and the polychromatic nature of the beam [14]. The results of smearing are shown in Figure 4 for various numbers of disks \(N\). For the RSA grid, the chosen values of \(N\) correspond to different grid sizes (\(1\times 1\) at \(N=5000\), \(2\times 2\) at \(N=20000\), etc.). In particular, in Figure 4 the violet curve at \(N=125000\) is the smeared version of the blue curve in Figure 3.
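A log-normal smearing of this kind amounts to a Gaussian convolution in \(\ln q\). The sketch below is our minimal version; the relative kernel width is an illustrative parameter, not the instrumental resolution used in Ref. [8]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smear_lognormal(q, S, rel_width=0.2, n=2048):
    """Smear S(q) with a log-normal kernel, i.e. a Gaussian convolution on
    a uniform ln(q) grid. rel_width is the kernel width in ln(q)."""
    lq = np.log(q)
    lq_grid = np.linspace(lq.min(), lq.max(), n)
    S_grid = np.interp(lq_grid, lq, S)
    sigma_bins = rel_width / (lq_grid[1] - lq_grid[0])  # width in grid units
    return np.exp(lq_grid), gaussian_filter1d(S_grid, sigma_bins)
```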
### Small momenta: finite-size effects
We choose the origin of coordinates in the center of the square. Expanding Eq. (4) at small momentum, we obtain: \(\delta\rho_{\mathbf{q}}=-iN\mathbf{q}\cdot\mathbf{r}_{\text{c}}+O(q^{2})\) with \(\mathbf{r}_{\text{c}}=\frac{1}{N}\sum_{j=1}^{N}\mathbf{r}_{j}\) being the center-of-mass position of the total set of the points. By substituting this formula into the definition (3) and using the identity \(\langle(\mathbf{q}\cdot\mathbf{r}_{\text{c}})^{2}\rangle_{\hat{q}}=q^{2}r_{\text{c}}^ {2}/2\) in 2D, we are left with the structure factor at \(q\lesssim 2\pi/s\)
\[S(q)\simeq\frac{N}{2}q^{2}r_{\text{c}}^{2}. \tag{5}\]
The vector \(\mathbf{r}_{\text{c}}\) is non-zero only due to the fluctuations of the density. When the distribution of the points \(\mathbf{r}_{j}\) over the square is random and uniform then its variance \(\left\langle\left(\sum_{j=1}^{N}\mathbf{r}_{j}\right)\cdot\left(\sum_{j=1}^{N}\bm {r}_{j}\right)\right\rangle\) is proportional to \(N\). It follows that the structure factor (5) is independent of \(N\) in the thermodynamic limit as it should be. Nevertheless, in the thermodynamic limit, the range \(q\lesssim 2\pi/s\) shrinks to zero, and, hence, the formula (5) describes a finite-size effect anyway.
Figure 4 illustrates these features. For the RSA grid, the structure factor remains practically unchanged with increasing \(N\) when \(q\lesssim 2\pi/s\). For the DT-based algorithm, the structure factor approaches the saturation curve with increasing \(N\). For \(N>80000\), the corresponding scattering curves (compare the cyan and violet ones) are practically indistinguishable in this region. Note that all the curves fall within the error bars when \(q\lesssim 2\pi/s\).
Figure 4 also shows that the power-law exponent at small momenta is exactly 2, as predicted by Eq. (5). Moreover, the coefficient of \(q^{2}\) in Eq. (5) is exactly recovered from the corresponding curves (see the insets of Figure 3).
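The coefficient in Eq. (5) follows directly from the particle positions; a two-line check (ours) reads:

```python
import numpy as np

def small_q_coefficient(centers, s):
    """Coefficient of q**2 in Eq. (5): (N/2)|r_c|^2, with r_c the center
    of mass of the disk centers measured from the square's center."""
    N = len(centers)
    r_c = (centers - s / 2.0).mean(axis=0)
    return 0.5 * N * (r_c @ r_c)

# compare with S(q)/q**2 from the structure-factor sketch at q << 2*pi/s
```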
### High momenta: Fractal region and beyond
The fractal range in reciprocal space is given by [8]
\[2\pi/R\lesssim q\ll 2\pi/a. \tag{6}\]
The lower and upper borders of the fractal region are related to the largest and smallest radii in the configuration, respectively. The left border of this range is indicated by a vertical dotted line in Figure 4(a) and 4(b), as well as in Figure 5(a) and 5(b). In the fractal range the structure factor decays as \(q^{-\alpha}\). A linear fit in the range \(22\lesssim qR\lesssim 131\) gives the value \(\alpha=1.508\pm 0.009\) for the power-law exponent, which confirms numerically that \(\alpha=D\).
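The quoted exponent can be reproduced by a least-squares line in the double-logarithmic plot; a minimal sketch (ours, with the fit window passed in as the text's \(qR\) range):

```python
import numpy as np

def fractal_exponent(qR, S, lo=22.0, hi=131.0):
    """Power-law exponent alpha from a linear fit of log S vs log q
    over the fractal window lo < qR < hi."""
    m = (qR > lo) & (qR < hi)
    slope, _ = np.polyfit(np.log(qR[m]), np.log(S[m]), 1)
    return -slope            # S ~ q**(-alpha)
```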
Beyond the fractal region, when \(q\gtrsim 2\pi/a\), the structure factor attains its asymptotic value \(S(q)\simeq 1\), as indicated by the horizontal dotted line in Figure 4(a) and 4(b), as well as in Figure 3(a), 3(b), 5(a) and 5(b). The extended region where the structure factor is smaller than one appears due to the finite value of the smallest radius \(a\) (see the detailed discussion in Ref. [8]).
Figure 3: The structure factor of the density fluctuations (blue) [i.e., Eq. (3) with Fourier components (4)] vs. the wave vector in units of the inverse radius \(1/R\) of the largest disk. The structure factor of the densities (black) [i.e., Eq. (3) with Fourier components (4) but without the second term]. The scattering intensity from the embedding square is given in green. (a) RSA. (b) DT. In both cases, the red bars represent errors (the standard deviations for 20 trials). The number of disks \(N=125000\) and \(s/R\simeq 15\). The corresponding structures are shown in Figure 1. The inset figures show the exact agreement between a single trial (blue) and the analytical relation (5) (orange) at small values of the wave vector \(q\lesssim 2\pi/s\).
### Random-sequential-addition grid _vs._ Delaunay triangulation algorithm
A direct comparison of the structure factor (3) for the RSA grid and DT-based algorithms is given in Figure 5 for the non-smeared (a) and smeared (b) curves. The results show an excellent agreement between the two curves in the fractal region and beyond. This indicates that the RSA and DT packing algorithms yield the same fractal exponent, which coincides with the power-law exponent of the size distribution. This is in contrast to jammed packings generated using discrete element method simulations: it has been reported that the structure-factor exponent is independent of the exponent of the power-law size distribution in three dimensions and only slightly depends on it in two dimensions [15].
At small momenta \(q\lesssim 2\pi/s\), the two curves have very distinct values of the coefficient of \(q^{2}\) in Eq. (5). This is related to the fluctuations of the center-of-mass position of the disk centers. These fluctuations are artificially suppressed by the method of construction of the RSA grid (see Sec. II.1) and, as a consequence, the coefficient is significantly reduced.
## IV Conclusions
In this work, the structure factor \(S(q)\) [Eq. (3)] is used to study, in the thermodynamic limit, the correlation properties of densely packed disks with a power-law distribution of their sizes. The thermodynamic limit is simulated with different methods of packing: the random-sequential-addition grid and the Delaunay-triangulation-based algorithms. The spike in the structure factor at small momenta, which is proportional to the total number of disks, is excluded by subtracting the scattering amplitude of the embedding square, see Sec. III. Apart from the formal reasons, this definition can be quite appropriate from the physical point of view, for instance, in studying internal structures by small-angle neutron scattering with the contrast variation method.
The main result of the paper is that in the thermodynamic limit, the fractal range in \(S(q)\) is still observable for both
Figure 4: The smeared structure factor (3) vs. the wave vector (in units of the inverse radius \(1/R\) of the largest disk) at various numbers of particles \(N\). (**a**) RSA. (**b**) DT. In both cases the red bars represent the errors. The vertical dotted lines indicate the beginning of the fractal range.
Figure 5: (**a**) The structure factors (3) vs. the wave vector (in units of the inverse radius \(1/R\) of the largest disk) for the RSA grid and DT at \(N=80000\). (**b**) The corresponding smeared structure factors.
algorithms: its borders (6) are determined by the range of the size distribution, and the fractal exponent coincides with the exponent of the power-law size distribution [8].
The finite-size effects are studied as well. It is shown (see Sec. III.1) that in the range of small momenta, \(S(q)\propto q^{2}\) and is independent of the number of particles, although this range shrinks in the thermodynamic limit. We derive the expression for the coefficient of \(q^{2}\), which is related to the fluctuations of the center of mass of the disk positions.
The obtained results can be used for analyzing experimental small-angle scattering data from densely packed objects whose centers have a much higher scattering length density compared to their surrounding shells.
|
2303.03243 | In-Stabilities of massive white dwarfs in modified gravity | Super-Chandrasekhar white dwarfs are a timely topic in the last years in the
scientific community due to its connection to supernovae type Ia (SN Ia). Some
early studies tackled the possibility of white dwarfs surpassing the
Chandrasekhar limit by means of a magnetic field. More recently, modified
gravity has been highlighted as the reason for these stars to surpass the
Chandrasekhar limit and becoming a supernova progenitor. However, in general
simple assumptions are considered for the stellar structure and equation of
state (EoS), which can lead to unreliable conclusions. In this work, we want to
be rigorous and consider a realistic EoS to describe the white dwarfs in
general relativity and modified gravity, taking into account nuclear
instabilities that limit the maximum mass. | Ronaldo V. Lobato, Geanderson A. Carvalho, Neelima G. Kelkar, Marek Nowakowski | 2023-03-06T16:01:32Z | http://arxiv.org/abs/2303.03243v1 | # In-Stabilities of massive white dwarfs in modified gravity+
###### Abstract
Super-Chandrasekhar white dwarfs have been a timely topic in recent years in the scientific community due to their connection to type Ia supernovae (SN Ia). Some early studies tackled the possibility of white dwarfs surpassing the Chandrasekhar limit by means of a magnetic field. More recently, modified gravity has been highlighted as the reason for these stars to surpass the Chandrasekhar limit and become supernova progenitors. However, simple assumptions are generally made for the stellar structure and equation of state (EoS), which can lead to unreliable conclusions. In this work we want to be rigorous and consider a realistic EoS to describe white dwarfs in general relativity and modified gravity, taking into account the nuclear instabilities that limit the maximum mass.
Modified gravity, Super-Chandrasekhar white dwarfs.
## 1 Introduction
White dwarfs (WD) are stars that can reach densities as high as \(\sim 10^{11}\) g/cm\({}^{3}\) in their interiors. Observed magnetic fields of WDs are of the order of \(\sim 10^{9}\) G, while the masses are limited by the so-called Chandrasekhar mass limit (maximum stable mass \(M_{\rm Ch}=1.44\)\(M_{\odot}\), with \(M_{\odot}\) representing the Solar mass). The radii of WDs are of order \(10^{4}\) km, which renders a surface gravity, \(\log_{10}g\), in the range \(8-10\). These extreme properties make WDs a laboratory for tests of strong-gravity regimes, thus motivating their application to the study of modified gravity theories. In particular, WDs can help to constrain the parameter space of the new theories.
On the other hand, some peculiar, overluminous type Ia supernovae have been linked to the possible existence of super-Chandrasekhar white dwarfs. The origin of type Ia supernovae is understood as the collapse of either a WD binary or a massive, near-Chandrasekhar-mass accreting WD. However, the possible progenitor systems and the mechanism leading to such massive WDs remain largely unknown. Several scenarios have been studied: WDs with rotation [1, 2, 3, 4, 5, 6], within modified theories [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], with magnetic and electric fields [20, 21, 22, 23, 24], with temperature [25, 26, 27, 28], under the generalized uncertainty principle, and in Einstein-\(\Lambda\) gravity [29].
## 2 Hydrostatic equilibrium
To model relativistic stars, one needs the general relativity equations
\[G^{\mu\nu}\equiv R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R=8\pi T^{\mu\nu}. \tag{1}\]
For a perfect-fluid energy-momentum tensor and a static spherically symmetric spacetime, the Einstein field equations lead to the hydrostatic equilibrium equation, the Tolman-Oppenheimer-Volkoff (T.O.V.) equation [30, 31]. In natural units, this equation reads
\[p^{\prime}=-(\rho+p)\frac{4\pi pr+m/r^{2}}{(1-2m/r)}, \tag{2}\]
where the prime indicates the radial derivative and \(m\) is the gravitational mass enclosed within the surface of radius \(r\), with \(m^{\prime}=4\pi\rho r^{2}\). To solve this system, one needs an EoS \(p(\rho)\) and the boundary conditions
\[m(r)|_{r=0}=0, \tag{3}\]
where \(m(r)\simeq\frac{4}{3}\pi r^{3}\rho(0)\) for small \(r\), and
\[p(r)|_{r=0}=p_{c}\quad\text{and}\quad\rho(r)|_{r=0}=\rho_{c}, \tag{4}\]
where \(p_{c}\) and \(\rho_{c}\) are the pressure and density at the center of the star. The numerical integration of Eq. (2) follows the pressure decrease as one moves away from the center, and it is stopped when the condition \(p(r)|_{r=R}=0\) is reached at the stellar surface \(r=R\). The integration of the density profile
\[M(R)\equiv 4\pi\int_{0}^{R}r^{2}\rho(r)dr \tag{5}\]
provides the total gravitational mass of the star \(M\). The resulting M-R relation can be compared to data from astronomical observations. Once the EoS is provided, the global properties of the stars can be obtained.
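As an illustration of this procedure, a minimal Python sketch is given below. It is not the code used in this work: geometrized units \(G=c=1\) are assumed, the EoS is supplied by the caller as the pair of functions `eos_p` and `eos_rho`, and the starting radius and stopping criterion are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_mass_radius(rho_c, eos_p, eos_rho, r_max=1e10):
    """Integrate the TOV equation, Eq. (2), outward until p -> 0.
    Returns the stellar radius R and gravitational mass M (G = c = 1)."""
    p_c = eos_p(rho_c)

    def rhs(r, y):
        p, m = y
        rho = eos_rho(p)
        dp = -(rho + p) * (4 * np.pi * p * r + m / r**2) / (1 - 2 * m / r)
        return [dp, 4 * np.pi * rho * r**2]

    surface = lambda r, y: y[0] - 1e-12 * p_c   # stop near zero pressure
    surface.terminal = True

    r0 = 1e-3                                    # small starting radius
    m0 = (4.0 / 3.0) * np.pi * r0**3 * rho_c     # boundary condition (3)
    sol = solve_ivp(rhs, (r0, r_max), [p_c, m0], events=surface,
                    rtol=1e-8, atol=1e-30)
    return sol.t[-1], sol.y[1, -1]
```

Scanning this over a grid of central densities yields the \(M\)-\(R\) and \(M\)-\(\rho_{c}\) sequences discussed below.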
When one considers a modification of the gravity theory, the field equations are changed. Generally, a static spherically symmetric spacetime and a perfect-fluid energy-momentum tensor are still used. In this case, one obtains T.O.V.-like equations as the hydrostatic equilibrium equations that model the relativistic stars.
For the specific theory called \(f(\texttt{R},\texttt{L}_{\texttt{m}})\) gravity, which we have considered before [13], where \(f(\texttt{R},\texttt{L}_{\texttt{m}})=\texttt{R}/2+\texttt{L}_{\texttt{m}}+ \sigma\texttt{RL}_{\texttt{m}}\), the equations are:
\[\alpha^{\prime}(p+\rho)+2z=0, \tag{6a}\] \[p^{\prime}-z=0,\] (6b) \[\Bigg{[}\bigg{(}2r^{2}\rho e^{\beta}+\big{(}2\texttt{R}r^{2}\rho e ^{\beta}+3r^{2}z\alpha^{\prime}+6pr\beta^{\prime}+2\big{(}2\texttt{R}pr^{2}+3 p\big{)}e^{\beta}-6p\big{)}\sigma\] \[\qquad\qquad-\big{(}(\texttt{R}-3p)r^{2}+3\big{)}\,e^{\beta}-3r \beta^{\prime}+3\bigg{)}e^{-\beta}\Bigg{]}(3r^{2})^{-1}=0, \tag{6c}\]
\[\Bigg{[}\Bigg{(}r^{2}\rho e^{\beta}+(\texttt{R}r^{2}\rho e^{\beta}+3 \,r^{2}z\beta^{\prime}+6\,pr\alpha^{\prime}-6\,r^{2}z^{\prime}-(\texttt{R}pr^{2}+6 \,p)e^{\beta}+6\,p)\sigma\] \[+\big{(}\texttt{R}r^{2}+3\big{)}e^{\beta}-3\,r\alpha^{\prime}-3 \bigg{)}e^{-\beta}\Bigg{]}(3\,r^{2})^{-1}=0. \tag{6d}\]
where, \(\alpha\) and \(\beta\) are the metric potentials depending on the radial coordinate \(r\) and \(z\) is an auxiliary variable, \(z=p^{\prime}\). For complete details, see Refs. [32, 33]. Once the EoS is defined, the global properties, such as mass and radius, can be found from (6).
## 3 Stability criteria for critical mass
The critical mass of white dwarfs has been known for a long time: when Stoner [34] considered special relativity to describe stars obeying Fermi-Dirac statistics, the mass was established as
\[M_{\rm crit}\approx K\frac{M_{\rm P}^{3}}{\mu^{2}m_{n}^{2}}, \tag{7}\]
where \(M_{\rm P}\) is the Planck mass, \(m_{n}\) is the neutron mass and \(\mu\) the average molecular weight \(A/Z\). The constant \(K\) was determined as \(K=3.72\); later, in the works of Chandrasekhar [35, 36], Landau [37] and Gamow [38], the value went to \(K=3.09\) using the Lane-Emden equations. To reach this value, the simplest EoS was used, which considers a model of a non-interacting relativistic Fermi gas of electrons. Although this EoS can describe WDs very well, there were improvements such as the one by Hamada-Salpeter (HS), which accounts for corrections due to electrostatic energy, Thomas-Fermi deviations, exchange energy and spin-spin interactions [39, 40]. However, only the electrostatic corrections were found to be non-negligible. The Chandrasekhar EoS depends on \(\mu\), while the HS EoS, besides depending on \(\mu\), also depends on \(Z\) and hence on the nuclear composition of the star, which slightly decreases the Chandrasekhar limit, as one can see in Fig. 1a, where we show the mass-radius relation for white dwarfs considering the HS EoS for different stellar compositions.
The electron pressure in the HS EoS is lowered by the electrostatic attraction among the electrons and ions. Further developments were made for the cases where heavy elements are important, in the Thomas-Fermi [41] and Feynman-Metropolis-Teller [42] models. The role of the electron-ion interaction started to be considered in these models in more detail, i.e., with the inclusion of nuclear threshold corrections such as inverse \(\beta\)-decay and pycnonuclear reactions [43, 44], leading to the study of low-mass neutron stars that could be generated by massive white dwarfs made of oxygen-neon-magnesium [45, 46], i.e., massive WDs near the Chandrasekhar limit [47].
### Gravitational instability
When considering perturbations around the static equilibrium, one can start from the perturbed Euler equations,
\[\Delta\left(\frac{dv^{i}}{dt}+\frac{1}{\rho}\nabla_{i}p+\nabla_{i}\Phi\right) =0, \tag{8}\]
where \(\Phi\) is the gravitational potential. Note that the unperturbed equations with \(\vec{v}=0\) (static configuration) lead to the equation of hydrostatic equilibrium. One can assume \(\Delta\vec{v}=\mathrm{d}\xi/\mathrm{d}t\), where \(\xi\) are perturbations of the form \(\xi=\xi(\vec{x},t)\), and use the commutation relations between \(\Delta\) and the derivatives. From
Eq. (8), one arrives at
\[\rho\frac{\mathrm{d}^{2}\xi^{i}}{\mathrm{d}t^{2}}-\frac{\Delta\rho}{\rho}\nabla_{ i}p+\nabla_{i}\Delta P+\rho\nabla_{i}\Delta\Phi=0, \tag{9}\]
which gives the dynamical equation for the perturbations. If the perturbations are now restricted to the form \(\xi(\vec{x},t)=\xi(\vec{x})\mathrm{e}^{i\omega t}\), the problem becomes a Sturm-Liouville eigenvalue problem for \(\omega^{2}\). The eigenvalues form an infinite and discrete sequence \(\omega_{0}^{2}<\omega_{1}^{2}<\omega_{2}^{2}<\cdots\), where \(\omega_{0}\) is the oscillation frequency of the fundamental mode. The eigenvalues are expected to be real, so \(\omega^{2}<0\) corresponds to unstable oscillation modes. One interesting case is that of radial perturbations, \(\xi(\vec{x})=\xi(r)\). In this situation, one can show that the oscillation frequency becomes
\[\omega^{2}\propto 3\bar{\Gamma}_{1}-4, \tag{10}\]
where \(\bar{\Gamma}_{1}\) is the pressure-average adiabatic index, i.e.,
\[\bar{\Gamma}_{1}\equiv\frac{\int_{0}^{R}\Gamma_{1}pr^{2}\mathrm{d}r}{\int_{0} ^{R}pr^{2}\mathrm{d}r}, \tag{11}\]
with \(\Gamma_{1}\) the adiabatic index governing the perturbations, i.e., \(\Delta P/P=\Gamma_{1}\Delta\rho/\rho\).
When a one-parameter sequence of equilibrium stars is constructed with an EoS by considering different central densities, the critical point \(dE/d\rho_{c}=0\) also gives the onset of instability. In particular, using the variational principle and an adiabatic EoS \(p=K\rho^{\Gamma}\), it is possible to show that [48]
\[\frac{\partial M}{\partial\rho_{c}}\propto\Gamma-\frac{4}{3}. \tag{12}\]
From equation (10), \(\Gamma<4/3\) gives a negative \(\omega^{2}\), thus leading to configurations that are unstable under radial perturbations. On the other hand, if \(\Gamma>4/3\), positive values of \(\omega^{2}\) are achieved, now giving a region of stable configurations. Combined with equation (12), this translates into
\[\frac{\partial M}{\partial\rho_{c}}>0,\quad\text{for stable equilibrium configurations} \tag{13}\]
and
\[\frac{\partial M}{\partial\rho_{c}}<0,\quad\text{for unstable equilibrium configurations}. \tag{14}\]
So, if we have only one critical point in a \(M\)-\(\rho_{c}\) equilibrium sequence, it marks the onset of instability under radial oscillations, defining a maximum mass allowed by gravitation. In general, the works that have studied white dwarfs in modified gravity applied only this gravitational stability criterion and, in addition, used a simplistic Chandrasekhar EoS. When the improvements in the EoS discussed previously are considered, the maximum mass decreases. Moreover, the onset of nuclear instabilities is often reached before the gravitational instability, which limits the maximum mass even in GR [49]. That is also important for modified gravity, i.e., the onset of nuclear instability should be taken into account since it sets in before the gravitational one.
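Both criteria can be applied to a computed sequence in a few lines. The sketch below is ours (not from the cited references): it assumes \(M(\rho_{c})\) is sampled on an ascending grid of central densities and that the electron-capture threshold density is taken from Table 1 below.

```python
import numpy as np

def maximum_stable_mass(rho_c, M, rho_ec):
    """Truncate an equilibrium sequence M(rho_c) at the first instability:
    the turning point dM/drho_c < 0 (Eqs. 13-14, gravitational) or the
    electron-capture density threshold rho_ec (nuclear)."""
    dM = np.gradient(M, rho_c)
    turn = int(np.argmax(dM < 0)) if np.any(dM < 0) else len(M)
    nuc = int(np.searchsorted(rho_c, rho_ec))   # rho_c sorted ascending
    stop = min(turn, nuc)
    cause = "nuclear" if nuc < turn else "gravitational"
    return M[:max(stop, 1)].max(), cause
```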
### Nuclear instabilities
#### 3.2.1 Inverse \(\beta\)-decay
The first cutoff to the Chandrasekhar equation of state due to nuclear reactions comes from the effects of inverse \(\beta\)-decay, which reduces the maximum mass \(M\) of the white dwarfs [50]. That
occurs through electron capture when the electron Fermi energy is larger than the mass difference between the initial and final states. As the star evolves to higher density, the matter is compressed and the electrons combine with nuclei, generating another nucleus and a neutrino [51],
\[{}^{A}_{Z}X+e^{-}\rightarrow{}^{A}_{Z-1}Y+\nu_{e}. \tag{15}\]
The electron capture leads to a global instability of the star that can induce a core collapse of the white dwarf; whether the star undergoes collapse depends on the competition between electron capture and the pycnonuclear reactions. The instability of a pure \({}^{12}\)C star, considering general relativistic effects, has been calculated [52], leading to a maximum mass of \(M\approx 1.366M_{\odot}\). It was computed that the maximum Fermi energy of the electrons is 12.15 MeV; for a mixed \({}^{12}\)C/\({}^{16}\)O WD, the configuration becomes unstable when the \({}^{16}\)O concentration exceeds 0.06, leading to a maximum mass of \(M\approx 1.365M_{\odot}\).
For the reaction to occur, the Gibbs energy per nucleon of the initial nucleus should be higher than that of the nucleus produced, so the condition below should be satisfied:
\[g(p,A,Z)\geq g(p,A,Z-1). \tag{16}\]
For a detailed discussion about neutronization, see Sec. V. of Ref. [51].
#### 3.2.2 Pycnonuclear reaction
A second cutoff in the maximum mass of the WD EoS is due to pycnonuclear reactions. The screening of the Coulomb potential by the electrons in the lattice that composes the white dwarf lowers the potential barrier between the nuclei, making it easier to cross. At the higher densities found in the cores of white dwarfs, the oscillations of the nuclei in the lattice produce pycnonuclear reactions, which can be written as [49]
\[{}^{A}_{Z}X+{}^{A}_{Z}X\rightarrow{}^{2A}_{2Z}Y. \tag{17}\]
As the elements fuse, the pressure threshold is reduced, since the resulting nucleus has a lower electron capture threshold [53]. As pointed out in Ref. [49], the rates at which the pycnonuclear reactions happen are very uncertain. The threshold is defined by the electron capture, which is lower for \({}^{2A}_{2Z}Y\) than for \({}^{A}_{Z}X\).
## 4 Results and discussions
Considering the Hamada-Salpeter EoS [40] for \({}^{4}\)He, \({}^{12}\)C, \({}^{16}\)O, \({}^{20}\)Ne, \({}^{24}\)Mg, \({}^{32}\)S, \({}^{56}\)Fe and using the mass density thresholds for electron capture of Ref. [51] (see Table 1), we have constructed equilibrium stellar sequences in general relativity and in the \(f(\texttt{R},\texttt{L}_{\texttt{m}})\) theory of gravity to explore the maximum mass allowed for the stars.
In Fig. 1a, we present the mass-radius relation for white dwarfs considering the HS EoS for different stellar compositions within general relativity. Pink triangles indicate the onset of the gravitational instability (maximum mass point) and black stars mark the onset of nuclear instabilities. The maximum mass is limited by the central density; therefore, one needs to analyze it in a plot of mass vs. central energy density.
In Fig. 1b, we show the behavior of the star's mass against the central energy density within general relativity. Pink triangles indicate the onset of the gravitational instability. From those points to the right, the stellar mass decreases with increasing \(\rho_{c}\), and thus this region is unstable under radial oscillations. Additionally, black stars mark the onset of nuclear instabilities.
\begin{table}
\begin{tabular}{l l} \hline \({}^{A}_{Z}X\) & \(\rho\) (g cm\({}^{-3}\)) \\ \hline \hline \({}^{4}\)He & \(1.41\times 10^{11}\) \\ \({}^{12}\)C & \(4.16\times 10^{10}\) \\ \({}^{16}\)O & \(2.06\times 10^{10}\) \\ \({}^{20}\)Ne & \(6.82\times 10^{9}\) \\ \({}^{24}\)Mg & \(3.52\times 10^{9}\) \\ \({}^{32}\)S & \(1.69\times 10^{8}\) \\ \({}^{56}\)Fe & \(1.38\times 10^{9}\) \\ \hline \end{tabular}
\end{table}
Table 1: Pressure and density thresholds at which matter becomes unstable for the elements considered. The pressure values are taken from Ref. [51]. Considering the threshold pressure within the Hamada-Salpeter EoS, we obtain the corresponding density threshold.
Figure 1: White dwarfs with different EoS within general relativity.
From these points to the right side of the sequences, stars are unstable due to electron capture reactions. As one can see, for light elements, the gravitational instability limits the maximum mass of the star before the electron capture reactions can occur. However, for elements heavier than oxygen, the electron capture reactions take place before the maximum mass point is reached. As a result, the nuclear instabilities are one of the main factors restricting the maximum stable mass.
In Fig. 2, we show the mass-radius relationship and the sequence of stellar masses against the central energy density within \(f(\texttt{R},\texttt{L}_{\texttt{m}})\) gravity for white dwarfs composed of \({}^{4}\)He. We have considered four values for the theory's parameter. The values are: 0.00, 0.05, 0.10 and 0.50 km\({}^{2}\). For \(\sigma=0.00\) the theory recovers the general relativity results.
In Fig. 2(b), we show the stellar masses against the central energy density for the element \({}^{4}\)He. We can see an increase in the masses with increasing values of \(\sigma\). One can observe that when \(\sigma\neq 0\) the stability criterion is not applicable and the gravitational instability disappears, i.e., the \(dM/d\rho_{c}<0\) instability criterion is never reached. Such a behavior could in principle imply
a white dwarf with an arbitrarily large mass, which is an unrealistic result given observational data. In this case, what constrains the maximum mass is the electron capture threshold, marked by black stars.
In Fig. 3, we show the mass-radius and mass-central density relations for \({}^{16}\)O WDs. For \({}^{16}\)O, the density threshold for nuclear instabilities is smaller, which means that the maximum stable mass is strongly constrained by pycnonuclear reactions. In this case, we can see that the maximum mass becomes around \(1.6~{}M_{\odot}\).
In Fig. 4(b), the element \({}^{56}\)Fe is considered in the stellar masses vs. central energy density sequence. As in the previous case, increasing the theory's parameter also leads to an enhancement of the maximum masses. However, as the density threshold for electron capture in \({}^{56}\)Fe stars is remarkably smaller, the effects of the modified gravity theory on the mass enhancement are almost negligible.
Hence, the density threshold for electron capture cannot be disregarded when dealing with modified theories of gravity and, in particular, it drastically reduces the maximum stable mass. This is important in the context of modified theories of gravity being used to generate high stellar masses. Once there is a limit in the density regime, it must be respected, or one will obtain misleading results.
## 5 Concluding remarks
We have considered the stability of white dwarfs within the \(\mathtt{f(R,L_{m})}\) theory of gravity. We found that the standard gravitational Chandrasekhar limit changes as the theory's parameter increases. However, instead of considering only gravitational instabilities to determine the maximum stable masses, the nuclear instabilities must also be included, which leads to a remarkable decrease of the maximum masses within modified theories.
Figure 4: \({}^{56}\)Fe white dwarfs within \(f(\mathtt{R,L_{m}})\) gravity.
## References
* [1] K. Boshkayev, J. Rueda and R. Ruffini, _ON THE MAXIMUM MASS OF GENERAL RELATIVISTIC UNIFORMLY ROTATING WHITE DWARFS_, _Int. J. Mod. Phys. E_**20** (2011) 136.
* [2] L. Becerra, K. Boshkayev, J.A. Rueda and R. Ruffini, _Time evolution of rotating and magnetized white dwarf stars_, _Mon. Not. R. Astron. Soc._**487** (2019) 812.
* [3] D.A. Terrero, D.M. Paret and A.P. Martinez, _Magnetized and rotating white dwarfs within Hartle's formalism in general relativity_, _Astron. Nachr._**338** (2017) 1056.
* [4] K. Boshkayev, L. Izzo, J.A.R. Hernandez and R. Ruffini, _SGR 0418+5729, Swift J1822.3-1606, and 1E 2259+586 as massive, fast-rotating, highly magnetized white dwarfs_, _Astron. Astrophys._**555** (2013) A151.
* [5] K. Boshkayev, _Equilibrium Configurations of Rotating White Dwarfs at Finite Temperatures_, _Astron. Rep._**62** (2018) 847.
* [6] K. Boshkayev and H. Quevedo, _Non-validity of I-Love-Q Relations for Hot White Dwarf Stars_, _Mon. Not. R. Astron. Soc._**478** (2018) 1893.
* [7] A. Wojnar, _White dwarf stars in modified gravity_, _Int. J. Geom. Methods Mod. Phys._**18** (2021) 2140006.
* [8] G.J. Olmo, D. Rubiera-Garcia and A. Wojnar, _Stellar structure models in modified theories of gravity: Lessons and challenges_, _Phys. Rep._**876** (2020) 1.
* [9] S.K. Maurya et al., _Gravitational decoupling minimal geometric deformation model in modified f(R,T) gravity theory_, _Phys. Dark Universe_**30** (2020) 100640.
* [10] S. Hansraj, A. Banerjee and P. Channuie, _Impact of the Rastall parameter on perfect fluid spheres_, _Ann. Phys._**400** (2019) 320.
* [11] M. Sharif and A. Siddiqa, _Study of stellar structures in f(R,T) gravity_, _Int. J. Mod. Phys. D_**27** (2018) 1850065.
* [12] R. Lobato et al., _Neutron stars in f(R,T) gravity using realistic equations of state in the light of massive pulsars and GW170817_, _J. Cosmol. Astropart. Phys._**2020** (2020) 039.
* [13] R.V. Lobato, G.A. Carvalho, N.G. Kelkar and M. Nowakowski, _Massive white dwarfs in f ( R, L m ) gravity_, _Eur. Phys. J. C_**82** (2022) 1.
* [14] F. Rocha, G.A. Carvalho, D. Deb and M. Malheiro, _Study of the charged super-Chandrasekhar limiting mass white dwarfs in the \(f(R,\mathcal{T})\) gravity_, _Phys. Rev. D_**101** (2020) 104008.
* [15] G.A. Carvalho, J.D.V. Arbanil, R.M. Marinho and M. Malheiro, _White dwarfs with a surface electrical charge distribution: equilibrium and stability_, _Eur. Phys. J. C_**78** (2018) 1.
* [16] E. Otoniel, R.V. Lobato, M. Malheiro, B. Franzon, S. Schramm and F. Weber, _White Dwarf Pulsars and Very Massive Compact Ultra Magnetized White Dwarfs_, _International Journal of Modern Physics: Conference Series_**45** (2017) 1760024.
* [17] E. Otoniel, B. Franzon, G.A. Carvalho, M. Malheiro, S. Schramm and F. Weber, _Strongly Magnetized White Dwarfs and Their Instability Due to Nuclear Processes_, _Astrophys. J._**879** (2019) 46.
* [18] G.A. Carvalho, R.V. Lobato, P.H.R.S. Moraes, J.D.V. Arbanil, E. Otoniel, R.M. Marinho et al., _Stellar equilibrium configurations of white dwarfs in the f(R, T) gravity_, _Eur. Phys. J. C_**77** (2017) 1.
* [19] M. Sharif and A. Siddiqa, _Equilibrium configurations of anisotropic polytropes in f(R, T) gravity_, _Eur. Phys. J. Plus_**133** (2018) 1.
* [20] P. Bera and D. Bhattacharya, _Mass-radius relation of strongly magnetized white dwarfs: nearly independent of Landau quantization_, _Mon. Not. R. Astron. Soc._**445** (2014) 3951.
* [21] P. Bera and D. Bhattacharya, _Mass-radius relation of strongly magnetized white dwarfs: dependence on field geometry, GR effects and electrostatic corrections to the EOS_, _Mon. Not. R. Astron. Soc._**456** (2016) 3375. |
2305.01902 | Flat beam plasma wakefield accelerator | Particle beams with highly asymmetric emittance ratios are expected at the
interaction point of high energy colliders. These asymmetric beams can be used
to drive high gradient wakefields in dielectrics and plasma. In the case of
plasma, the high aspect ratio of the drive beam creates a transversely
elliptical blowout cavity and the asymmetry in the ion column creates
asymmetric focusing in the two transverse planes. The ellipticity of the
blowout depends on the ellipticity and normalized charge density of the beam.
In this paper, simulations are performed to investigate the ellipticity of the
wakefield based on the initial driver beam parameters. The matching conditions
for this elliptical cavity are discussed. Example cases for employment using
the attainable parameter space at the AWA and FACET facilities are also
presented. | Pratik Manwani, Nathan Majernik, Joshua Mann, Yunbo Kang, Derek Chow, Havyn Ancelin, Gerard Andonian, James Rosenzweig | 2023-05-03T05:32:56Z | http://arxiv.org/abs/2305.01902v1 | # Flat beam plasma wakefield accelerator
###### Abstract
Particle beams with highly asymmetric emittance ratios are expected at the interaction point of high energy colliders. These asymmetric beams can be used to drive high gradient wakefields in dielectrics and plasma. In the case of plasma, the high aspect ratio of the drive beam creates a transversely elliptical blowout cavity and the asymmetry in the ion column creates asymmetric focusing in the two transverse planes. The ellipticity of the blowout depends on the ellipticity and normalized charge density of the beam. In this paper, simulations are performed to investigate the ellipticity of the wakefield based on the initial driver beam parameters. The matching conditions for this elliptical cavity are discussed. Example cases for employment using the attainable parameter space at the AWA and FACET facilities are also presented.
plasma acceleration, elliptical beams, asymmetric wake, beam matching, plasma lens
## I Introduction
Plasma wakefield acceleration (PWFA) using elliptical beams with highly asymmetric emittances, or 'flat beams', requires investigation for future collider applications. Flat beams are employed at accelerator facilities like AWA [1] and are also foreseen at FACET-II [2]. Flat drive beams yield a blowout cavity that is elliptical in cross section, which leads to asymmetric transverse focusing, and they have been proposed for numerous applications. For example, asymmetric beams that drive wakefields in hollow-channel plasmas are considered for positron acceleration [3]. For colliders, beams with highly asymmetric emittance are expected to mitigate beam-beam effects (beamstrahlung) at the interaction point [4]. These applications require accurate accounting of the matching conditions and beam-loading effects of these asymmetric beams. Both the axisymmetric and the approximated elliptical wakefield have linear focusing fields, which allows us to match these beams. It is also important to understand how flat beams behave in plasma afterburner scenarios [5, 6], and their compatibility with the corresponding betatron diagnostics [7]. Practical considerations for the development of a flat-beam PWFA experiment at the AWA facility, and its potential development at FACET-II, will be presented here.
## II Linear regime
We first consider a beam of the general form \(n_{b}=n_{b,0}X(x)Y(y)Z(ct-z)\). If the beam density is small compared to the background plasma density, \(n_{b,0}/n_{0}\ll 1\), we can use the linearized wake equation, with \(\xi=ct-z\), to obtain the perturbed plasma electron density (\(n_{1}\)):
\[\left(\frac{\partial^{2}}{\partial\xi^{2}}+k_{p}^{2}\right)n_{1}(x,y,\xi)=-k_ {p}^{2}n_{b,0}X(x)Y(y)Z(\xi) \tag{1}\]
where \(k_{p}=\omega_{p}/c\) is the plasma wavenumber. For a Gaussian beam, this gives
\[\begin{split} n_{1}(x,y,\xi)=-k_{p}n_{b,0}\exp\frac{-x^{2}}{2\sigma_{x}^{2}}\exp\frac{-y^{2}}{2\sigma_{y}^{2}}\\ \int_{\xi}^{\infty}\exp\frac{-\xi^{\prime 2}}{2\sigma_{z}^{2}}\sin\left(k_{p}(\xi-\xi^{\prime})\right)d\xi^{\prime}\end{split} \tag{2}\]
where \(\sigma_{x,y,z}\) are the spot sizes of the beam. We can simulate the beam-plasma interaction in this linear-fluid limit by using particle-in-cell (PIC) simulations with the three-dimensional fully relativistic PIC code OSIRIS [8]. The simulation results of the interaction in this regime, together with the corresponding analytical results, are shown in Figure 1. The full response of the fields can be found from the density response given in equation 2 and the electromagnetic wave equations, which are given as:
\[(\nabla_{\perp}^{2}-\frac{\omega_{p}^{2}}{c^{2}})\vec{E}=-e\nabla( \frac{n_{1}+n_{b}}{\epsilon_{0}})-\frac{e}{\epsilon_{0}c}\frac{\partial n_{b}}{ \partial t}\hat{z} \tag{3}\] \[(\nabla_{\perp}^{2}-\frac{\omega_{p}^{2}}{c^{2}})\vec{B}=4\pi e( \nabla\times n_{b}\hat{z}) \tag{4}\]
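For reference, the longitudinal convolution integral in Eq. (2) can be evaluated numerically as in the following Python sketch; the quadrature routine is our choice, not part of the original analysis:

```python
import numpy as np
from scipy.integrate import quad

def longitudinal_wake(xi, kp, sigma_z):
    """Longitudinal factor of Eq. (2) for a Gaussian profile Z(xi'):
    -kp * int_xi^inf exp(-xi'^2 / 2 sigma_z^2) sin(kp (xi - xi')) dxi'.
    Multiply by n_b0 and the transverse Gaussians to obtain n_1."""
    integrand = lambda xip: np.exp(-xip**2 / (2 * sigma_z**2)) \
                            * np.sin(kp * (xi - xip))
    val, _ = quad(integrand, xi, np.inf, limit=200)
    return -kp * val
```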
## III Elliptical blowout
For high beam densities, the plasma electron trajectories cross and an elliptical blowout sheath forms. We simulate this using particle-in-cell (PIC) simulations with the three-dimensional quasi-static PIC code QuickPIC [9]. The simulation parameters are listed in Table I. The transverse asymmetry of the wakefield produced by the asymmetric beam can be seen in Figure 2. The single density spike that is observed in the axisymmetric case is replaced with a cluster of electrons, visible as a line of high density behind the blowout region, due to the ellipsoidal shape of the wake. The elliptical cavity created by the evacuated plasma electrons and its radii (\(a_{p}\) and \(b_{p}\)) are shown in Figure 3. Inside this ion column (\(\frac{x^{2}}{a_{p}^{2}}+\frac{y^{2}}{b_{p}^{2}}<1\)), the wake equations can be written as:
\[\phi(\xi,\alpha_{b},\alpha_{p})=\phi_{0}(\xi)+\phi_{p}(\xi,\alpha _{p})+\phi_{b}(\xi,\alpha_{b}) \tag{5}\] \[A_{z}(\xi,\alpha_{b})=A_{z0}(\xi)+c\Phi_{b}(\xi,\alpha_{b})\] (6) \[F_{x}=\partial_{x}\phi_{p}+(1-v_{z})\partial_{x}\phi_{b}+(1-v_{z })\partial_{\xi}A_{x}-v_{y}B_{z}\] (7) \[F_{y}=\partial_{y}\phi_{p}+(1-v_{z})\partial_{y}\phi_{b}+(1-v_{z })\partial_{\xi}A_{y}+v_{x}B_{z} \tag{8}\]
Here, the parameter \(\alpha_{b}\) is defined as the ellipticity of a beam with spatial extents \(a\) and \(b\) in the \(x\) and \(y\) directions, respectively, and \(\alpha_{p}\) is defined as the ellipticity of the transverse cross section of the elliptical cavity. In addition, \(\phi\) denotes the scalar potential, \(\vec{A}\) denotes the vector potential, and \(F_{x}\) and \(F_{y}\) denote the transverse forces inside the blowout cavity.
As a starting point, we can get the ellipticity at the center of the blowout cavity using the long beam (\(r\ll\gamma\sigma_{z}\)) approximation where we neglect the longitudinal variation of the fields and also neglect the effects of the plasma electron velocity. Under this approximation:
\[\partial_{x,y}\phi_{p}=-\partial_{x,y}\phi_{b} \tag{9}\]
Fig. 1: PIC simulations showing the asymmetric wakefield created by an asymmetric driver when the beam density is low (\(n_{b}/n_{0}\ll 1\))
Fig. 2: The longitudinal profile of the elliptical blowout cavity formed by an asymmetric (flat) beam (\(\sigma_{x}/\sigma_{y}=10\))
The potential inside an ellipsoid is quadratic with respect to the coordinates, which leads to linear fields. The transverse fields in the elliptical blowout structure can be calculated analytically by approximating it as an infinitely long cylinder of ions, similar to the axisymmetric case but with an elliptical cross section; they are given by:
\[E_{x,p} =\frac{en_{0}b_{p}x}{\epsilon_{0}(a_{p}+b_{p})}=\frac{en_{0}x}{ \epsilon_{0}(1+\alpha_{p})} \tag{10}\] \[E_{y,p} =\frac{en_{0}a_{p}y}{\epsilon_{0}(a_{p}+b_{p})}=\frac{en_{0} \alpha_{p}y}{\epsilon_{0}(1+\alpha_{p})} \tag{11}\]
where \(a_{p}\) and \(b_{p}\) are the elliptical cross section's semimajor and semiminor axes respectively, and the ellipticity \(\alpha_{p}\) is defined as the ratio \(a_{p}/b_{p}\).
Outside the beam (assuming a uniform density beam with no longitudinal variation), the fields are given as [10]:
\[E_{x,b} =\frac{en_{b}ab}{\epsilon_{0}(x+(x^{2}+b^{2}-a^{2})^{1/2})} \tag{12}\] \[E_{y,b} =\frac{en_{b}ab}{\epsilon_{0}(y+(y^{2}+a^{2}-b^{2})^{1/2})} \tag{13}\]
Equating these fields on the boundaries, we get:
\[a_{p} =a\sqrt{\frac{\alpha_{b}^{2}-1}{2\alpha_{b}^{2}}}\sqrt{1+\sqrt{1+ \frac{4\alpha_{b}^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}}} \tag{14}\] \[b_{p} =a\sqrt{\frac{\alpha_{b}^{2}-1}{2\alpha_{b}^{2}}}\sqrt{-1+\sqrt{1 +\frac{4\alpha_{b}^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}}} \tag{15}\]
The ellipticity is then given by:
\[\alpha_{p}=\frac{a_{p}}{b_{p}}=\sqrt{\frac{\sqrt{\frac{4\alpha_{b}^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}+1}+1}{\sqrt{\frac{4\alpha_{b}^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}+1}-1}} \tag{16}\]
The nonlinearity of the wake, along with its ellipticity, is reduced as the beam density increases. Simulations using long Gaussian beams agree with this equation at initialization (to keep the linear charge density constant, we scale the beam density by a factor of 2). Specifically, the asymmetric ellipse fits were obtained from a least-squares fit performed on the curve defined by the border between the blowout and the surrounding neutral plasma. This border is defined where the normalized plasma density falls below \(1/e\). This method of fitting works best when the blowout is stronger, as the sheath is more distinct at higher beam densities. There is good agreement between the predicted ellipticity and the simulated ellipticity at the center of the blowout cavity, and the results are shown in Figure 4. The approximation of neglecting the plasma electron velocity does not hold for high beam densities and will need to be taken into account to obtain more accurate results.
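Equations (14)-(16) are straightforward to evaluate numerically. The following Python sketch is ours, with an ad hoc guard for the round-beam limit \(\alpha_{b}\to 1\); `nb` denotes the normalized density \(n_{b}/n_{0}\):

```python
import numpy as np

def blowout_radii(a, alpha_b, nb):
    """Semi-axes (Eqs. 14-15) and ellipticity (Eq. 16) of the blowout
    cross section for a uniform beam of ellipticity alpha_b and
    normalized density nb, in the long-beam approximation."""
    if np.isclose(alpha_b, 1.0):          # round-beam limit of Eqs. (14-15)
        r = a * np.sqrt(nb)
        return r, r, 1.0
    u = 4 * alpha_b**2 * nb**2 / (alpha_b**2 - 1) ** 2
    pref = a * np.sqrt((alpha_b**2 - 1) / (2 * alpha_b**2))
    a_p = pref * np.sqrt(1 + np.sqrt(1 + u))
    b_p = pref * np.sqrt(-1 + np.sqrt(1 + u))
    return a_p, b_p, a_p / b_p
```

As Eq. (16) implies, the returned ellipticity tends to 1 as `nb` grows, consistent with the blowout becoming rounder at high beam density.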
## IV Matching conditions
The focusing forces can be directly derived from the ellipticity of the blowout cavity by modeling it as an infinite elliptical cylinder, as is done for the axisymmetric case. These linear forces can be used to match the beam into the blowout cavity by countering the divergence of the beam. A mismatch will lead to a characteristic beating of the beam envelope, causing an increase in the transverse emittance due to phase-space dilution [6, 11]. The matched beta functions for the beam envelope are:
\[k_{\beta,x}=\sqrt{\frac{1}{\gamma m_{e}c^{2}}\frac{eE_{x}}{x}}=\sqrt{\frac{n_{0}e^{2}}{\gamma\epsilon_{0}m_{e}c^{2}(1+\alpha_{p})}} \tag{17}\] \[k_{\beta,y}=\sqrt{\frac{1}{\gamma m_{e}c^{2}}\frac{eE_{y}}{y}}=\sqrt{\frac{n_{0}\alpha_{p}e^{2}}{\gamma\epsilon_{0}m_{e}c^{2}(1+\alpha_{p})}} \tag{18}\]
The spot size of the beam is then given by \(\sigma_{x,m}=\sqrt{\beta_{x}\epsilon_{x}}\) and \(\sigma_{y,m}=\sqrt{\beta_{y}\epsilon_{y}}\). The ellipticity of the matched beam, \(\alpha_{b,m}\), is then given by:
\[\alpha_{b,m}=\frac{\sigma_{x,m}}{\sigma_{y,m}}=\sqrt{\frac{\epsilon_{x}}{ \epsilon_{y}}\frac{\beta_{x}}{\beta_{y}}}=\sqrt{\frac{\epsilon_{x}}{\epsilon _{y}}\frac{k_{\beta,y}}{k_{\beta,x}}}=\sqrt{\frac{\epsilon_{x}}{\epsilon_{y}} \sqrt{\alpha_{p,m}}} \tag{19}\]
The emittance ratio is
\[\frac{\epsilon_{x}}{\epsilon_{y}}=\alpha_{b}^{2}\left(\frac{\sqrt{\frac{4 \alpha_{b}^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}+1}-1}{\sqrt{\frac{4\alpha_{b} ^{2}n_{b}^{2}}{(\alpha_{b}^{2}-1)^{2}}+1}+1}\right)^{\frac{1}{4}} \tag{20}\]
Fig. 3: The transverse profile of the center of the elliptical blowout cavity formed by an asymmetric (flat) beam (\(\sigma_{x}/\sigma_{y}=10\)) The radii of the ellipse are shown as \(a_{p}\) and \(b_{p}\)
This result shows that the emittance ratio needed to match the beam envelope to the focusing forces must account for the ellipticity of the plasma column. The blowout ellipticity and, concomitantly, the emittance ratio required to match the beam will vary along the length of the blowout cavity; this will be discussed in a subsequent paper.
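A short sketch of the matching conditions, Eqs. (17)-(19), in SI units follows. It is ours, not the paper's code, and it assumes geometric emittances and the matched beta function \(\beta=1/k_{\beta}\):

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

def matched_beam(n0, gamma, eps_x, eps_y, alpha_p):
    """Matched spot sizes and beam ellipticity (Eqs. 17-19) inside an
    elliptical ion column of ellipticity alpha_p.
    n0: plasma density [m^-3]; eps_x, eps_y: geometric emittances [m rad]."""
    k2 = n0 * e**2 / (gamma * epsilon_0 * m_e * c**2)
    k_bx = np.sqrt(k2 / (1 + alpha_p))             # Eq. (17)
    k_by = np.sqrt(k2 * alpha_p / (1 + alpha_p))   # Eq. (18)
    sx = np.sqrt(eps_x / k_bx)                     # sigma_x,m with beta = 1/k
    sy = np.sqrt(eps_y / k_by)
    return sx, sy, sx / sy                          # ratio is alpha_b,m, Eq. (19)
```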
## V Experimental considerations
The AWA facility is an ideal location to experimentally investigate key features of the flat-beam-driven PWFA. The experiment at AWA [5] involves using the asymmetric beam to drive a weak blowout in a plasma created by a capillary discharge. The capillary discharge plasma source, commissioned at UCLA, has a multi-decade range of densities, which will allow exploration of the varying regimes described in this paper. The condition of weak blowout creates head erosion in the drive beam, which decreases the beam density and leads to an increase in the blowout ellipticity. This dynamic behavior is also reflected in the beam matching condition, and an optimal solution to this problem requires consideration of many factors. In the other scenario of strong blowout, the beams that will be available at FACET-II would have significantly higher beam densities at the interaction point. At these high densities, an asymmetric beam will create an axisymmetric blowout. Preliminary simulations with the asymmetric FACET-II beam show a bifurcation of the beam after long propagation distances, in which the beam splits into two equal halves. This could be a potential violation of the quasi-static condition assumed in QuickPIC, and this extreme beam-density case would need to be benchmarked with a full 3D PIC code; however, the asymmetric nature of the bunches makes this computationally intensive.
## VI Conclusion
The structure of the plasma column formed by elliptical beams determines the focusing forces of the wakefield, which govern the dynamics of the beam. This makes it essential to understand the formation and ellipticity of the plasma column for plasma wakefield experiments and plasma afterburner scenarios that employ asymmetric beams. The results shown here provide information on the ellipticity and emittance requirements for matching the beam inside the elliptical blowout cavity. The limiting case of the flat beam has been investigated theoretically [12], and work is ongoing to understand and generalize the properties of this asymmetric beam-plasma interaction. The inequality of the focusing forces in the two transverse planes offers novel opportunities, one of which is the creation of an asymmetric plasma lens, which is a subject of further study.
## Acknowledgment
This work was performed with the support of the US Department of Energy under Contract No. DE-SC0017648 and DESC0009914. This work used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
|
2306.07823 | The Rank of the Cartier operator on Picard Curves | For an algebraic curve $\mathcal{X}$ defined over an algebraically closed
field of characteristic $p > 0$, the $a$-number $a(\mathcal{X})$ is the
dimension of the space of exact holomorphic differentials on $\mathcal{X}$. We
compute the $a$-number for a family of certain Picard curves, using the action
of the Cartier operator on $H^0(\mathcal{X},\Omega^1)$. | Vahid Nourozi, Farhad Rahmati | 2023-06-13T14:54:49Z | http://arxiv.org/abs/2306.07823v2 | # The rank of the Cartier operator on Picard curves
###### Abstract.
For an algebraic curve \(\mathcal{X}\) defined over an algebraically closed field of characteristic \(p>0\), the \(a\)-number \(a(\mathcal{X})\) is the dimension of the space of exact holomorphic differentials on \(\mathcal{X}\). We compute the \(a\)-number for a family of certain Picard curves, using the action of the Cartier operator on \(H^{0}(\mathcal{X},\Omega^{1})\).
Key words and phrases:\(a\)-number; Cartier operator; Super-singular Curves; Picard Curves. * Corresponding author
A few results on the rank of the Cartier operator (especially the \(a\)-number) of curves were introduced by Kodama and Washio [9], Gonzalez [6], Pries and Weir [16], Yui [19], and Nourozi, Tafazolian and Rahmati [12, 13].
In Section 3, we prove that the \(a\)-number of the Picard curves with Equation (1.1) is \(0\) for \(p\equiv 1\) mod \(3\) and \(1\) for \(p\equiv 2\) mod \(3\); see Theorem 3.1. The proofs directly use the action of the Cartier operator on \(H^{0}(\mathcal{X},\Omega^{1})\). Finally, we provide an example of the \(a\)-number of Picard curves obtained from the Hasse-Witt matrix.
## 2. The Cartier operator
Let \(k\) be an algebraically closed field of characteristic \(p>0\). Let \(\mathcal{X}\) be a curve defined over \(k\). The Cartier operator is a \(1/p\)-linear operator acting on the sheaf \(\Omega^{1}:=\Omega^{1}_{\mathcal{X}}\) of differential forms on \(\mathcal{X}\) in positive characteristic \(p>0\).
Let \(K=k(\mathcal{X})\) be the function field of the curve \(\mathcal{X}\) of genus \(g\) defined over \(k\). A separating variable for \(K\) is an element \(x\in K\setminus K^{p}\). We can write any function \(f\in K\) uniquely in the form
\[f=f_{0}^{p}+f_{1}^{p}x+\cdots+f_{p-1}^{p}x^{p-1},\]
where \(f_{i}\in K\), for \(i=0,\cdots,p-1\).
**Definition 2.1**.: (The Cartier operator). Let \(\omega\in\Omega_{K/K_{p}}\). There exist \(f_{0},\cdots,f_{p-1}\) such that \(\omega=(f_{0}^{p}+f_{1}^{p}x+\cdots+f_{p-1}^{p}x^{p-1})dx\). The Cartier operator \(\mathfrak{C}\) is defined by
\[\mathfrak{C}(\omega):=f_{p-1}dx.\]
The definition does not depend on the choice of \(x\) (see [17, Proposition 1]).
We refer the reader to [17, 2, 3, 18] for the proofs of the following statements.
**Proposition 2.2**.: _(Global Properties of \(\mathfrak{C}\)). For all \(\omega\in\Omega_{K/K_{p}}\) and all \(f\in K\),_
* \(\mathfrak{C}(f^{p}\omega)=f\mathfrak{C}(\omega)\)_;_
* \(\mathfrak{C}(\omega)=0\Leftrightarrow\exists h\in K,\omega=dh\)_;_
* \(\mathfrak{C}(\omega)=\omega\Leftrightarrow\exists h\in K,\omega=dh/h\)_._
_Remark 2.3_.: Moreover, one can easily show that
\[\mathfrak{C}(x^{j}dx)=\left\{\begin{array}{ccc}0&\text{if}&p\nmid j+1\\ x^{s-1}dx&\text{if}&j+1=ps.\end{array}\right.\]
If \(\operatorname{div}(\omega)\) is effective, then the differential \(\omega\) is holomorphic. The set \(H^{0}(\mathcal{X},\Omega^{1})\) of holomorphic differentials is a \(g\)-dimensional \(k\)-vector subspace of \(\Omega^{1}\) such that \(\mathfrak{C}(H^{0}(\mathcal{X},\Omega^{1}))\subseteq H^{0}(\mathcal{X},\Omega^{1})\). If \(\mathcal{X}\) is a curve, then the \(a\)-number of \(\mathcal{X}\) equals the dimension of the kernel of the Cartier operator on \(H^{0}(\mathcal{X},\Omega^{1})\) (or equivalently, the dimension of the space of exact holomorphic differentials on \(\mathcal{X}\)); see [10, 5.2.8].
The following theorem is due to Gorenstein; see [7, Theorem 12].
**Theorem 2.4**.: _A differential \(\omega\in\Omega^{1}\) is holomorphic if and only if it is of the form \((h(x,y)/F_{y})dx\), where \(H:h(X,Y)=0\) is a canonical adjoint._
**Theorem 2.5**.: _[_11_]_ _With the above assumptions,_
\[\mathfrak{C}(h\frac{dx}{F_{y}})=(\frac{\partial^{2p-2}}{\partial x^{p-1}\partial y ^{p-1}}(F^{p-1}h))^{\frac{1}{p}}\frac{dx}{F_{y}}\]
_for any \(h\in K(\mathcal{X})\)._
The differential operator \(\nabla\) is defined by
\[\nabla=\frac{\partial^{2p-2}}{\partial x^{p-1}\partial y^{p-1}},\]
has the property
\[\nabla(\sum_{i,j}c_{i,j}X^{i}Y^{j})=\sum_{i,j}c_{ip+p-1,jp+p-1}X^{ip}Y^{jp}. \tag{2.1}\]
## 3. The \(a\)-number of Picard Curves
This section considers Picard curves over a finite field with \(q^{2}\) elements of characteristic \(p>3\). Let \(\mathcal{X}\) be a Picard curve of genus \(3\) over \(\mathbb{F}_{q^{2}}\). Then \(\mathcal{X}\) can be defined by an affine equation of the form
\[y^{3}=f(x), \tag{3.1}\]
where \(f(x)\) is a polynomial of degree \(4\), without multiple roots. From Theorem 2.4, one can find a basis for the space \(H^{0}(\mathcal{X},\Omega^{1})\) of holomorphic differentials on \(\mathcal{X}\), namely
\[\mathcal{B}=\Bigg{\{}z_{i}=\frac{h_{i}dx}{f_{y}}\Bigg{\}},\]
where \(h_{1}=3,h_{2}=3x\) and \(h_{3}=3y\) for \(1\leq i\leq 3\). Therefore \(\mathcal{B}=\{\frac{dx}{y^{2}},\frac{xdx}{y^{2}},\frac{dx}{y}\}\).
**Theorem 3.1**.: _Suppose that \(p\not\equiv 0\) mod \(3\) and \(f(x)\) has a non-zero constant coefficient. Then_
1. _If_ \(p\equiv 1\) _mod_ \(3\)_, then_ \(a(\mathcal{X})=0\)_._
2. _If_ \(p\equiv 2\) _mod_ \(3\)_, then_ \(a(\mathcal{X})=1\)_._
Proof.:
1. Let us compute the image of \(\mathfrak{C}(\omega)\) for any \(\omega\in\mathcal{B}\). Take \(p=3r+1,r\in\mathbb{Z}\). Since \(p-1=3r\). We get \(2p-2=6r\). So
\[\mathfrak{C}(xdx/y^{2}) = \mathfrak{C}(y^{2p-2}y^{-2p}xdx)=y^{-2}\mathfrak{C}(xf(x)^{2r}dx)\] \[= y^{-2}\sum_{i=0}^{2r}c_{i}\mathfrak{C}(x^{i+1}dx)=c_{r}dx\neq 0.\] \[\mathfrak{C}(dx/y^{2}) = \mathfrak{C}(y^{2p-2}y^{-2p}dx)=y^{-2}\mathfrak{C}(f(x)^{2r}dx)\] \[= y^{-2}\sum_{i=0}^{2r}c_{i}\mathfrak{C}(x^{i}dx)=c_{r}dx\neq 0.\] Now we let \(p-1=3r\), \[\mathfrak{C}(dx/y) = \mathfrak{C}(y^{p-1}y^{-p}dx)=y^{-1}\mathfrak{C}(f(x)^{r}dx)\] \[= y^{-1}\sum_{i=0}^{r}c_{i}\mathfrak{C}(x^{i}dx)=c_{r}dx\neq 0.\] Therefore, we have \(a(\mathcal{X})=0\).
2. Assume that \(p=3r+2\), \(r\in\mathbb{Z}\). We get \(p-2=3r\). Hence \[\begin{array}{rcl}\mathfrak{C}(xdx/y^{2})&=&\mathfrak{C}(y^{p-2}y^{-p}xdx)=y^{-1}\mathfrak{C}(xf(x)^{r}dx)\\ &=&y^{-1}\sum_{i=0}^{r}c_{i}\mathfrak{C}(x^{i+1}dx)=c_{r}dx\neq 0.\end{array}\] \[\begin{array}{rcl}\mathfrak{C}(dx/y^{2})&=&\mathfrak{C}(y^{p-2}y^{-p}dx)=y^{-1}\mathfrak{C}(f(x)^{r}dx)\\ &=&y^{-1}\sum_{i=0}^{r}c_{i}\mathfrak{C}(x^{i}dx)=c_{r}dx\neq 0.\end{array}\] Finally, from Remark 2.3 we have \[\begin{array}{rcl}\mathfrak{C}(dx/y)&=&\mathfrak{C}(y^{p-1}y^{-p}dx)=y^{-1}\mathfrak{C}(f(x)^{\frac{3r-1}{3}}dx)\\ &=&y^{-1}\sum_{i=0}^{\frac{3r-1}{3}}c_{i}\mathfrak{C}(x^{i}dx)=0.\end{array}\] Therefore \(a(\mathcal{X})=1\).
Here the coefficients \(c_{i}\in k\) are obtained from the expansion of the corresponding power of \(f(x)\), e.g.,
\[f(x)^{r}=\sum_{i=0}^{N}c_{i}x^{i}\text{ with }N=4r,\]
since \(\deg f=4\).
In [4], Estrada Sarlabous showed that if \(F^{p-1}=\sum_{i,j}c_{i,j}x^{i}y^{j}\), then the Hasse-Witt matrix with respect to \(\mathcal{B}\) is
\[H=\left(\begin{array}{ccc}c_{p-1,p-1}&c_{2p-1,p-1}&c_{p-1,2p-1}\\ c_{p-2,p-1}&c_{2p-2,p-1}&c_{p-2,2p-1}\\ c_{p-1,p-2}&c_{2p-1,p-2}&c_{p-1,2p-2}\end{array}\right).\]
Since \(F=y^{3}-f(x),c_{i,j}=0\) for \(j\not\equiv 0\) mod \(3\). Thus, we have
1. \(p\equiv 1\) mod \(3\). Then \[H=\left(\begin{array}{ccc}c_{p-1,p-1}&c_{2p-1,p-1}&0\\ c_{p-2,p-1}&c_{2p-2,p-1}&0\\ 0&0&c_{p-1,2p-2}\end{array}\right).\] (3.2) where \(c_{p-1,p-1}\), \(c_{2p-1,p-1}\), \(c_{p-2,p-1},c_{2p-2,p-1}\) are the respective coefficients of \(x^{p-1},x^{2p-1}\), \(x^{p-2}\) and \(x^{2p-2}\) in \(f(x)^{(2p-2)/3}\) mod \(p\). And \(c_{p-1,2p-2}\) is the coefficient of \(x^{p-1}\) in \(f(x)^{(p-1)/3}\) mod \(p\).
2. \(p\equiv 2\) mod \(3\). Then \[H=\left(\begin{array}{ccc}0&0&c_{p-1,2p-1}\\ 0&0&c_{p-2,2p-1}\\ c_{p-1,p-2}&c_{2p-1,p-2}&0\end{array}\right).\] (3.3) where \(c_{p-1,2p-1}\) and \(c_{p-2,2p-1}\) are the coefficients of \(x^{p-1}\) and \(x^{p-2}\) in \(f(x)^{(p-2)/3}\) mod \(p\), and \(c_{p-1,p-2}\), and \(c_{2p-1,p-2}\) are the coefficients of \(x^{p-1}\) and \(x^{2p-1}\) in \(f(x)^{(2p-1)/3}\) mod \(p\).
**Example 3.2**.: Consider the Picard curves with the following equation
\[y^{3}=x^{4}+1.\]
We calculate the \(a\)-number via the Hasse-Witt matrix of this curve for \(p=5\) and \(p=13\). First, suppose that \(p=5\); in this case, from equation (3.3), we have
\[H=\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ 3&0&0\end{array}\right).\]
So \(\operatorname{rank}(H)=2\). From the definition of the \(a\)-number, it is clear that \(a(\mathcal{X})=g(\mathcal{X})-\operatorname{rank}(H)\). Therefore \(a(\mathcal{X})=1\). Finally, suppose that \(p=13\); in this case, from equation (3.2), we have
\[H=\left(\begin{array}{ccc}4&0&0\\ 0&2&0\\ 0&0&4\end{array}\right).\]
So \(\operatorname{rank}(H)=3\), hence \(a(\mathcal{X})=0\). This result is also confirmed by a MAGMA computation [1].
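For readers without access to MAGMA, the following Python sketch (our addition; the helper names are ours, and the routine only covers the genus-\(3\) Picard case treated here) reproduces Example 3.2: it expands the relevant power of \(f(x)\) modulo \(p\), assembles \(H\) from equations (3.2) and (3.3), and returns \(a(\mathcal{X})=g(\mathcal{X})-\operatorname{rank}(H)\) with \(g=3\).

```python
def poly_pow_mod(f, e, p):
    """Coefficient list (index = degree) of f(x)^e with coefficients mod p."""
    result = [1]
    for _ in range(e):
        out = [0] * (len(result) + len(f) - 1)
        for i, a in enumerate(result):
            for j, b in enumerate(f):
                out[i + j] = (out[i + j] + a * b) % p
        result = out
    return result

def rank_mod_p(rows, p):
    """Rank of an integer matrix over F_p via Gaussian elimination."""
    m = [row[:] for row in rows]
    rank, col = 0, 0
    while rank < len(m) and col < len(m[0]):
        pivot = next((r for r in range(rank, len(m)) if m[r][col] % p), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)  # inverse mod a prime p
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % p:
                factor = m[r][col]
                m[r] = [(m[r][c] - factor * m[rank][c]) % p
                        for c in range(len(m[0]))]
        rank += 1
        col += 1
    return rank

def a_number(f, p):
    """a-number of the Picard curve y^3 = f(x), characteristic p > 3."""
    def c(e, i):  # coefficient of x^i in f(x)^e mod p
        coeffs = poly_pow_mod(f, e, p)
        return coeffs[i] if 0 <= i < len(coeffs) else 0
    if p % 3 == 1:
        e1, e2 = (2 * p - 2) // 3, (p - 1) // 3
        H = [[c(e1, p - 1), c(e1, 2 * p - 1), 0],
             [c(e1, p - 2), c(e1, 2 * p - 2), 0],
             [0, 0, c(e2, p - 1)]]                  # equation (3.2)
    else:  # p = 2 mod 3
        e1, e2 = (p - 2) // 3, (2 * p - 1) // 3
        H = [[0, 0, c(e1, p - 1)],
             [0, 0, c(e1, p - 2)],
             [c(e2, p - 1), c(e2, 2 * p - 1), 0]]   # equation (3.3)
    return 3 - rank_mod_p(H, p)

f = [1, 0, 0, 0, 1]  # f(x) = x^4 + 1, coefficients by increasing degree
print(a_number(f, 5), a_number(f, 13))  # 1 0
```

Running it prints \(1\) for \(p=5\) and \(0\) for \(p=13\), in agreement with the Hasse-Witt matrices above.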
### Acknowledgements
This paper was written while Vahid Nourozi was visiting Unicamp (Universidade Estadual de Campinas) supported by TWAS/Cnpq (Brazil) with fellowship number \(314966/2018-8\). We are extremely grateful to the referee for their valuable comments and suggestions, which led us to find correct references for many assertions and improve the exposition.
|
2308.11535 | Geodesic congruences in acoustic spacetimes and the role of Raychaudhuri
equation | It has been known that the propagation of sound in fluids can be used to
model acoustic spacetimes. These acoustic spacetimes offer analogue models for
gravity. We use the Raychaudhuri equation to study the propagation of sound in
these fluids, which, via the Eikonal approximation, correspond to null geodesic
congruences in the acoustic spacetimes. We explore this within the acoustic
analogues of black holes and cosmological spacetimes. The robustness of the
Raychaudhuri equation and the limits of the acoustic analogue are emphasised. | Akshat Pandey | 2023-08-17T18:59:48Z | http://arxiv.org/abs/2308.11535v1 | # Geodesic congruences in acoustic spacetimes and the role of Raychaudhuri equation
###### Abstract
It has been known that the propagation of sound in fluids can be used to model acoustic spacetimes. These acoustic spacetimes offer analogue models for gravity. We use the Raychaudhuri equation to study the propagation of sound in these fluids, which, via the Eikonal approximation, correspond to null geodesic congruences in the acoustic spacetimes. We explore this within the acoustic analogues of black holes and cosmological spacetimes. The robustness of the Raychaudhuri equation and the limits of the acoustic analogue are emphasised.
## 1 Introduction
There has been a continual interest in the analogue gravity literature in describing fluids in terms of a Lorentzian metric in which an acoustic coupling is explicit [1, 2, 3]. Within this framework, the propagation of sound is described by a scalar field \(\phi\) obeying the massless minimally coupled scalar field equation
\[\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi\right)=0 \tag{1}\]
Here \(g_{\mu\nu}\) is the Lorentzian acoustic metric describing the acoustic spacetime. The line element corresponding to the acoustic metric is
\[\mathrm{d}s^{2}=\frac{\rho_{0}(\vec{x},t)}{c_{s}(\vec{x},t)}\left[-c_{s}^{2} \ \mathrm{d}t^{2}+\delta_{ij}\left(\mathrm{d}x^{i}-v_{0}^{i}\ \mathrm{d}t\right)\left(\ \mathrm{d}x^{j}-v_{0}^{j}\ \mathrm{d}t \right)\right] \tag{2}\]
Here, \(\rho_{0}\) is the background fluid density, \(c_{s}\) is the speed of sound and \(v_{0}^{i}\) is the background fluid velocity field. Imposing additional conditions on these variables will correspond to specific analogue spacetimes.
Further, the Eikonal approximation will be used in describing the scalar field. Within this approximation, we expect the sound waves to travel along null geodesics while the congruences obey the Raychaudhuri equation [4, 5, 6] for the null case. Written in full, in (3+1) dimensions, the latter can be cast in the form
\[\frac{d\Theta}{d\lambda}=-\frac{1}{2}\Theta^{2}-\sigma^{\mu\nu}\sigma_{\mu\nu} +\omega^{\mu\nu}\omega_{\mu\nu}-R_{\mu\nu}k^{\mu}k^{\nu} \tag{3}\]
where \(\Theta\) is the expansion scalar, \(\sigma_{\mu\nu}\) is the shear tensor, \(\omega_{\mu\nu}\) is the rotation tensor, and \(R_{\mu\nu}\) is the usual Ricci tensor. Further, \(\lambda\) is the affine parameter, and \(k^{\mu}\) is the vector field which is tangent to the geodesics. Note that \(\Theta\) can be represented by the divergence form \(\Theta=\nabla_{\mu}k^{\mu}\). The acoustic coupling, which is constructed within the confines of non-relativistic fluid mechanics, gives way to analogue models as counterparts of general relativity (GR) spacetimes that embrace the kinematic effects of GR [2]. The purpose of this work is to look at the behaviour of congruences of geodesics in such analogue spacetimes. Another interesting query that deserves a close study is the role of the Raychaudhuri equation, which is basically concerned with the background geometry of the manifold rather than with any specific feature of GR that dictates, in particular, the kinematics of the model involved [8, 9, 10].
The present paper is organized as follows. In section 2, we briefly sketch the Eikonal approximation, which lets us describe the sound waves using null geodesics. In section 3, we address acoustic black holes, in particular the canonical acoustic black hole. We look at null geodesic congruences, derive the Raychaudhuri equation and show that it is in accordance with what is obtained from the Schwarzschild metric. We briefly discuss other acoustic black hole models. In section 4, we discuss the acoustic analogue of the FLRW cosmology, wherein the curvature of the spacetime is obtained by varying the speed of sound with time. In this context, we explore null geodesic congruences and the Raychaudhuri equation and try to interpret their physical meaning. Finally, a summary of our work is presented.
## 2 The Eikonal Approximation
Known also as the geometrical optics approximation, the Eikonal approximation is an analytical tool to approximate waves with rays, and hence with geodesics when the space(time) is curved. For a scalar field the approximation gives:
\[\phi(r,t)={\cal A}(r,t)\exp[\mp i\varphi(r,t)] \tag{4}\]
The spatial and temporal dependence of the phase \(\varphi(r,t)\) can be further separated, given the geometry is slowly evolving (\(\omega\gg\max\{|\dot{c}/c|,|\dot{v}/v|\}\))
\[\varphi(r,t)=\omega t-\int^{r}K\left(r^{\prime}\right)\mathrm{d}r^{\prime} \tag{5}\]
The condition of restricting the analysis to high values of \(\omega\) is at the core of the approximation.
Plugging for \(\phi\) into the minimally coupled scalar field equation,
\[\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi\right)=0 \tag{6}\]
The equation corresponding to \(\varphi\) turns out to be
\[g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi=0 \tag{7}\]
Upon making the identification \(\partial_{\mu}\varphi=k_{\mu}\), we get \(k_{\mu}\) to be a null vector such that
\[k^{\mu}k_{\mu}=0 \tag{8}\]
Acting with the covariant derivative on equation (8) and exploiting the symmetry \(\nabla_{\mu}k_{\nu}=\nabla_{\nu}k_{\mu}\), which holds since \(k_{\mu}\) is a gradient, we end up with
\[k^{\nu}\nabla_{\nu}k_{\mu}=0 \tag{9}\]
These in turn are the geodesic equations. We see that within the validity of the Eikonal approximation, the waves travel along null geodesics. Choosing an entire non-intersecting family of such geodesics, we can construct null geodesic congruences. These congruences, representing high frequency sound wavefronts, would thus obey the Raychaudhuri equation for the null case, as we shall explore in further sections.
## 3 Acoustic Black holes
We will begin by looking at acoustic analogues to black hole spacetimes. In this case, the analogy is theoretically well established, with the fluid velocity being responsible for the "curvature". Here we look at the two most well-known models - the draining bathtub and the canonical acoustic spacetimes. The (2+1)-dimensional draining bathtub, which is a rotating solution, is especially relevant for its experimental feasibility. The canonical acoustic solution is stationary and spherically symmetric, and thus offers a close analogue for Schwarzschild-like spacetimes.
### The (2+1)-dimensional draining bathtub spacetime
Starting with equation (2) a draining bathtub model in (2+1) dimensions can be constructed that has a sink at the origin. The conditions imposed on the fluid are consistent with the velocity potential
\[\vec{v}=\frac{(A\hat{r}+B\hat{\theta})}{r} \tag{10}\]
It leads to the following form of the acoustic metric
\[ds^{2}=-dt^{2}+\left(dr-\frac{A}{r}dt\right)^{2}+\left(rd\theta-\frac{B}{r}dt \right)^{2} \tag{11}\]
which can be reset as
\[ds^{2}=-\left(1-\frac{A^{2}+B^{2}}{r^{2}}\right)dt^{2}-2\frac{A}{r}drdt-2Bd \theta dt+dr^{2}+r^{2}d\theta^{2} \tag{12}\]
We must note that we have implicitly assumed cylindrical symmetry and chosen a constant \(z\) slice to reduce the number of effective dimensions. It is worthwhile to explore the behaviour of the null geodesic congruences in such a scenario essentially because it admits both rotation and a vortex, and is the closest known analogue to Kerr black holes.
Geodesic congruences in the context of draining bathtub spacetimes of a similar kind have been extensively discussed in the literature by Dolan _et al_[11, 12, 13].
However, for completeness, we will briefly sketch the form of the null Raychaudhuri equation. First, we note that the rotation tensor \(\omega_{\mu\nu}\) vanishes. This is because, in the case of outward (or inward) pointing geodesics, the congruences are hypersurface-orthogonal; for such congruences \(\nabla_{\mu}k_{\nu}\) is symmetric, so its antisymmetric part, the rotation tensor, vanishes. Further, the shear tensor vanishes identically. This is related to the fact that the shear tensor \(\sigma_{\mu\nu}\) is trace-free and that in (2+1) dimensions the transverse space of the null congruence is one-dimensional, so there is no trace-free contribution.
The Raychaudhuri equation in (2+1) dimensions thus reduces to
\[\frac{d\Theta}{d\lambda}=-\Theta^{2}-R_{\mu\nu}k^{\mu}k^{\nu} \tag{13}\]
Further, the Ricci tensor vanishes. This is to be expected, as we are looking at analogues to spacetimes which are vacuum solutions to the Einstein equations. We are thus left with
\[\frac{d\Theta}{d\lambda}=-\Theta^{2} \tag{14}\]
Upon solving, \(\Theta\) comes out to be \(\frac{1}{r}\). The shortcomings of the expansion scalar in this context and their resolution in terms of the Van Vleck determinant are discussed in [14].
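As a quick consistency check on this integration, the null Raychaudhuri equation (14) can be solved symbolically; the snippet below is our illustrative addition using sympy.

```python
# Check (our addition) that dTheta/dlambda = -Theta^2 integrates to
# Theta(lambda) = 1/(lambda + const), which is 1/r once the affine
# parameter is identified with the radial coordinate.
import sympy as sp

lam = sp.symbols('lam', real=True)
Theta = sp.Function('Theta')
sol = sp.dsolve(sp.Eq(Theta(lam).diff(lam), -Theta(lam)**2), Theta(lam))
print(sol)  # Eq(Theta(lam), 1/(C1 + lam))
```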
### The Canonical Acoustic Black hole spacetime
We start with the incompressible acoustic metric (2) which reads
\[\mathrm{d}s^{2}=\frac{\rho_{0}}{c_{s}}\left[-c_{s}^{2}\ \mathrm{d}t^{2}+ \delta_{ij}\left(\mathrm{d}x^{i}-v_{0}^{i}\ \mathrm{d}t\right)\left(\ \mathrm{d}x^{j}-v_{0}^{j}\ \mathrm{d}t\right)\right] \tag{15}\]
For this case the speed of sound \(c_{s}\) is taken to be a constant, independent of time. This leads to the time independence of \(\rho_{0}\), as \(c_{s}\) merely encodes the relation between the fluid density and the corresponding pressure. Further, imposing spherical symmetry implies that \(\rho_{0}\) has to be position independent. This, by the assumed barotropic nature of the fluid, points to the position-independence of the pressure as well as of the speed of sound. Since the continuity equation then gives \(v_{0}\propto 1/r^{2}\),
we can express the fluid velocity, up to a normalisation constant \(r_{0}\), as
\[v_{0}=c_{s}\frac{r_{0}^{2}}{r^{2}} \tag{16}\]
Ignoring an irrelevant position dependent factor, we can express the acoustic metric in the form
\[ds^{2}=-c_{s}^{2}dt^{2}+\left(dr\pm c_{s}\frac{r_{0}^{2}}{r^{2}}dt\right)^{2}+ r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \tag{17}\]
At this point, we observe that on making the following coordinate transformation
\[d\tau=dt\mp\frac{r_{0}^{2}/r^{2}}{c_{s}\left[1-(r_{0}^{4}/r^{4})\right]}dr \tag{18}\]
We can map the line element (17) to the canonical structure [2]
\[ds^{2}=-c_{s}^{2}\left[1-\left(r_{0}^{4}/r^{4}\right)\right]d\tau^{2}+\frac{dr ^{2}}{1-(r_{0}^{4}/r^{4})}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right) \tag{19}\]
Remark: The \(r\) dependence of the above line element is different from that of the Schwarzschild line element. However, in the near horizon approximation (\(r\approx r_{0}\)), the \(r\) dependence, upon factorisation, can be put in a form proportional to \(1/r\).
For the rest of the section we assume, without loss of generality, the convention \(c_{s}=1\) and \(r_{0}=1\).
We now define a quantity \(\beta\) as given by
\[\beta(r)=1-\frac{1}{r^{4}} \tag{20}\]
This transforms (19) to
\[ds^{2}=-\beta(r)d\tau^{2}+(\beta(r))^{-1}dr^{2}+r^{2}d\Omega^{2} \tag{21}\]
For radial null geodesics, \(d\theta=d\phi=0\), which means that the rotation (\(\omega_{\mu\nu}\)) and shear (\(\sigma_{\mu\nu}\)) tensors do not contribute and the line element is simplified to
\[ds^{2}=-\beta(r)d\tau^{2}+(\beta(r))^{-1}dr^{2}=0 \tag{22}\]
Introducing the so-called tortoise coordinate \(r_{*}\) satisfying the differential equation
\[\frac{dr_{*}}{dr}=\sqrt{-\frac{g_{rr}}{g_{00}}}=\sqrt{\frac{1}{\beta^{2}(r)}}= \frac{1}{\beta(r)} \tag{23}\]
and which integrates to
\[r_{*}=\frac{-\ln(|r+1|)-2\tan^{-1}(r)+\ln(|r-1|)}{4}+r \tag{24}\]
the line element in terms of \(r_{*}\) becomes
\[ds^{2}=\beta(r)\left(-d\tau^{2}+dr_{*}^{2}\right)=0 \tag{25}\]
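Since explicit antiderivatives of this kind are easy to get wrong, the following short symbolic check (our addition, taking \(r>1\), i.e. outside the horizon) confirms that (24) indeed satisfies (23).

```python
# Verify (our addition) that r_* of eq. (24) obeys dr_*/dr = 1/beta(r)
# for beta(r) = 1 - 1/r^4, on the exterior region r > 1.
import sympy as sp

r = sp.symbols('r', positive=True)
beta = 1 - 1/r**4
r_star = (-sp.log(r + 1) - 2*sp.atan(r) + sp.log(r - 1))/4 + r
print(sp.simplify(sp.diff(r_star, r) - 1/beta))  # 0
```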
In terms of Eddington-Finkelstein coordinates which defines a pair of coordinates \((u,v)\)
\[u=\tau-r_{*},\quad v=\tau+r_{*} \tag{26}\]
we easily notice that \(u\) is constant for outgoing null geodesics, while \(v\) is constant for incoming null geodesics. Focusing on the outgoing geodesics, we define the corresponding tangent vector field
\[k_{\mu}=-\partial_{\mu}u=(-1,(\beta(r))^{-1},0,0) \tag{27}\]
such that
\[k^{\mu}=g^{\mu\nu}k_{\nu}=\left((\beta(r))^{-1},1,0,0\right) \tag{28}\]
The calculations are relevant for the consequences we draw at the end, hence we spell out several of the steps. The expansion scalar \(\Theta\) can now be calculated by using its representation
\[\Theta=\nabla_{\mu}k^{\mu}=\partial_{\mu}k^{\mu}+\Gamma^{\mu}_{\mu_{\alpha}}k^ {\alpha} \tag{29}\]
where the non-vanishing terms are
\[\Theta=\Gamma^{0}_{\ 01}k^{1}+\Gamma^{1}_{11}k^{1}+\Gamma^{2}_{21}k^{1}+\Gamma^{3 }_{31}k^{1} \tag{30}\]
Noting that
\[\Gamma^{0}_{\ 01}k^{1}=\frac{1}{2}\left((\beta(r))^{-1}\right)\frac{d\beta(r)}{dr}\]
\[\Gamma^{1}_{11}k^{1}=-\frac{1}{2}\left((\beta(r))^{-1}\right)\frac{d\beta(r)} {dr}\]
\[\Gamma^{2}_{21}k^{1}=\frac{1}{r}\]
\[\Gamma^{3}_{31}k^{1}=\frac{1}{r}\]
\(\Theta\) is easily estimated to be
\[\Theta=\frac{2}{r} \tag{31}\]
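This value can be cross-checked against the divergence representation (29); the sketch below (our addition) computes \(\nabla_{\mu}k^{\mu}=\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}\,k^{\mu})\) for the metric (21) with the outgoing field (28), keeping \(\beta(r)\) generic to make explicit that its form drops out.

```python
# Cross-check (our addition) of Theta = 2/r via
# nabla_mu k^mu = (1/sqrt(-g)) d_mu (sqrt(-g) k^mu), metric (21),
# outgoing null field k^mu = (1/beta, 1, 0, 0) of eq. (28).
import sympy as sp

tau, r, theta, phi = sp.symbols('tau r theta phi', positive=True)
beta = sp.Function('beta')(r)   # left generic: its explicit form cancels
sqrt_g = r**2 * sp.sin(theta)   # sqrt(-det g) for the metric (21)
k = [1/beta, 1, 0, 0]
coords = [tau, r, theta, phi]

Theta = sum(sp.diff(sqrt_g * k[i], coords[i]) for i in range(4)) / sqrt_g
print(sp.simplify(Theta))  # 2/r, independently of beta(r)
```

The fact that \(\beta(r)\) cancels here anticipates the remark made after equation (34).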
A similar calculation can be performed for incoming geodesics with \(k_{\mu}\) defined as
\[k_{\mu}=-\partial_{\mu}v \tag{32}\]
Then \(\Theta\) turns out to be
\[\Theta=-\frac{2}{r} \tag{33}\]
Notice that this is twice what we obtained for the (2+1)-dimensional spacetime. This is because in this case there is an extra contribution in the sum from the third spatial dimension.
As in the previous case, \(\sigma_{\mu\nu}\), \(\omega_{\mu\nu}\) and \(R_{\mu\nu}\) all vanish.
The Raychaudhuri equation for either case yields
\[\frac{d\Theta}{d\lambda}=-\frac{2}{r^{2}} \tag{34}\]
Equations (31), (33) and (34) match exactly the results following from the Schwarzschild metric [7]. This is because the explicit form of \(\beta(r)\) was never required in the present calculation. Note that \(\beta(r)=1-\frac{2M}{r}\) would correspond to the Schwarzschild case. Thus our result for null congruences would hold more generally for any time independent, spherically symmetric metric which satisfies the form prescribed in equation (21).
As a trivial consequence, we see that the above result holds for \(\beta(r)=1\). Within the acoustic framework, this can be obtained by taking the fluid velocity 3-field \(v_{0}=0\)
everywhere in equation (2) and further taking \(c_{s}\) to be a constant in time. This is analogous to the Minkowski spacetime. Therefore, in homogeneous fluids, the dynamics of high-frequency sound waves can be studied using the Raychaudhuri equation. It would be interesting to study non-radial geodesics for this case, which we shall try to explore in a future work.
## 4 Acoustic FLRW cosmologies
We now turn towards an analogue cosmological model obtained from the acoustic metric. As Barcelo _et al_[15] pointed out, the appropriate acoustic analogue of a flat FLRW metric, is obtained by keeping the background fluid at rest (\(v_{0}=0\)) and instead varying the speed of the excitations (\(c_{s}(t)\)) within them.
Starting with equation (2), in spherical coordinates two more conditions are imposed.
1. The background fluid is considered to be at rest, \(v_{0}=0\). The continuity equation gives that \(\rho_{0}\) is a constant.
2. Spatial homogeneity is assumed. This implies that \(c_{s}\) is independent of \(r\).
These lead to
\[{\rm d}s^{2}=\frac{\rho_{0}}{c_{s}(t)}\left[-c_{s}^{2}\ {\rm d}t^{2}+{\rm d} \vec{x}^{2}\right]. \tag{35}\]
Further, the metric can be rescaled by a constant factor \(\frac{c_{0}}{\rho_{0}}\). Here \(c_{0}\) is some convenient reference speed. The line element thus becomes
\[{\rm d}s^{2}=\frac{c_{0}}{c_{s}(t)}\left[-c_{s}^{2}(t)\ {\rm d}t^{2}+{\rm d}\vec{x}^{2}\right]=-c_{0}c_{s}(t)\ {\rm d}t^{2}+\frac{c_{0}}{c_{s}(t)}{\rm d}\vec{x}^{2} \tag{36}\]
In order to convert this line element into the traditional FLRW form, a pseudo time is introduced. This pseudo time \(\tau\) is related to the laboratory time \(t\) such that
\[d\tau=dt\sqrt{c_{s}(t)/c_{0}} \tag{37}\]
Equation (36) becomes
\[{\rm d}s^{2}=-c_{0}^{2}\ {\rm d}\tau^{2}+\frac{c_{0}}{c_{s}(t)}{\rm d}\vec{x}^ {2}=-c_{0}^{2}\ {\rm d}\tau^{2}+\frac{c_{0}^{2}}{\bar{c}_{s}^{2}(\tau)}{\rm d} \vec{x}^{2} \tag{38}\]
Here, \(\bar{c}_{s}(\tau)\) is the speed of sound in terms of the pseudo time. This can be obtained via the chain rule
\[\bar{c}_{s}(\tau)=\frac{d\vec{x}}{d\tau}=\frac{dt}{d\tau}\frac{d\vec{x}}{dt}= \sqrt{c_{0}c_{s}(t)} \tag{39}\]
Equation (38) is completely equivalent to the flat FLRW line element upon making the identification
\[c_{0}d\tau\sim cdt_{F}\]
and
\[\frac{c_{0}^{2}}{\bar{c}_{s}^{2}(\tau)}\sim a^{2}(t_{F})\]
Here \(c\) is the speed of light and \(t_{F}\) is the comoving FLRW time. It is worth re-emphasising that within this model the "curvature" is obtained by adding time dependence to the speed of sound. This is unlike the case for acoustic black holes, where the speed of sound was taken to be constant in time and the curvature was a consequence of inhomogeneous fluid flow. This particular analogue model is also interesting because here we have an analogy with a non-vacuum solution of GR. For works exploring analogue cosmologies see [16, 17, 18, 19].
For convenience of notation we will choose \(c_{0}=1\). Having obtained the metric, we would like to construct the various geometric quantities out of it. We shall, again, focus on radial null geodesics. For outgoing geodesics, we define the tangent vector field
\[k_{\mu}=-\partial_{\mu}\left(\bar{c}_{s}d\tau-dr\right)=(-\bar{c}_{s}(\tau),1,0,0) \tag{40}\]
Such that
\[k^{\mu}=\left(\bar{c}_{s}(\tau),\bar{c}_{s}^{2}(\tau),0,0\right) \tag{41}\]
This ensures that \(k_{\mu}k^{\mu}=0\).
Upon solving for the expansion factor, we end up with
\[\Theta=2\bar{c}_{s}^{2}(\tau)\left(-\frac{\dot{\bar{c}}_{s}(\tau)}{\bar{c}_{s }(\tau)}+\frac{1}{r}\right) \tag{42}\]
The dot represents a derivative with respect to the pseudo time \(\tau\). Note that equation (37) tells us that this pseudo time is monotonically related to the laboratory time \(t\). This is important, as it means the qualitative features of the dynamics will get carried over even when using \(t\).
There are a couple of things to notice about the expression for \(\Theta\). Firstly, for \(\dot{\bar{c}}_{s}(\tau)=0\), the expression reduces to
\[\Theta=\bar{c}_{s}^{2}(\tau)\frac{2}{r}\]
This is, up to a constant factor, the same as equation (31). This was expected, as \(\dot{\bar{c}}_{s}(\tau)=0\) would correspond to a static (and flat) acoustic spacetime. Further, the minus sign in
the \(\dot{\bar{c}}_{s}(\tau)\) term in (42) ensures that in order to get an expanding (analogue) spacetime, a decreasing speed of sound is required.
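As a concrete toy illustration (the profile is our arbitrary choice, not derived from any fluid model), take \(c_{0}=1\) and \(\bar{c}_{s}(\tau)=1/(1+\tau)\); the analogue scale factor \(a=c_{0}/\bar{c}_{s}=1+\tau\) then grows monotonically.

```python
# Toy profile (our addition): a decreasing sound speed gives an
# expanding analogue FLRW spacetime, since a(tau) = c0 / c_bar(tau).
import sympy as sp

tau = sp.symbols('tau', positive=True)
c_bar = 1/(1 + tau)        # assumed monotonically decreasing sound speed
a = 1/c_bar                # analogue scale factor with c0 = 1
print(sp.diff(a, tau))     # 1 > 0: the analogue spacetime expands
```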
Now, we move on to calculate the Ricci tensor. The relevant non-vanishing terms are
\[R_{00}=3\left(\frac{\ddot{\bar{c}}_{s}}{\bar{c}_{s}}-\frac{1}{2}\frac{\dot{\bar{c}}_{s}^{2}}{\bar{c}_{s}^{2}}\right) \tag{43}\]
and
\[R_{11}=\frac{5}{4}\frac{\dot{\bar{c}}_{s}^{2}}{\bar{c}_{s}^{4}}-\frac{\ddot{\bar{c}}_{s}}{\bar{c}_{s}^{3}} \tag{44}\]
Here, \(\bar{c}_{s}(\tau)\) is written as \(\bar{c}_{s}\) to avoid cluttering the equations.
Spatial homogeneity and isotropy together with the fact that we are looking at hypersurface orthogonal geodesics ensure that \(\sigma_{\mu\nu}\) and \(\omega_{\mu\nu}\) will vanish.
Writing out the Raychaudhuri equation in full gives us
\[\begin{split}\frac{d\Theta}{d\lambda}&=-\frac{1}{2}\Theta^{2}-R_{\mu\nu}k^{\mu}k^{\nu}\\ &=\ddot{\bar{c}}_{s}\left(\frac{1}{\bar{c}_{s}}-3\bar{c}_{s}\right)+\dot{\bar{c}}_{s}\left(\frac{7}{4}\dot{\bar{c}}_{s}-2\bar{c}_{s}^{2}\dot{\bar{c}}_{s}+\frac{4}{r\bar{c}_{s}^{3}}\right)+\frac{2\bar{c}_{s}^{4}}{r^{2}}\end{split} \tag{45}\]
This equation puts forward the various competing terms generating the dynamics of the expansion scalar. As expected, there is a contribution from the second derivative of the scale factor (the inverse of \(\bar{c}_{s}\)). It could be interesting to think about this equation in a fluid-mechanical context, as it might give alternative geometry-based explanations of known results.
However, this equation does not quite resemble the form of the null Raychaudhuri equation as is usually seen in cosmology textbooks. This is so because, in writing down that particular form of the Raychaudhuri equation, the Friedmann equation is implicitly used. Our inability to do that comes from the fact that the acoustic analogue of gravity can mimic only the kinematic effects of GR. That is, within the acoustic framework the dynamics of GR, namely the Einstein equations and consequently the (first) Friedmann equation, cannot be reproduced. Hence we do not expect here to obtain the form involving terms from the energy-momentum tensor.
## 5 Summary
In this work, we emphasised the robustness of the Raychaudhuri equation by pointing out its relevance in an analogue spacetime situation where we did not have to make any use of the Einstein equations or the null energy condition [20, 21]. We noted the similarities and differences in the results we obtained here with those in GR.
Therefore, we pointed out the limits of the acoustic analogue for gravity, which is well known to mimic only the kinematic effects of GR [22]. Finally, we provided an illustrative example of how the mathematical methods constructed for GR could be used to understand the behaviour of sound in fluids.
We would also like to mention a query which we believe deserves some investigation. It is the following: upon quantising the fluid excitations, we end up with phonons. Analysing the validity and consequences of the Quantum Raychaudhuri Equation (QRE) [23] in this framework could be valuable for practitioners of both analogue gravity and phonon-based physics. It could possibly be of interest to physicists studying the foundations of quantum mechanics, as the QRE is set within the Bohmian [24, 25] formulation of Quantum Mechanics.
## Acknowledgements
I wish to thank Prof. Bijan Bagchi for the insightful discussions and for his comments on the early draft of this manuscript. I also wish to thank Rahul Ghosh and Sauvik Sen for their valuable feedback.
|
2307.08839 | Multishot Adversarial Network Decoding | We investigate adversarial network coding and decoding focusing on the
multishot regime. Errors can occur on a proper subset of the network edges and
are modeled via an adversarial channel. The paper contains both bounds and
capacity-achieving schemes for the Diamond Network and the Mirrored Diamond
Network. We also initiate the study of the generalizations of these networks. | Giuseppe Cotardo, Gretchen L. Matthews, Alberto Ravagnani, Julia Shapiro | 2023-07-17T20:59:14Z | http://arxiv.org/abs/2307.08839v1 | # Multishot Adversarial Network Decoding
###### Abstract
We investigate adversarial network coding and decoding focusing on the multishot regime. Errors can occur on a proper subset of the network edges and are modeled via an adversarial channel. The paper contains both bounds and capacity-achieving schemes for the Diamond Network and the Mirrored Diamond Network. We also initiate the study of the generalizations of these networks.
network decoding, adversarial network, multishot capacity
## I Introduction
Network coding is a communication strategy that outperforms routing [4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. In that context, sources attempt to transmit information packets to one or more terminals through a network of intermediate nodes. In [1], the authors introduced the problem of computing the capacity of a network where errors can occur on a proper subset of the network edges. This scenario was modeled via an adversarial channel and a generalized Singleton Bound for the one-shot capacity (i.e., the largest amount of information packets that can be sent to all terminals in a single use of the network) of networks with restricted adversaries was established.
This study was furthered in [2, 3, 5], where the authors introduced the concept of _network decoding_ as a necessary strategy to achieve capacity in networks with a restricted adversary. In [5] five elementary families of networks were introduced, together with upper bounds on their one-shot capacity and communication schemes. The Diamond Network and the Mirrored Diamond Network are examples from these elementary families. The results of [5] demonstrate that when an adversary is confined to operate on a vulnerable region of the network, end-to-end communication strategies combined with linear network coding are sub-optimal.
In this paper, we initiate the study of the _multishot capacity_ of networks with restricted adversaries. In other words, we wish to compute the largest amount of information packets that can be sent to all terminals on average over multiple uses of the network. We focus on the Diamond Network and the Mirrored Diamond Network as building blocks of a more general theory that will be presented in an extended version of this work.
Our work gives insight into when using a network multiple times for communication allows us to send more information than in one use of a network with a large alphabet. We show that the multishot capacity of the Diamond Network has a strict increase in comparison to its one-shot capacity in one adversarial model. On the other hand, for the Mirrored Diamond Network and generalizations, the maximum capacity is the same over multiple uses of these networks.
This paper is organized as follows. In Section II we introduce the notation and background to be used throughout the paper. Section III establishes the exact multishot capacity of the Diamond Network, the Mirrored Diamond Network and the families of networks introduced in [3, 5], under the assumption that the adversary cannot change the edges attacked in each transmission round. In Section IV, we study the multishot capacities of these networks assuming that the adversary has freedom to change the edges attacked in each transmission round. In Section V, we include open questions and ideas for future work.
## II Network Decoding
This section includes preliminary definitions and results related to adversarial networks. We establish the notation that will be used throughout the paper.
In the following, we let \(q\) be a prime power and \(\mathbb{F}_{q}\) the finite field with \(q\) elements. We recall the definition of **network** proposed in [1, Definition 38].
A **network** is a 4-tuple \(\mathscr{N}=(\mathscr{V},\mathscr{E},S,\mathbf{T})\) where \((\mathscr{V},\mathscr{E})\) is a directed, acyclic and finite multigraph, \(S\in\mathscr{V}\) is the **source** and \(\mathbf{T}\subset\mathscr{V}\) is the set of **terminals**. We also assume that there exists a directed path from \(S\) to any \(T\in\mathbf{T}\), that \(|\mathbf{T}|\geq 1\), that \(S\notin\mathbf{T}\), and that for every \(V\in\mathscr{V}\setminus(\{S\}\cup\mathbf{T})\) there exists a directed path from \(S\) to \(V\) and from \(V\) to one of the terminals \(T\in\mathbf{T}\). The elements of \(\mathscr{V}\) are called **vertices** (or **nodes**) and the ones in \(\mathscr{E}\) are called **edges**. We refer to the elements of \(\mathscr{V}\setminus(\{S\}\cup\mathbf{T})\) as **intermediate vertices** (or **intermediate nodes**). We denote by \(\mathrm{in}(V)\) (respectively, \(\mathrm{out}(V)\)) the set of edges coming into (respectively, going out of) a vertex \(V\in\mathscr{V}\), and by \(\deg^{+}(V)\) (respectively, \(\deg^{-}(V)\)) its cardinality, called the **indegree** (respectively, **outdegree**) of \(V\).
Each edge of the network \(\mathscr{N}\) carries one element from an **alphabet**\(\mathscr{A}\). We assume that \(\mathscr{A}\) has cardinality at least \(2\). The vertices in the network \(\mathscr{N}\) receive symbols from \(\mathscr{A}\) over the incoming edges, process them according to functions, and then send the outputs over the outgoing edges. We model errors in the transmission as being introduced by an adversary \(\mathbf{A}\) who can corrupt up to \(t\) edges from a fixed subset \(\mathscr{U}\subseteq\mathscr{E}\). The adversary can change a symbol sent across one of the edges of \(\mathscr{U}\) to any other symbol of the alphabet. We call the pair \((\mathscr{N},\mathbf{A})\) an **adversarial network**.
**Definition II.1** ([1, Definition 1]).: An **adversarial channel** is a map \(\Omega:\mathscr{X}\to 2^{\mathscr{Y}}\), where \(\mathscr{X}\) and \(\mathscr{Y}\) are the **input** and **output** alphabets respectively and \(\mathscr{X},\mathscr{Y}\neq\emptyset\). The notation for this type of channel is \(\Omega:\mathscr{X}\dashrightarrow\mathscr{Y}\).
**Definition II.2** ([1, Definition 40]).: A **network code**\(\mathscr{F}\) for \(\mathscr{N}\) is a family of functions \(\{\mathscr{F}_{V}:V\in\mathscr{V}\setminus(\{S\}\cup\mathbf{T})\}\), where \(\mathscr{F}_{V}:\mathscr{A}^{\text{deg}^{+}(V)}\to\mathscr{A}^{\text{deg}^{-}(V)}\) for all \(V\).
The functions in \(\mathscr{F}\) describe how each node of \(\mathscr{N}\) processes the information coming from its incoming edges. This can be uniquely interpreted according to a partial ordering. Let \(e_{1}\), \(e_{2}\) be edges in a network. We say \(e_{1}\in\mathscr{E}\)**precedes**\(e_{2}\in\mathscr{E}\) (\(e_{1}\preceq e_{2}\)) if there exists a directed path in \((\mathscr{V},\mathscr{E})\) with starting edge \(e_{1}\) and ending edge \(e_{2}\). It is well known that this partial order can be extended to a total order on \(\mathscr{E}\), denoted by \(\leq\). Let \(\mathscr{U},\mathscr{U}^{\prime}\subseteq\mathscr{E}\) be subsets of edges in a network. As in [3, Definition 3], we say that \(\mathscr{U}\)**precedes**\(\mathscr{U}^{\prime}\) if every path from \(S\) to an edge of \(\mathscr{U}^{\prime}\) contains an edge of \(\mathscr{U}\).
**Definition II.3** ([3, Definition 4]).: Let \((\mathscr{N},\mathbf{A})\) be an adversarial network and let \(\mathscr{U},\mathscr{U}^{\prime}\subseteq\mathscr{E}\) be non empty with \(\mathscr{U}\) preceding \(\mathscr{U}^{\prime}\). Let \(\mathscr{F}\) be a network code for \(\mathscr{N}\). For \(x\in\mathscr{A}^{|\mathscr{U}|}\), we denote the set of vectors over the alphabet that can be exiting the edges of \(\mathscr{U}^{\prime}\) as
\[\Omega[\mathscr{N},\mathbf{A},\mathscr{F},\mathscr{U}\to\mathscr{U}^{\prime}](x )\subseteq\mathscr{A}^{|\mathscr{U}^{\prime}|}\]
We note that the vertices process the information according to the choice of \(\mathscr{F}\), that the coordinates of \(x\) are the values from \(\mathscr{A}\) entering the edges of \(\mathscr{U}\), and that we consider the total order \(\leq\) throughout.
**Definition II.4** ([3, Definition 6]).: An **outer code** for \(\mathscr{N}\) is a subset \(C\subseteq\mathscr{A}^{\deg^{-}(S)}\) with \(|C|\geq 1\). We say that \(C\) is **unambiguous** (or **good**) for \((\mathscr{N},\mathbf{A},\mathscr{F})\) if for all \(x,y\in C\) with \(x\neq y\) and for all \(T\in\mathbf{T}\) we have
\[\Omega[\mathscr{N},\mathbf{A},\mathscr{F},\text{out}(S)\to\text{in}(T)](x)\cap\\ \Omega[\mathscr{N},\mathbf{A},\mathscr{F},\text{out}(S)\to\text{in}(T)](y)=\emptyset.\]
We note that the intersection being empty guarantees that every element of \(C\) can be uniquely recovered by every terminal, despite the actions the adversary takes. Next we define the notion of one-shot capacity of an adversarial network.
**Definition II.5** ([5, Definition 3.18]).: The **one-shot capacity** of an adversarial network \((\mathscr{N},\mathbf{A})\) is the maximum \(\alpha\in\mathbb{R}\) such that there exist an unambiguous code \(C\) and a network code \(\mathscr{F}\) with \(\alpha=\log_{|\mathscr{A}|}(|C|)\). We denote this maximum by \(C_{1}(\mathscr{N},\mathbf{A})\).
The following result, proved in [1], gives an upper bound on the one-shot capacity of a network \(\mathscr{N}\) under the presence of an adversary who is restricted to corrupting a proper subset of its edges.
**Theorem II.6** (The Singleton Cut-Set Bound - [1, Theorem 67]).: Let \(t\geq 0\) and suppose that \(\mathbf{A}\) can corrupt up to \(t\) edges from a subset \(\mathscr{U}\subseteq\mathscr{E}\). Then
\[C_{1}(\mathscr{N},\mathbf{A})\leq\min_{T\in\mathbf{\mathrm{T}}}\min_{ \mathscr{E}^{\prime}} \left(|\mathscr{E}^{\prime}\setminus\mathscr{U}|\right.\\ +\max\{0,|\mathscr{E}^{\prime}\cap\mathscr{U}|-2t\}\right)\]
where \(\mathscr{E}^{\prime}\subseteq\mathscr{E}\) ranges over all edge-cuts between \(S\) and \(T\).
This work focuses on the **multishot capacity** of an adversarial network \((\mathscr{N},\mathbf{A})\). A formal definition, which extends [5, Definition 3.18], is the following.
**Definition II.7**.: Let \(i\) be a positive integer. The **i-th shot capacity** of \((\mathscr{N},\mathbf{A})\) is the maximum \(\alpha\in\mathbb{R}\) such that there exists an unambiguous code \(C\) for
\((\mathscr{N},\mathbf{A})\) with \(\alpha=\frac{\log_{|\mathscr{A}|}(|C|)}{i}\). We denote this maximum value by \(\mathsf{C}_{i}(\mathscr{N},\mathbf{A})\).
Using the network multiple times can also be modeled as the _power channel_ associated to it. We can formalize it as follows.
**Definition II.8** ([1, Definition 10]).: Let \(i\geq 1\) be an integer. The **i-th power** of a channel \(\Omega:\mathscr{X}\dashrightarrow\mathscr{Y}\) is represented by the channel
\[\Omega^{i}:=\underbrace{\Omega\times\cdots\times\Omega}_{i\text{ times}}:\mathscr{X}^{i}\dashrightarrow\mathscr{Y}^{i}.\]
We also recall the definitions of the operations of _product_ and _concatenation_ of channels introduced in [1]; see [1, Definitions 7 and 14] respectively.
Let \(\Omega_{1}:\mathscr{X}_{1}\dashrightarrow\mathscr{Y}_{1}\) and \(\Omega_{2}:\mathscr{X}_{2}\dashrightarrow\mathscr{Y}_{2}\) be channels and assume that \(\mathscr{Y}_{1}\subseteq\mathscr{X}_{2}\). The **product** of \(\Omega_{1}\) and \(\Omega_{2}\) is the channel \(\Omega_{1}\times\Omega_{2}:\mathscr{X}_{1}\times\mathscr{X}_{2}\dashrightarrow \mathscr{Y}_{1}\times\mathscr{Y}_{2}\) defined by
\[(\Omega_{1}\times\Omega_{2})(x_{1},x_{2}):=\Omega_{1}(x_{1})\times\Omega_{2}( x_{2}),\]
for all \((x_{1},x_{2})\in\mathscr{X}_{1}\times\mathscr{X}_{2}\). The **concatenation** of \(\Omega_{1}\) and \(\Omega_{2}\) is the channel \(\Omega_{1}\blacktriangleright\Omega_{2}:\mathscr{X}_{1}\dashrightarrow \mathscr{Y}_{2}\) defined by
\[(\Omega_{1}\blacktriangleright\Omega_{2})(x):=\bigcup_{y\in\Omega_{1}(x)}\Omega _{2}(y).\]
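For concreteness, the product, concatenation and power operations can be prototyped in a few lines; the snippet below is our illustrative addition, modelling channels as set-valued functions, with a toy bit-flip channel as the running example.

```python
# Toy formalization (our addition) of adversarial channels as set-valued
# maps X -> 2^Y, with the product, concatenation and power operations.
from itertools import product

def channel_product(om1, om2):
    """(Omega1 x Omega2)(x1, x2) = Omega1(x1) x Omega2(x2)."""
    return lambda x1, x2: set(product(om1(x1), om2(x2)))

def channel_concat(om1, om2):
    """(Omega1 |> Omega2)(x) = union of Omega2(y) over y in Omega1(x)."""
    return lambda x: {z for y in om1(x) for z in om2(y)}

def channel_power(om, i):
    """i-th power: the i-fold product of Omega with itself on i-tuples."""
    return lambda xs: set(product(*(om(x) for x in xs)))

flip = lambda x: {x, 1 - x}            # toy channel on the alphabet {0, 1}
print(channel_power(flip, 2)((0, 1)))  # all four pairs of {0, 1}^2
print(channel_concat(flip, flip)(0))   # {0, 1}
```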
The following proposition provides a lower bound on the one-shot capacity of product channels and establishes a connection between the one-shot capacity and the \(i-\)th shot capacity of a network.
**Proposition II.9** ([1, Proposition 12]).: For a channel \(\Omega:\mathscr{X}\dashrightarrow\mathscr{Y}\) and any \(i\geq 1\), we have
\[C_{1}(\Omega^{i})\geq i\cdot C_{1}(\Omega).\]
We restrict to networks with a single source \(S\) and we follow the notation introduced in [5, Section V]. A network \(\mathscr{N}=(\mathscr{V},\mathscr{E},S,\mathbf{T})\) is **simple** if it has only one terminal \(T\), i.e. \(\mathbf{T}=\{T\}\). We say that \(\mathscr{N}\) is a **simple two-level network** if every path from \(S\) to \(T\) has length \(2\). Let \(\mathscr{N}\) be a simple two-level network with \(j\geq 1\) intermediate nodes. In the sequel we follow [5, Notations 5.3 and Example 5.4], and refer the reader to that paper for the details. Let \([x_{1},\dots,x_{j}]\) and \([y_{1},\dots,y_{j}]^{\top}\) be the matrix representation of the graph induced by the source, the intermediate nodes, and the terminal respectively. We denote \(\mathscr{N}\) by \(([x_{1},\dots,x_{j}],[y_{1},\dots,y_{j}])\).
In the remainder of the paper, we will focus on particular families of networks introduced in [2, 3] and [5]. We recall their definitions.
Let \(\mathscr{D}\) be the network in Figure 1 and consider an adversary \(\mathbf{A}_{\mathscr{D}}\) able to corrupt at most one of the dashed edges. We call the pair \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\) the **Diamond Network**. It was shown in [3, Section III] and [2] that this is the smallest example of a network that does not meet the Singleton Cut-Set Bound proved in [1]. In particular, the following holds.
**Theorem II.10** ([3, Theorem 13]).: For any alphabet \(\mathscr{A}\), we have \(C_{1}(\mathscr{D},\mathbf{A}_{\mathscr{D}})=\log_{|\mathscr{A}|}(|\mathscr{A}|-1)\).
In [3, Sections 3 and 4], it was proved that the network obtained by adding an edge to the Diamond Network, as in Figure 2, attains the Singleton Cut-Set Bound. The pair \((\mathscr{S},\mathbf{A}_{\mathscr{S}})\) is called the **Mirrored Diamond Network**, where \(\mathbf{A}_{\mathscr{S}}\) again represents an adversary able to corrupt at most one dashed edge.
**Theorem II.11** ([3, Proposition 14]).: For any alphabet \(\mathscr{A}\), we have \(C_{1}(\mathscr{S},\mathbf{A}_{\mathscr{S}})=1\).
The Diamond Networks were further generalized in [5, Section V.C]. Let \(t\geq 1\) be an integer and define the simple two-level networks \(\mathfrak{C}_{t}=([t,t+1],[t,t])\) and \(\mathfrak{D}_{t}=([2t,2t],[1,1])\). It is not hard to check that \(\mathfrak{C}_{1}\) and \(\mathfrak{D}_{1}\) recover the Diamond Network and the Mirrored Diamond Network respectively.
We let \(i\in\mathbb{N}\) represent the number of uses of each network. The remainder of the paper is organized according to the following adversarial models.
Fig. 1: The Diamond Network \(\mathscr{D}\)
Fig. 2: The Mirrored Diamond Network \(\mathscr{S}\)
1. The adversary corrupts the same \(t\) edges over the multiple uses of the network.
2. The adversary can change the \(t\) edges to corrupt over the multiple uses of the network.
## III Multishot Regime I
In this section, we study the multishot capacity of the Diamond and Mirrored Diamond Network introduced in [3] along with the families of networks introduced in [5], in Scenario A.1.
### _The Diamond Network_
We assume that \(t=1\) in this section. We start by noticing that, by reusing the strategy previously proposed in [3, Proposition 11], one can easily show that \(C_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})\geq\log_{|\mathscr{A}|}(|\mathscr{A}|-1)\). For the remainder of the paper, we let \(\Omega[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{out}(S)\to\text{in}(T)]\) be the channel representing the transfer from \(S\) to \(T\) of the Diamond Network, as in Definition II.3. The aim of this section is to explicitly compute the multishot capacity of the Diamond Network in Scenario A.1. We provide a construction of an unambiguous code of cardinality \(|\mathscr{A}|^{i}-1\) for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) that models \(i\) transmission rounds and we show that this is the maximum possible in this setting. In particular, we prove that
\[\text{C}_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})=\frac{\log_{|\mathscr{A}|}(| \mathscr{A}|^{i}-1)}{i}\]
We start with the following result.
**Proposition III.1**.: There exist a network code \(\mathscr{F}\) for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\) and an unambiguous code \(C\subseteq(\mathscr{A}^{i})^{3}\) for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) with \(|C|=|\mathscr{A}|^{i}-1\).
Proof.: Let \(\star\) be a symbol of the alphabet \(\mathscr{A}\). We want to show that the code
\[C=\{(a\mid a\mid a):a\in\mathscr{A}^{i}\setminus\{(\star,\ldots,\star)\}\}\subseteq(\mathscr{A}^{i})^{3}\]
is unambiguous for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\). At each round, we use the same network code \(\mathscr{F}\) as in the proof of [3, Proposition 11]. Suppose the adversary corrupts \(e_{1}\) and recall that in this case the adversary is forced to change the symbol. One can check that, for any \(a\in\mathscr{A}^{i}\), we have
\[\Omega^{i}[\mathscr{D},\mathbf{A},\mathscr{F},\text{out}(S)\to\text{in}(T)]((a\mid a \mid a))=\{(b\mid a)\}\]
for some \(b\in\mathscr{A}^{i}\setminus\{(\star,\ldots,\star)\}\). On the other hand, if the adversary corrupts \(e_{2}\) or \(e_{3}\) then, for any \(a\in\mathscr{A}^{i}\setminus\{(\star,\ldots,\star)\}\), we get
\[\Omega^{i}[\mathscr{D},\mathbf{A},\mathscr{F},\text{out}(S)\to\text{ in}(T)]((a\mid a\mid a))=\\ \{(a\mid\star,\ldots,\star)\}.\]
It follows that, for any \(c,c^{\prime}\in C\), we have
\[\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{out}(S)\to\text{in}(T)](c)\cap\\ \Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{out}(S)\to\text{in}(T)](c^{\prime})\neq\emptyset\]
if and only if \(c=c^{\prime}\), which implies that the code \(C\) is unambiguous. This concludes the proof.
Note that the appropriateness of the construction of \(C\) in Proposition III.1 relies on the adversary not being able to change the edge attacked, meaning we know the exact location of the adversary for the next \(i-1\) transmission rounds. Therefore, the strategy provided in [3, Proposition 11] can be applied to \(i\) transmission rounds by modeling the unambiguous code in \((\mathscr{A}^{i})^{3}\). The lower bound provided in Proposition III.1 is a strict improvement on the lower bound provided by Proposition II.9. This shows that we can send more information over multiple use of the Diamond Network.
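To make the strategy concrete, the following brute-force simulation is our sketch of the one-shot scheme (the node functions are our rendering and may differ from [3, Proposition 11] in inessential details): the source repeats \(a\neq\star\) on \(e_{1},e_{2},e_{3}\), the node \(V_{1}\) forwards its input, \(V_{2}\) forwards the common value of \(e_{2},e_{3}\) or flags a mismatch with \(\star\), and \(T\) trusts \(e_{5}\) unless it reads \(\star\).

```python
# Brute-force check (our addition) that the repetition scheme decodes
# correctly under every single-edge attack on {e1, e2, e3}.
from itertools import product

STAR = 0                    # reserved alphabet symbol
ALPHABET = range(4)         # alphabet A with |A| = 4

def transfer(a, attacked_edge, injected):
    """Return (e4, e5) seen by T when `attacked_edge` carries `injected`."""
    e1, e2, e3 = a, a, a
    if attacked_edge == 1: e1 = injected
    if attacked_edge == 2: e2 = injected
    if attacked_edge == 3: e3 = injected
    e4 = e1                              # V1 forwards its input
    e5 = e2 if e2 == e3 else STAR        # V2 compares and flags mismatches
    return e4, e5

def decode(e4, e5):
    return e4 if e5 == STAR else e5

ok = all(decode(*transfer(a, edge, z)) == a
         for a in ALPHABET if a != STAR  # the source never sends star
         for edge, z in product((1, 2, 3), ALPHABET))
print(ok)  # True: every codeword is recovered under any one-edge attack
```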
The action of the adversary on \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\) is modeled by the channel \(\text{H}_{\mathscr{D}}:\mathscr{A}^{3}\dashrightarrow\mathscr{A}^{3}\) defined as \(\text{H}_{\mathscr{D}}(x):=\{y\in\mathscr{A}^{3}:d^{\text{H}}(x,y)\leq 1\}\) for all \(x\in\mathscr{A}^{3}\), where \(d^{\text{H}}\) denotes the Hamming distance. Therefore, a code \(C\subseteq\mathscr{A}^{3}\) is unambiguous for \(\text{H}_{\mathscr{D}}\) if and only if \(d^{\text{H}}(C)=3\). We can extend this to \(i\) uses of the network and describe the adversary as the power channel \(\text{H}_{\mathscr{D}}^{i}\). Let \(x=(x_{1},\ldots,x_{3i})\in(\mathscr{A}^{3})^{i}\) and define \(x^{(j)}:=(x_{3j+1},x_{3j+2},x_{3j+3})\) for all \(j\in\{0,\ldots,i-1\}\). Then, for any \(x\in(\mathscr{A}^{3})^{i}\), we have
\[\text{H}^{i}_{\mathscr{D}}(x)=\{y\in(\mathscr{A}^{3})^{i}:d^{\text{H}}(x^{(j)},y^{(j)})\leq 1\\ \text{ for all }j\in\{0,\ldots,i-1\}\}.\]
and hence \(C\subseteq(\mathscr{A}^{3})^{i}\) is a good code for \(\text{H}^{i}_{\mathscr{D}}\) if and only if for all \(x,y\in C\), with \(x\neq y\), we have \(d^{\text{H}}(x^{(j)},y^{(j)})=3\) for some \(j\in\{0,\ldots,i-1\}\).
Let \(s\in\{1,2,3\}\) and define \(\pi_{s}^{i}:\mathscr{A}^{3i}\longrightarrow\mathscr{A}^{i}\) to be the projection onto the components in the set \(\{3j+s:j\in\{0,\ldots,i-1\}\}\). Intuitively, these are the components corresponding to the edge \(e_{s}\) in each round. The following generalizes [3, Claim A].
**Lemma III.2**.: Let \(\mathscr{F}\) be a network code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\). If \(C\subseteq\mathscr{A}^{3i}\) is an unambiguous code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) then \(|\pi_{1}^{i}(C)|=|C|\).
Proof.: We already showed that \(C\) is unambiguous for the Diamond Network if and only if for any \(x,y\in C\), with \(x\neq y\), \(d^{\text{H}}(x^{(j)},y^{(j)})=3\) for some \(j\in\{0,\ldots,i-1\}\). Suppose, towards a contradiction, that there exist \(x,y\in C\) with \(x\neq y\) and \(\pi_{1}^{i}(x)=\pi_{1}^{i}(y)\). It implies that \(x_{3j+1}=y_{3j+1}\) for every \(j\) and
therefore \(d^{\text{H}}(x^{(j)},y^{(j)})\leq 2\) for all \(j\in\{0,\ldots,i-1\}\). This leads to a contradiction.
The following result generalizes [3, Claim B].
**Lemma III.3**.: Let \(\mathscr{F}\) be a network code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\). If \(C\subseteq(\mathscr{A}^{i})^{3}\) is an unambiguous code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) then the restriction of \(\mathscr{F}_{V_{1}}\) to \(\pi_{1}^{i}(C)\) is injective.
Proof.: Suppose, towards a contradiction, that there exist \(x,y\in C\) such that \(x\neq y\) and \(\mathscr{F}_{V_{1}}(\pi_{1}^{i}(x))=\mathscr{F}_{V_{1}}(\pi_{1}^{i}(y))\). One can check that the vector
\[\big{(}\mathscr{F}_{V_{1}}(\pi_{1}^{i}(x))\mid\mathscr{F}_{V_{2}}\left(\pi_{2 }^{i}(x)\mid\pi_{3}^{i}(y)\right)\big{)}\in\mathscr{A}^{2i}\]
is in \(\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{out}(S)\to\text {in}(T)](x)\cap\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{ out}(S)\to\text{in}(T)](y)\). This implies that \(C\) is not unambiguous for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\) which is a contradiction.
Following the notation in [3], we let
\[\overline{\Omega}^{i}:= \Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\{e_{1},e_{2},e_{3}\}\to\{e_{2},e_{3}\}],\] \[\Omega^{i}:= \Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\{e_{1},e_{2},e_{3}\}\to\{e_{5}\}]\]
which is well-defined since \(\{e_{1},e_{2},e_{3}\}\) precedes \(\{e_{5}\}\). The following result generalizes [3, Claim C].
**Lemma III.4**.: Let \(\mathscr{F}\) be a network code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\). If \(C\subseteq\mathscr{A}^{3i}\) is an unambiguous code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) then there exists at most one element \(x\in C\) such that \(|\Omega^{i}(x)|=1\).
Proof.: Suppose, towards a contradiction, that there exist \(x,y\in C\) such that \(|\Omega^{i}(x)|=|\Omega^{i}(y)|=1\) and \(x\neq y\). It implies \(|\mathscr{F}_{V_{2}}(\overline{\Omega}^{i}(x))|=|\mathscr{F}_{V_{2}}(\overline {\Omega}^{i}(y))|=1\). Since \((\pi_{2}^{i}(x)\mid\pi_{3}^{i}(x)),(\pi_{2}^{i}(x)\mid\pi_{3}^{i}(y))\in \overline{\Omega}^{i}(x)\) and \((\pi_{2}^{i}(y)\mid\pi_{3}^{i}(y)),(\pi_{2}^{i}(x)\mid\pi_{3}^{i}(y))\in \overline{\Omega}^{i}(y)\), we have
\[\mathscr{F}_{V_{2}}(\pi_{2}^{i}(x)\mid\pi_{3}^{i}(x)) =\mathscr{F}_{V_{2}}(\pi_{2}^{i}(x)\mid\pi_{3}^{i}(y))\] \[=\mathscr{F}_{V_{2}}(\pi_{2}^{i}(y)\mid\pi_{3}^{i}(y)).\]
It follows that
\[\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{ out}(S)\to\text{in}(T)](x)\cap\] \[\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\text{ out}(S)\to\text{in}(T)](y)\neq\emptyset\]
since the adversary can corrupt the edge \(e_{1}\). This is a contradiction.
We are now ready to prove a result analogous to [3, Proposition 12].
**Proposition III.5**.: Let \(\mathscr{F}\) be a network code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\). If \(C\subseteq\mathscr{A}^{3i}\) is an unambiguous code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) then
\[|C|^{2}+|C|-1-|\mathscr{A}|^{2i}\leq 0.\]
In particular, we have \(|C|\leq|\mathscr{A}|^{i}-1\).
Proof.: For ease of notation, we define
\[\widehat{\Omega}^{i}:=\Omega^{i}[\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F},\{e_{1},e_{2},e_{3}\}\to\{e_{4},e_{5}\}]\]
and, writing \(y=(y^{\prime}\mid y^{\prime\prime})\in\mathscr{A}^{i}\times\mathscr{A}^{i}\) for the values carried by \(e_{4}\) and \(e_{5}\),
\[\widehat{\Omega}^{i}_{1}(x):=\{y\in\widehat{\Omega}^{i}(x):y^{\prime}=\mathscr{F}_{V_{1}}(\pi_{1}^{i}(x))\},\]
\[\widehat{\Omega}^{i}_{2}(x):=\{y\in\widehat{\Omega}^{i}(x):y^{\prime\prime}=\mathscr{F}_{V_{2}}(\pi_{2}^{i}(x)\mid\pi_{3}^{i}(x))\}\]
for any \(x\in C\). Then \(\widehat{\Omega}^{i}(x)=\widehat{\Omega}^{i}_{1}(x)\cup\widehat{\Omega}^{i}_{2}(x)\) and, since the two sets intersect exactly in the uncorrupted output, \(|\widehat{\Omega}^{i}(x)|=|\widehat{\Omega}^{i}_{1}(x)|+|\widehat{\Omega}^{i}_{2}(x)|-1\). Using the three lemmas above we get
\[\sum_{x\in C}|\widehat{\Omega}^{i}(x)| \geq 1+2(|C|-1)+\sum_{x\in C}|C|-|C|\] \[=2|C|-1+|C|^{2}-|C|\] (III.1) \[=|C|^{2}+|C|-1.\]
On the other hand, since the code is unambiguous, we have
\[\sum_{x\in C}|\widehat{\Omega}^{i}(x)|\leq|\mathscr{A}|^{2i}.\]
Combining this with (III.1) we get the statement.
As a consequence of Propositions III.1 and III.5, in Scenario A.1 the maximum size of an unambiguous code for \((\mathscr{D},\mathbf{A}_{\mathscr{D}},\mathscr{F})\) is exactly \(|\mathscr{A}|^{i}-1\), and therefore \(\text{C}_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})=\frac{\log_{|\mathscr{A}|}(|\mathscr{A}|^{i}-1)}{i}\). Hence we have a gain in capacity over multiple uses of \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\).
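To quantify the gain, a quick computation (our addition) evaluates \(\text{C}_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})\) for \(|\mathscr{A}|=4\): the capacity climbs from \(\log_{4}3\approx 0.79\) at \(i=1\) towards the cut-set value \(1\).

```python
# Numerical illustration (our addition) of C_i = log_q(q^i - 1) / i,
# here with q = |A| = 4: the multishot capacity approaches 1.
from math import log

q = 4
for i in (1, 2, 5, 10):
    print(i, round(log(q**i - 1, q) / i, 4))  # 0.7925, 0.9767, 0.9999, 1.0
```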
### _The Mirrored Diamond Network_
The goal of this section is to explicitly compute the multishot capacity of the Mirrored Diamond Network in Scenario A.1. Let \(\Omega_{\mathscr{S}}:=\Omega[\mathscr{S},\mathbf{A}_{\mathscr{S}},\mathscr{F},\text{ out}(S)\to\text{in}(T)]\) be the channel that represents the transfer from \(S\) to \(T\) of \(\mathscr{S}\). Assume that \(t=1\). Applying the strategy provided in [3, Proposition V.1] one can check that \(C_{i}(\mathscr{S},\mathbf{A}_{\mathscr{S}})\geq 1\). We will show that
\[C_{i}(\mathscr{S},\mathbf{A}_{\mathscr{S}})=1\]
and hence the Singleton Cut-Set Bound can be achieved in each transmission round independently. As in the previous section, we model the action of an adversary acting on \((\mathscr{S},\mathbf{A}_{\mathscr{S}})\) by the channel \(\text{H}_{\mathscr{S}}:\mathscr{A}^{4}\dashrightarrow\mathscr{A}^{4}\) defined by \(\text{H}_{\mathscr{S}}(x):=\{y\in\mathscr{A}^{4}:d^{\text{H}}(x,y)\leq 1\}\) for all \(x\in\mathscr{A}^{4}\). One can check that the largest unambiguous code for \(\text{H}_{\mathscr{S}}\) has cardinality \(|\mathscr{A}|\). Thus \(C_{1}(\text{H}_{\mathscr{S}})=1\).
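This one-shot claim is easy to confirm exhaustively for small alphabets; the sketch below (our addition) checks for \(|\mathscr{A}|=2\) that the largest code in \(\mathscr{A}^{4}\) with minimum Hamming distance \(3\) has exactly \(2=|\mathscr{A}|\) codewords.

```python
# Exhaustive check (our addition) that the largest binary code of length 4
# with minimum Hamming distance 3 has 2 codewords, so C_1(H_S) = 1.
from itertools import product, combinations

words = list(product(range(2), repeat=4))
dist = lambda x, y: sum(a != b for a, b in zip(x, y))

best = 1
for size in (2, 3, 4):
    for cand in combinations(words, size):
        if all(dist(x, y) >= 3 for x, y in combinations(cand, 2)):
            best = max(best, size)
print(best)  # 2
```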
**Notation III.6**.: We let
\[\Omega^{\prime}_{i}:=(\Omega_{\mathscr{F}_{1}}[\text{out}(S)\to\text{in}(T)]\blacktriangleright\text{H}_{\mathscr{S}})\times\ldots\\ \times(\Omega_{\mathscr{F}_{i}}[\text{out}(S)\to\text{in}(T)]\blacktriangleright\text{H}_{\mathscr{S}})\]
represent the channel describing \(i\) uses of the network \((\mathscr{S},\mathbf{A}_{\mathscr{S}})\), where \(\mathscr{F}_{j}\) denotes the network code used in transmission round \(j\), with \(j\in\{1,\ldots,i\}\). Notice that \(\Omega^{\prime}_{i}:(\mathscr{A}^{4})^{i}\dashrightarrow(\mathscr{A}^{2})^{i}\).
We now show that there does not exist an unambiguous code \(C\subseteq(\mathscr{A}^{4})^{i}\) for the adversary channel \(\text{H}^{i}_{\mathscr{S}}\) such that \(|C|=|\mathscr{A}|^{i}+1\).
**Example III.7**.: For \(x\in(\mathscr{A}^{4})^{i}\), we let \(x^{j}:=(x_{4j-3},x_{4j-2},x_{4j-1},x_{4j})\) for all \(j\in\{1,\ldots,i\}\). The code \(C\subseteq(\mathscr{A}^{4})^{i}\) is unambiguous for \(\text{H}^{i}_{\mathscr{S}}\) if and only if, for all \(x,y\in C\) with \(x\neq y\), at least one of \(d^{\text{H}}(x^{1},y^{1}),\ldots,d^{\text{H}}(x^{i},y^{i})\) is at least \(3\). Let \(c_{1},\ldots,c_{|\mathscr{A}|^{i}+1}\) be elements in \(C\). We want to show that there are no two codewords of \(C\) that coincide in the first \(|\mathscr{A}|^{i}\) components. Assume, towards a contradiction, that \(c_{1}^{1}=c_{2}^{1}\). We can also assume that \(c_{1}=0\) without loss of generality, which implies \(c_{2}^{1}=0\). It follows that \(c_{3}^{1},\ldots,c_{|\mathscr{A}|^{i}+1}^{1}\) must have Hamming weight at least \(3\). If \(c_{3}^{1}\) had Hamming weight less than \(3\), then one among \(\{c_{1}^{2},c_{2}^{2},c_{3}^{2}\},\ldots,\{c_{1}^{i},c_{2}^{i},c_{3}^{i}\}\) would be an unambiguous code for \(\text{H}_{\mathscr{S}}\) of cardinality \(3\) with minimum Hamming distance \(3\), which contradicts the fact that \(C_{1}(\text{H}_{\mathscr{S}})=1\). On the other hand, since \(c_{3}^{1},\ldots,c_{|\mathscr{A}|^{i}+1}^{1}\) have Hamming weight at least \(3\), the Hamming distance between any two elements in \(\{c_{3}^{1},\ldots,c_{|\mathscr{A}|^{i}+1}^{1}\}\) is at most \(2\). Therefore, assuming that \(C\) is good for \(\text{H}^{i}_{\mathscr{S}}\) implies that one of \(\{c_{1}^{2},c_{2}^{2},c_{3}^{2}\},\ldots,\{c_{1}^{i},c_{2}^{i},c_{3}^{i}\}\) must be a code with cardinality \(3\) and minimum Hamming distance \(3\), which again contradicts \(C_{1}(\text{H}_{\mathscr{S}})=1\).
We are now ready to prove the main result of this section.
**Proposition III.8**.: The \(i\)-shot capacity of the Mirrored Diamond Network in Scenario A.1 is
\[C_{i}(\mathscr{S},\mathbf{A}_{\mathscr{S}})=1.\]
Proof.: We use an approach similar to the one in [1, Example 56]. Let \(\mathscr{F}\) be a network code for \((\mathscr{S},\mathbf{A}_{\mathscr{S}})\) and assume that \(\mathbf{A}\) can corrupt at most \(1\) edge from \(\{e_{1},e_{2},e_{3},e_{4}\}\). Let \(x=(x_{1},\ldots,x_{4i})\in\mathscr{A}^{4i}\). We have that
\[\Omega^{\prime}_{i}(x)=\text{H}^{i}_{\mathscr{S}}( \mathscr{F}^{1}(x_{1},x_{2},x_{3},x_{4}),\ldots,\] \[\mathscr{F}^{i}(x_{4i-3},x_{4i-2},x_{4i-1},x_{4i})).\]
We want to show that \(C_{1}(\Omega^{\prime}_{i})<\log_{|\mathscr{A}|}(|\mathscr{A}|^{i}+1)\). Assume that there exists a code \(C\subseteq(\mathscr{A}^{4})^{i}\) with \(|C|=|\mathscr{A}|^{i}+1\) which is good for \(\Omega^{\prime}_{i}\). Then
\[C^{\prime}:=\{\mathscr{F}^{1}(x_{1},x_{2},x_{3},x_{4}),\ldots,\\ \mathscr{F}^{i}(x_{4i-3},x_{4i-2},x_{4i-1},x_{4i})):x\in C\} \subseteq\mathscr{A}^{2i}\]
is an unambiguous code for \(\text{H}^{i}_{\mathscr{S}}\) with \(|C^{\prime}|=|\mathscr{A}|^{i}+1\). There must then exist \(a,a^{\prime}\in C^{\prime}\), \(a\neq a^{\prime}\), such that \((a_{1},\ldots,a_{|\mathscr{A}|^{i}})=(a^{\prime}_{1},\ldots,a^{\prime}_{|\mathscr{A}|^{i}})\). Therefore, \(C^{\prime}\) is a good code for \(\text{H}^{i}_{\mathscr{S}}\) which has two different codewords that coincide in the first \(|\mathscr{A}|^{i}\) components. However, by Example III.7, such a code does not exist. This implies that \(C_{1}(\Omega^{\prime}_{i})<\text{log}_{|\mathscr{A}|}(|\mathscr{A}|^{i}+1)\) and we get \(C_{i}(\mathscr{S},\mathbf{A}_{\mathscr{S}})\leq 1\).
It remains to show that \(C_{i}(\mathscr{S},\mathbf{A}_{\mathscr{S}})\geq 1\). This immediately follows from Proposition II.9 applied to the channel \(\Omega_{\mathscr{S}}\). In particular, we have that
\[C_{i}(\Omega_{\mathscr{S}})=\frac{C_{1}(\Omega^{i}_{\mathscr{S}})}{i}\geq\frac {iC_{1}(\Omega_{\mathscr{S}})}{i}=C_{1}(\Omega_{\mathscr{S}})=1.\]
The statement follows.
We conclude this section with the following remark on the multishot capacity of Family \(\mathfrak{C}\) and Family \(\mathfrak{D}\).
**Remark III.9**.: It is not hard to check that, for an adversary able to attack up to \(t\) edges with \(t\geq 2\), \(C_{i}(\mathfrak{C},\mathbf{A}_{\mathfrak{C}})=1\) and \(C_{i}(\mathfrak{D},\mathbf{A}_{\mathfrak{D}})=1\). This follows by arguments similar to the ones in Example III.7 and in the proof of Proposition III.8.
In summary, we showed that in Scenario A.1 there is a gain in capacity for multiple uses of the Diamond Network. In contrast, the Mirrored Diamond Network, and the networks in families \(\mathfrak{C}\) and \(\mathfrak{D}\), have a constant capacity \(1\) over multiple uses.
## IV Multishot Regime II
In this section, we study the multishot capacity of the Diamond and Mirrored Diamond Network along with the families \(\mathfrak{C}\) and \(\mathfrak{D}\) in Scenario A.2.
### _The Diamond Network_
In the main result of this section, we show that the \(i\)-shot capacity of the Diamond Network in Scenario A.2 is
\[\text{C}_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})=\text{log}_{|\mathscr{A}|}(| \mathscr{A}|-1).\]
This shows that for multiple uses of the Diamond Network, there is no gain in capacity, in contrast with
Scenario A.1. In the following examples, we provide a characterization of an unambiguous code for \(\text{H}_{\mathscr{D}}\).
**Example IV.1**.: Let \(\text{H}_{\mathscr{D}}\) be the adversarial channel for \(\mathscr{D}\) as in Section III. The source \(S\) can send any symbol of \(\mathscr{A}^{\prime}=\mathscr{A}\setminus\{\star\}\), where \(\star\) is a reserved symbol in the alphabet \(\mathscr{A}\). One can check that, under the assumption of Scenario A.2, the largest unambiguous code for \(\text{H}_{\mathscr{D}}\) has cardinality \(|\mathscr{A}|-1\). Recall that the source \(S\) cannot send \(\star\); this implies \(C_{1}(\text{H}_{\mathscr{D}})=\text{log}_{|\mathscr{A}|}(|\mathscr{A}|-1)\).
An argument similar to the one in Scenario A.1 shows that a code \(C\subseteq(\mathscr{A}^{3})^{i}\) is unambiguous for \(\text{H}_{\mathscr{D}}^{i}\) if and only if, for all \(x,y\in C\) with \(x\neq y\), we have \(d^{\text{H}}(x^{(j)},y^{(j)})\geq 3\) for all \(j\in\{1,\ldots,i\}\). In the next example, we show that there does not exist an unambiguous code \(C\) with \(|C|=(|\mathscr{A}|-1)^{i}+1\) for \(i\) uses of the Diamond Network.
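For a single use (\(i=1\)) this criterion can be verified mechanically: the following brute-force sketch (illustrative only, with \(0,\ldots,q-1\) standing for the symbols of \(\mathscr{A}\)) confirms that the largest code in \(\mathscr{A}^{3}\) with pairwise Hamming distance \(3\) is the repetition code of size \(|\mathscr{A}|\), so that reserving \(\star\) as in Example IV.1 leaves \(|\mathscr{A}|-1\) codewords.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

q = 3                                    # alphabet size |A|
words = list(product(range(q), repeat=3))
best = []

def extend(code, rest):                  # exhaustive search, feasible for small q
    global best
    if len(code) > len(best):
        best = list(code)
    for k, w in enumerate(rest):
        if all(hamming(w, c) >= 3 for c in code):
            extend(code + [w], rest[k + 1:])

extend([], words)
print(len(best), best)                   # 3 [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
```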
**Example IV.2**.: Suppose that there exists an unambiguous code \(C\subseteq(\mathscr{A}^{3})^{i}\) for \(\text{H}_{\mathscr{D}}^{i}\) with \(|C|=(|\mathscr{A}|-1)^{i}+1\). We use the same notation as in Example III.7: for any \(x=(x_{1},\ldots,x_{3i})\in(\mathscr{A}^{3})^{i}\), we let \(x^{1}:=(x_{1},x_{2},x_{3}),\ldots,x^{i}:=(x_{3i-2},x_{3i-1},x_{3i})\), and we let \(c_{1},\ldots,c_{(|\mathscr{A}|-1)^{i}+1}\) be the elements of \(C\). We claim that there are no two codewords of \(C\) that coincide in the first \((|\mathscr{A}|-1)^{i}\) components. Assume, towards a contradiction, that \(c_{1}^{1}=c_{2}^{1}\) with \(c_{1}\neq c_{2}\), and let \(c_{1}^{1}=0\) without loss of generality. This implies that \(c_{2}^{1}=0\) and that \(c_{3}^{1},\ldots,c_{(|\mathscr{A}|-1)^{i}+1}^{1}\) must have Hamming weight at least \(3\). Indeed, if \(c_{3}^{1}\) had Hamming weight less than \(3\), then one of \(\{c_{1}^{2},c_{2}^{2},c_{3}^{2}\},\ldots,\{c_{1}^{i},c_{2}^{i},c_{3}^{i}\}\) would be a code in \(\mathscr{A}^{3}\) of cardinality \(3\) with minimum Hamming distance \(3\), contradicting \(C_{1}(\text{H}_{\mathscr{D}})=\text{log}_{|\mathscr{A}|}(|\mathscr{A}|-1)\). On the other hand, since \(c_{3}^{1},\ldots,c_{(|\mathscr{A}|-1)^{i}+1}^{1}\) have Hamming weight at least \(3\), the Hamming distance between any two elements of \(\{c_{3}^{1},\ldots,c_{(|\mathscr{A}|-1)^{i}+1}^{1}\}\) is at most \(2\). Since \(C\) is unambiguous for \(\text{H}_{\mathscr{D}}^{i}\), one of \(\{c_{1}^{2},c_{2}^{2},c_{3}^{2}\},\ldots,\{c_{1}^{i},c_{2}^{i},c_{3}^{i}\}\) must then be a code with cardinality \(3\) and minimum Hamming distance \(3\), which again contradicts the capacity of \(\text{H}_{\mathscr{D}}\).
We introduce the following notation.
**Notation IV.3**.: We let
\[\Omega_{i}:=(\Omega_{\mathscr{F}_{1}}[\text{out}(S)\to\text{in}(T) ]\blacktriangleright\text{H}_{\mathscr{D}})\times\ldots\\ \times(\Omega_{\mathscr{F}_{i}}[\text{out}(S)\to\text{in}(T)] \blacktriangleright\text{H}_{\mathscr{D}})\]
be the channel modeling \(i\) uses of the network \((\mathscr{D},\mathbf{A}_{\mathscr{D}})\), where \(\mathscr{F}_{j}\) denotes the network code used in transmission round \(j\), with \(j\in\{1,\ldots,i\}\). Note that \(\Omega_{i}:(\mathscr{A}^{3})^{i}\dashrightarrow(\mathscr{A}^{2})^{i}\).
We are now ready to prove the main theorem of this section.
**Theorem IV.4**.: The \(i\)-shot capacity of the Diamond Network in Scenario A.2 is
\[\text{C}_{i}(\mathscr{D},\mathbf{A}_{\mathscr{D}})=\log_{|\mathscr{A}|}(|\mathscr{ A}|-1).\]
Proof.: We first show that any unambiguous code \(C\) satisfies \(|C|\leq(|\mathscr{A}|-1)^{i}\). Assume that an adversary \(\mathbf{A}_{\mathscr{D}}\) can corrupt at most one of the edges in the set \(\{e_{1},e_{2},e_{3}\}\). We observe that, for any \(x=(x_{1},\ldots,x_{3i})\in\mathscr{A}^{3i}\), we have
\[\Omega_{i}(x)=\text{H}_{\mathscr{D}}^{i}(\mathscr{F}^{1}(x_{1},x _{2},x_{3}),\ldots,\\ \mathscr{F}^{i}(x_{3i-2},x_{3i-1},x_{3i})).\]
Using an approach similar to that of Proposition III.8, we assume that there exists an unambiguous code \(C\subseteq\mathscr{A}^{3i}\) for \(\Omega_{i}\) with \(|C|=(|\mathscr{A}|-1)^{i}+1\). The code
\[C^{\prime}:=\{(\mathscr{F}^{1}(x_{1},x_{2},x_{3}),\ldots,\mathscr{F}^{i}(x_{3i-2},x_{3i-1},x_{3i})):x\in C\}\subseteq(\mathscr{A}^{3})^{i}\]
is unambiguous for \(\text{H}_{\mathscr{D}}^{i}\) and has cardinality \((|\mathscr{A}|-1)^{i}+1\). Since \(|C^{\prime}|=(|\mathscr{A}|-1)^{i}+1\), there must exist \(a,a^{\prime}\in C^{\prime}\), \(a\neq a^{\prime}\), such that \((a_{1},\ldots,a_{(|\mathscr{A}|-1)^{i}})=(a_{1}^{\prime},\ldots,a_{(|\mathscr{A}|-1)^{i}}^{\prime})\). Thus \(C^{\prime}\subseteq(\mathscr{A}^{3})^{i}\) is an unambiguous code for \(\text{H}_{\mathscr{D}}^{i}\) with two different code words that coincide in the first \((|\mathscr{A}|-1)^{i}\) components. However, such a code does not exist by Example IV.2. It follows that \(C_{1}(\Omega_{i})<\text{log}_{|\mathscr{A}|}((|\mathscr{A}|-1)^{i}+1)\) and hence \(|C|\leq(|\mathscr{A}|-1)^{i}\). It remains to show that \(|C|\geq(|\mathscr{A}|-1)^{i}\). This immediately follows by applying [1, Proposition 12] and the strategy introduced in [3, Proposition 11] to the argument above.
It is interesting to note that when the adversary can change the edge attacked in each transmission round, we recover the same scheme as in [3].
### _Families \(\mathfrak{C}\) and \(\mathfrak{D}\)_
In this section, we show that the multishot capacities of networks in families \(\mathfrak{C}\) and \(\mathfrak{D}\) are equal to \(1\). Recall that family \(\mathfrak{D}\) includes the Mirrored Diamond Network. We start with the following preliminary result.
**Proposition IV.5**.: The \(i\)-shot capacity of a network in families \(\mathfrak{C}\) and \(\mathfrak{D}\) in Scenario A.2 is \(1\).
Proof.: The result follows as in Proposition III.8 and Remark III.9, since we do not assume anything about the adversary except that it can attack up to \(1\) edge.
These results show that, in Scenario A.2, there is no gain in using these networks multiple times for communication under this adversarial model.
## V Open Questions and Future Work
In this paper we have investigated the multishot capacity of networks with restricted adversaries, focusing on the Diamond Network and the Mirrored Diamond Network as elementary building blocks of a general theory. In future work, we plan to investigate the multishot capacity of arbitrary networks by establishing a multishot version of the double cut-set bound of [5] and by computing the multishot capacities of all five families introduced in the same paper.
|
2302.13379 | Radiative Meson and Glueball Decays in the Witten-Sakai-Sugimoto Model | We calculate radiative decay rates of mesons and glueballs in the top-down
holographic Witten-Sakai-Sugimoto model with finite quark masses. After
assessing to what extent this model agrees or disagrees with experimental data,
we present its predictions for so far undetermined decay channels. Contrary to
widespread expectations, we obtain sizeable two-photon widths of scalar,
tensor, and pseudoscalar glueballs, suggesting in particular that the observed
two-photon rate of the glueball candidate $f_0(1710)$ is not too large to
permit a glueball interpretation, but could be even much higher. We also
discuss the so-called exotic scalar glueball, which in the
Witten-Sakai-Sugimoto model is too broad to match either of the main glueball
candidates $f_0(1500)$ and $f_0(1710)$, but might be of interest with regard to
the alternative scenario of the so-called fragmented scalar glueball. Employing
the exotic scalar glueball for the latter, much smaller two-photon rates are
predicted for the ground-state glueball despite a larger total width;
relatively large two-photon rates would then apply to the excited scalar
glueball described by the predominantly dilatonic scalar glueball. In either
case, the resulting contributions to the muon $g-2$ from hadronic
light-by-light scattering involving glueball exchanges are small compared to
other single meson exchanges, of the order of $\lesssim 10^{-12}$. | Florian Hechenberger, Josef Leutgeb, Anton Rebhan | 2023-02-26T18:35:23Z | http://arxiv.org/abs/2302.13379v2 | # Radiative Meson and Glueball Decays in the Witten-Sakai-Sugimoto Model
###### Abstract
We calculate radiative decay rates of mesons and glueballs in the top-down holographic Witten-Sakai-Sugimoto model with finite quark masses. After assessing to what extent this model agrees or disagrees with experimental data, we present its predictions for so far undetermined decay channels. Contrary to widespread expectations, we obtain sizeable two-photon widths of scalar, tensor, and pseudoscalar glueballs, suggesting in particular that the observed two-photon rate of the glueball candidate \(f_{0}(1710)\) is not too large to permit a glueball interpretation, but could be even much higher. We also discuss the so-called exotic scalar glueball, which in the Witten-Sakai-Sugimoto model is too broad to match either of the main glueball candidates \(f_{0}(1500)\) and \(f_{0}(1710)\), but might be of interest with regard to the alternative scenario of the so-called fragmented scalar glueball. Employing the exotic scalar glueball for the latter, much smaller two-photon rates are predicted for the ground-state glueball despite a larger total width; relatively large two-photon rates would then apply to the excited scalar glueball described by the predominantly dilatonic scalar glueball. In either case, the resulting contributions to the muon \(g-2\) from hadronic light-by-light scattering involving glueball exchanges are small compared to other single meson exchanges, of the order of \(\lesssim 10^{-12}\).
###### Contents
* I Introduction and Summary
* II The Witten-Sakai-Sugimoto model augmented by quark masses
* II.1 Pseudoscalar masses
* II.2 Hadronic vector and axial vector meson decays
* III Radiative Meson Decays
* III.1 Vector meson dominance
* III.2 Radiative decays of pseudoscalars and vector mesons
* III.2.1 Vector meson \(1\gamma\)-decays
* III.2.2 Pseudoscalar meson \(2\gamma\)-decays
* III.3 Radiative axial-vector decays
* III.3.1 Axial-vector \(1\gamma\)-decays
* III.3.2 Axial-vector \(2\gamma\)-decays
* IV Glueballs in the Witten-Sakai-Sugimoto model
* IV.1 Exotic and dilatonic scalar glueballs
* IV.2 Extrapolations to realistic glueball masses
* V Radiative Glueball Decays
* V.1 Dilatonic Scalar Glueball Decays
* V.1.1 Dilatonic scalar glueball \(1\gamma\)-decays
* V.1.2 Dilatonic scalar glueball \(2\gamma\)-decays
* V.2 Tensor Glueball Decays
* V.2.1 Tensor glueball \(1\gamma\)-decays
* V.2.2 Tensor glueball \(2\gamma\)-decays
* V.3 Pseudoscalar Glueball Decays
* VI Glueball contributions to hadronic light-by-light scattering and the muon \(g-2\)
* Acknowledgments
* A Hadronic decays of the scalar and tensor glueballs
* A.1 Dilatonic scalar glueball
* A.2 Exotic scalar glueball
* A.2.1 Vector meson \(1\gamma\)-decays
* A.2.2 Pseudoscalar meson \(2\gamma\)-decays
## I Introduction and Summary
Glueballs, bound states of gluons without valence quarks, have been proposed as a consequence of QCD from the start [1; 2; 3; 4], but it is still a widely open question how they manifest themselves in the hadron spectrum [5; 6; 7; 8; 9]. Lattice QCD [10; 11; 12; 13; 14], mostly in the quenched approximation, provides
more or less clear predictions for the spectrum, with a lightest glueball being a scalar, followed by a tensor glueball with an important role as the lightest state associated with the pomeron [15], a pseudoscalar glueball participating in the manifestation of the U(1)\({}_{A}\) anomaly responsible for the large mass of the \(\eta^{\prime}\) meson [16], and towers of states with arbitrary integer spin as well as parity. However, it has turned out to be difficult to discriminate glueball states from bound states of quarks with the same quantum numbers with which they can mix, since the various available phenomenological models give strongly divergent pictures, in particular for the lightest glueballs. For the ground-state scalar glueball, the initially favored scenario that the isoscalar meson \(f_{0}(1500)\) contains the most glue content while being strongly mixed with quarkonia [17; 18; 19] is contested by models which identify the \(f_{0}(1710)\) as a glueball candidate [20; 21; 22] with more dominant glue content. The latter also appears favored by its larger production rate in supposedly gluon-rich radiative \(J/\psi\) decays [23], but there it was proposed that the glue content might rather be distributed over several scalars involving a new meson \(f_{0}(1770)\) previously lumped together with the established \(f_{0}(1710)\)[24; 8; 25].
In order to clarify the situation, dynamical information on decay patterns is required from first principles, which is difficult to extract from Euclidean lattice QCD. Analytical approaches always involve uncontrollable approximations, albeit recently interesting progress has been made using Schwinger-Dyson equations [26].
In this work we continue the analytical explorations made using gauge/gravity duality, which has been employed for studying glueball spectra in strongly coupled nonabelian theories shortly after the discovery of the AdS/CFT correspondence [27; 28; 29; 30; 31], inspiring phenomenological "bottom-up" model building for glueball physics [32; 33; 34; 35; 36; 37; 38]. Of particular interest here is the top-down construction of a dual to low-energy QCD in the large-\(N_{c}\) limit from type-IIA string theory by Witten [39], where the glueball spectrum has been obtained in [40; 41]. Sakai and Sugimoto [42; 43] have extended this model by a D-brane construction introducing \(N_{f}\) chiral quarks in the 't Hooft limit \(N_{c}\gg N_{f}\), which turns out to reproduce many features of low-energy QCD and chiral effective theory, not only qualitatively, but often semi-quantitatively, while having a minimal set of free parameters.
Glueball decay patterns have been first studied in the Witten-Sakai-Sugimoto (WSS) model for the scalar glueball in [44] and revisited and extended in [45]. This involves a so-called exotic scalar glueball [40] for which it is unclear whether it should be identified with the ground-state glueball in QCD or instead be discarded together with the other states that more evidently do not relate to states in QCD.
Assuming that the ground-state scalar glueball corresponds to the predominantly dilatonic bulk metric fluctuations which do not involve polarizations in the extra Kaluza-Klein dimension employed for supersymmetry breaking, [46; 47] found that the resulting decay pattern could match remarkably well the one of the \(f_{0}(1710)\) meson when effects of finite quark masses are included (or \(f_{0}(1770)\) when this is split off from a tetraquark \(f_{0}(1710)\)[24]). Instead of the chiral suppression postulated for flavor asymmetries of scalar glueball decay [48], a nonchiral enhancement of decays into heavier pseudoscalars was obtained, which is correlated with a reduction of the \(\eta\eta^{\prime}\) decay mode [47]. This mechanism of flavor symmetry violation is absent for the tensor glueball, whose hadronic decays have been worked out also in [45]; hadronic decays of pseudoscalar and pseudovector glueballs have been studied in [49; 50; 51].
In the present paper, we revisit and extend the study of glueball decay patterns of [45; 46; 47] to also include radiative decays. As discussed already in [43], the WSS model naturally incorporates vector meson dominance (VMD), crucially involving an infinite tower of vector mesons. After assessing the predictions of the WSS model with regard to radiative decays of ordinary pseudoscalar and (axial) vector mesons, we analyze its corresponding results for glueballs.
Contrary to widespread expectations, the WSS model predicts that glueballs can have sizeable radiative decay widths in the keV range, exceeding even the claimed observation of two-photon
rates for \(f_{0}(1710)\) by the BESIII collaboration [52], which was taken as evidence against its glueball nature.
In this context we also reconsider the exotic scalar glueball, which differs from the dilatonic one in that it has smaller couplings to vector mesons as well as photons, while having a total width in excess of the one of either \(f_{0}(1500)\) or \(f_{0}(1710)\), when its mass is suitably adjusted. As such it may instead be a candidate for the so-called fragmented scalar glueball proposed in [8; 24; 25], which is a wider resonance distributed over \(f_{0}(1710)\), a novel \(f_{0}(1770)\), \(f_{0}(2020)\), and \(f_{0}(2100)\), without showing up as an identifiable meson on its own.
In the case of the tensor glueball, where the WSS model is unequivocal in identifying the ground state, even though its mass also needs correction, we find again two-photon rates in the keV region, larger than the old predictions of Kada et al. [53], but comparable to those obtained by Cotanch and Williams [54] using VMD. (The latter have obtained even larger two-photon rates for the scalar glueball, which are an order of magnitude above the WSS results.)
The next heavier glueball, the pseudoscalar glueball, which plays an important role in the realization of the U(1)\({}_{A}\) anomaly [50], is also found to have two-photon rates in the keV region.
Because of their sizeable two-photon coupling in the WSS model, we consider also the effect the lightest three glueballs may have as single-meson contributions to hadronic light-by-light scattering, which is an important ingredient of the Standard Model prediction of the anomalous magnetic moment of the muon [55]\(a_{\mu}=(g-2)_{\mu}/2\). With the dilatonic scalar glueball as ground state, we find results of \(a_{\mu}^{G}=-(1\ldots 1.6)\times 10^{-12}\), and one order of magnitude smaller when the exotic scalar glueball is used instead with mass raised to the value of the fragmented glueball of [24]. With its larger mass and comparable two-photon rate, the tensor glueball is bound to contribute less than the dilatonic scalar glueball. The pseudoscalar glueball, which contributes with a different sign, yields \(a_{\mu}^{G_{PS}}=+(0.2\ldots 0.4)\times 10^{-12}\) depending on its actual mass. All these results are thus safely smaller than the current uncertainties in the hadronic light-by-light scattering contributions to \(a_{\mu}\).
## II The Witten-Sakai-Sugimoto model augmented by quark masses
The Witten-Sakai-Sugimoto (WSS) model [42; 43] is constructed by placing a stack of \(N_{f}\) flavor probe D8 and \(\overline{\rm D8}\)-branes into the near-horizon double Wick rotated black D4-brane background proposed in [39] as a supergravity dual of four-dimensional U(\(N_{c}\to\infty\)) Yang-Mills (YM) theory at low energies, where supersymmetry and conformal symmetry are broken by compactifications. It thus serves as a model for the low-energy limit of large \(N_{c}\) QCD with \(N_{f}\ll N_{c}\), corresponding to a quenched approximation when extrapolated to \(N_{f}=N_{c}=3\). The background geometry is given by the metric
\[\mathrm{d}s^{2}=\left(\frac{U}{R_{\rm D4}}\right)^{3/2}\left[\eta_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}+f(U)\mathrm{d}\tau^{2}\right]+\left(\frac{R_{\rm D4}}{U}\right)^{3/2}\left[\frac{\mathrm{d}U^{2}}{f(U)}+U^{2}\mathrm{d}\Omega_{4}^{2}\right],\] \[e^{\phi}=g_{s}\left(\frac{U}{R_{\rm D4}}\right)^{3/4},\qquad F_{4}=\mathrm{d}C_{3}=\frac{2\pi N_{c}}{V_{4}}\epsilon_{4},\qquad f(U)=1-\frac{U_{\rm KK}^{3}}{U^{3}}, \tag{2.1}\]
with dilaton \(\phi\) and Ramond-Ramond three-form field \(C_{3}\), a solution of type IIA supergravity, whose bosonic part of the action reads
\[S_{\rm grav}=\frac{1}{2\kappa_{10}^{2}}\int\mathrm{d}^{10}x\sqrt{-g}\left[e^{-2\phi}\left(R+4\left(\nabla\phi\right)^{2}-\frac{\left(2\pi\right)^{4}l_{s}^{2}}{2\cdot 4!}F_{4}^{2}\right)\right]. \tag{2.2}\]
The \(N_{c}\) D4-branes extend along the directions parametrized by the coordinates \(x^{\mu}\), \(\mu=0,1,2,3\) and another spatial dimension with coordinate \(\tau\), while \(U\) corresponds to the radial (holographic)
direction transverse to the D4-brane. The remaining four transverse coordinates span a unit \(S^{4}\) with line element \(\mathrm{d}\Omega_{4}^{2}\), volume form \(\epsilon_{4}\) and volume \(V_{4}=8\pi^{2}/3\). The \(\tau\)-direction is compactified to a supersymmetry breaking \(S^{1}\), whose period is chosen as
\[\tau\simeq\tau+\delta\tau=\tau+2\pi M_{\mathrm{KK}}^{-1},\;\;\;M_{\mathrm{KK}}=\frac{3}{2}\frac{U_{\mathrm{KK}}^{1/2}}{R_{\mathrm{D4}}^{3/2}}, \tag{2.3}\]
to avoid a conical singularity at \(U=U_{\mathrm{KK}}\). The radius \(R_{\mathrm{D4}}\) is related to the string coupling \(g_{s}\) and the string length \(l_{s}\) through \(R_{\mathrm{D4}}^{3}=\pi g_{s}N_{c}l_{s}^{3}\), and the 't Hooft coupling of the dual four-dimensional Yang-Mills theory is given by
\[\lambda=g_{\mathrm{YM}}^{2}N_{c}=\frac{g_{5}^{2}}{\delta\tau}N_{c}=2\pi g_{s}l_{s}M_{\mathrm{KK}}N_{c}. \tag{2.4}\]
The flavor D8 and \(\overline{\mathrm{D8}}\)-branes extend along \(x^{\mu}\), \(U\), and the \(S^{4}\). They are placed antipodally on the \(\tau\)-circle to join at \(U_{\mathrm{KK}}\). In adopting the probe approximation, i.e. \(N_{c}\gg N_{f}\) for the \(N_{f}\) D8 branes, one can ignore backreactions from the D8-branes to the D4-brane background. The gauge fields on the D8-branes, which are dual to left and right chiral quark currents separated in the Kaluza-Klein (\(\tau\)) direction, are governed at leading order by a Dirac-Born-Infeld (DBI) plus Chern-Simons (CS) action
\[S_{\mathrm{DBI}}= -T_{8}\int\mathrm{d}^{9}xe^{-\phi}\operatorname{Tr}\sqrt{-\det\left(g_{MN}+2\pi\alpha^{\prime}F_{MN}\right)},\] \[S_{\mathrm{CS}}= T_{8}\int_{D8}C\wedge\,\operatorname{Tr}\,\left[\exp\left\{\frac{F}{2\pi}\right\}\right]\sqrt{\hat{A}(\mathcal{R})}, \tag{2.5}\]
where \(\hat{A}(\mathcal{R})\) is the so-called A-roof genus [56; 57].
Considering only SO(5)-invariant excitations and restricting to terms quadratic in the field strength, the nine-dimensional DBI action can be reduced to a five-dimensional Yang-Mills theory with action [42; 43]1
Footnote 1: Note that in (2.6) one uses the Minkowski metric \(\eta_{\mu\nu}\), in the mostly plus convention, to contract the four-dimensional spacetime indices.
\[S_{\mathrm{D8}}^{\mathrm{DBI}}=-\kappa\int\mathrm{d}^{4}x\,\mathrm{d}z\, \operatorname{Tr}\,\left[\frac{1}{2}K^{-1/3}F_{\mu\nu}^{2}+M_{\mathrm{KK}}^{2} KF_{\mu z}^{2}\right], \tag{6}\]
with
\[\kappa\equiv\frac{\lambda N_{c}}{216\pi^{3}},\qquad K(z)\equiv 1+z^{2}=U^{3}/U_{\rm KK}^{3}. \tag{2.7}\]
To identify the four-dimensional meson fields, we make the ansatz
\[A_{\mu}(x^{\mu},z)=\sum_{n=1}^{\infty}B_{\mu}^{(n)}(x^{\mu})\psi_{n}(z)\] \[A_{z}(x^{\mu},z)=\sum_{n=0}^{\infty}\varphi^{(n)}(x^{\mu})\phi_{n}(z) \tag{2.8}\]
for the five-dimensional gauge field using the complete sets \(\left\{\psi_{n}(z)\right\}_{n\geq 1}\) and \(\left\{\phi_{n}(z)\right\}_{n\geq 0}\) of normalizable functions of \(z\) with normalization conditions
\[\begin{split}&\kappa\int\mathrm{d}zK^{-1/3}\psi_{m}\psi_{n}=\delta_{mn},\\ &\kappa M_{\rm KK}^{2}\int\mathrm{d}zK\phi_{m}\phi_{n}=\delta_{mn},\end{split} \tag{2.9}\]
satisfying the completeness relations
\[\begin{split}&\kappa\sum_{n}K^{-1/3}\psi_{n}(z)\psi_{n}(z^{\prime})=\delta(z-z^{\prime}),\\ &\kappa M_{\rm KK}^{2}\sum_{n}K\phi_{n}(z)\phi_{n}(z^{\prime})=\delta(z-z^{\prime}).\end{split} \tag{2.10}\]
With this ansatz, the fields \(B_{\mu}^{(n)}\) and \(\varphi^{(n)}\) have canonical kinetic terms; the eigenvalue equation
\[-K^{1/3}\partial_{z}\left(K\partial_{z}\psi_{n}\right)=\lambda_{n}\psi_{n}, \tag{2.11}\]
which can be used to relate the two complete sets via \(\phi_{n}(z)\propto\partial_{z}\psi_{n}(z)\) for \((n\geq 1)\), yields a mass term for \(B_{\mu}^{(n)}\). The remaining massless mode is given by \(\phi_{0}(z)=1/\left(\sqrt{\pi\kappa}M_{\rm KK}K(z)\right)\).
Inserting the separation ansatz (2.8) into the DBI action (2.6) and integrating over \(z\), we obtain
\[\begin{split} S_{\rm D8}^{\rm DBI}=&-\operatorname{ Tr}\,\int\mathrm{d}^{4}x\left[\left(\partial_{\mu}\varphi^{(0)}\right)^{2}+ \sum_{n=1}^{\infty}\left(\frac{1}{2}\left(\partial_{\mu}B_{\nu}^{(n)}- \partial_{\nu}B_{\mu}^{(n)}\right)^{2}+m_{n}^{2}\left(B_{\mu}^{(n)}-m_{n}^{-1 }\partial_{\mu}\varphi^{(n)}\right)^{2}\right)\right]\\ &+\left(\text{interaction terms}\right).\end{split} \tag{2.12}\]
The scalar fields \(\varphi^{(n)}\) with \((n\geq 1)\) can be absorbed by the fields \(B_{\mu}^{(n)}\), which are interpreted as (axial) vector meson fields, with masses \(m_{n}=\sqrt{\lambda_{n}}M_{\rm KK}\) determined by the eigenvalue equation for the normalizable modes (2.11). The lightest vector mesons, identified with the rho and omega mesons, have \(m_{\rho}=m_{1}=\sqrt{0.669314}M_{\rm KK}\), with the traditional value [42; 43] of \(M_{\rm KK}=949\) MeV corresponding to \(m_{\rho}=776.4\) MeV.
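The quoted eigenvalues can be reproduced numerically with an elementary shooting method for (2.11), using the fact that normalizable modes fall off like \(1/z\); a minimal sketch, assuming only numpy and scipy (not part of the original derivation):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting method for -K^{1/3} d/dz(K dpsi/dz) = lambda psi with K = 1 + z^2.
# Integrate y = (psi, K psi') from z = 0 and require the normalizable
# falloff psi ~ 1/z at large z, i.e. psi + z psi' -> 0.
def mismatch(lam, even, zmax=400.0):
    def rhs(z, y):
        K = 1.0 + z * z
        return [y[1] / K, -lam * K ** (-1.0 / 3.0) * y[0]]
    y0 = [1.0, 0.0] if even else [0.0, 1.0]  # even: vector, odd: axial-vector modes
    sol = solve_ivp(rhs, [0.0, zmax], y0, rtol=1e-10, atol=1e-12)
    psi, Kpsi_prime = sol.y[:, -1]
    return psi + zmax * Kpsi_prime / (1.0 + zmax**2)

lam1 = brentq(lambda l: mismatch(l, even=True), 0.2, 1.2)    # ~0.6693 (rho)
lam2 = brentq(lambda l: mismatch(l, even=False), 1.2, 2.2)   # ~1.57 (a_1)
print(lam1, np.sqrt(lam1) * 949.0)   # m_rho ~ 776 MeV for M_KK = 949 MeV
print(lam2, np.sqrt(lam2) * 949.0)   # m_a1 ~ 1.19 GeV, cf. Sec. II.B
```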
The remaining field \(\varphi^{(0)}\) is identified as the multiplet of massless pion fields produced by chiral symmetry breaking, which is realized geometrically by D8 and \(\overline{\rm D8}\)-branes joining at \(z=0\), with the U\((N_{f})\)-valued Goldstone boson field given by the holonomy
\[U(x)=e^{i\Pi^{a}(x)\lambda^{a}/f_{\pi}}=\mathrm{P}\,\exp i\int_{-\infty}^{ \infty}\mathrm{d}z\,A_{z}(z,x), \tag{2.13}\]
where \(\lambda^{a}=2T^{a}\) are Gell-Mann matrices including \(\lambda^{0}=\sqrt{2/N_{f}}\mathbf{1}\). For \(N_{f}=3\) we have
\[\Pi(x)\equiv\Pi^{a}(x)T^{a}=\frac{1}{2}\begin{pmatrix}\pi^{0}+\eta^{8}/\sqrt{3}+\eta^{0}\sqrt{2/3}&\sqrt{2}\pi^{+}&\sqrt{2}K^{+}\\ \sqrt{2}\pi^{-}&-\pi^{0}+\eta^{8}/\sqrt{3}+\eta^{0}\sqrt{2/3}&\sqrt{2}K^{0}\\ \sqrt{2}K^{-}&\sqrt{2}\bar{K}^{0}&-2\eta^{8}/\sqrt{3}+\eta^{0}\sqrt{2/3}\end{pmatrix}. \tag{2.14}\]
The pion decay constant is determined by
\[f_{\pi}^{2}=\frac{\lambda N_{c}M_{\rm KK}^{2}}{54\pi^{4}}; \tag{2.15}\]
with the choice \(f_{\pi}\approx 92.4\) MeV one obtains \(\lambda\approx 16.63\). Following [45], we shall also consider the smaller value \(\lambda\approx 12.55\) obtained by matching the large-\(N_{c}\) lattice result for the string tension obtained in Ref. [58] (resulting in \(f_{\pi}\approx 80.3\) MeV). A smaller 't Hooft coupling has also been argued for in Ref. [59] from studies of the spectrum of higher-spin mesons in the WSS model. The downward variation of \(\lambda\approx 16.63\ldots 12.55\) will thus be used as an estimate of the variability of the predictions of this model.
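For orientation, these two parameter choices amount to a one-line inversion of (2.15); a trivial numerical check:

```python
import numpy as np

Nc, MKK = 3, 949.0                                      # MeV
lam = 54 * np.pi**4 * 92.4**2 / (Nc * MKK**2)           # from f_pi = 92.4 MeV
fpi_alt = MKK * np.sqrt(12.55 * Nc / (54 * np.pi**4))   # f_pi for lambda = 12.55
print(lam, fpi_alt)                                     # ~16.63 and ~80.3 MeV
```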
### Pseudoscalar masses
In the WSS model, the U(1)\({}_{A}\) flavor symmetry is broken by an anomalous contribution of order \(1/N_{c}\) due to the \(C_{1}\) Ramond-Ramond field, which gives rise to a Witten-Veneziano [60; 61] mass term for the singlet \(\eta_{0}\) pseudoscalar with [42]
\[m_{0}^{2}=\frac{2N_{f}}{f_{\pi}^{2}}\chi_{g}=\frac{N_{f}}{27\pi^{2}N_{c}}\lambda^{2}M_{\rm KK}^{2}, \tag{2.16}\]
where \(\chi_{g}\) is the topological susceptibility.
For \(N_{f}=N_{c}=3\), one has \(m_{0}=967\ldots 730\) MeV for \(\lambda=16.63\ldots 12.55\), which is indeed a phenomenologically interesting ballpark when finite quark masses are added to the model through an effective Lagrangian
\[{\cal L}_{m}^{\cal M} \propto \,{\rm Tr}\,\left({\cal M}\,U(x)+h.c.\right), \tag{2.17}\] \[{\cal M} = {\rm diag}(m_{u},m_{d},m_{s}).\]
This deformation can be generated by either worldsheet instantons [62; 63] or nonnormalizable modes of bifundamental fields corresponding to open-string tachyons [64; 65; 66; 67]. Assuming for simplicity isospin symmetry, \(m_{u}=m_{d}=\hat{m}\), this leads to masses [47]
\[m_{\eta,\eta^{\prime}}^{2}=\frac{1}{2}m_{0}^{2}+m_{K}^{2}\mp\sqrt{\frac{m_{0}^{4}}{4}-\frac{1}{3}m_{0}^{2}(m_{K}^{2}-m_{\pi}^{2})+(m_{K}^{2}-m_{\pi}^{2})^{2}} \tag{2.18}\]
for the mass eigenstates
\[\eta = \eta_{8}\cos\theta_{P}-\eta_{0}\sin\theta_{P}\] \[\eta^{\prime} = \eta_{8}\sin\theta_{P}+\eta_{0}\cos\theta_{P}, \tag{2.19}\]
with mixing angle
\[\theta_{P}=\frac{1}{2}\arctan\frac{2\sqrt{2}}{1-\frac{3}{2}m_{0}^{2}/(m_{K}^{2}-m_{\pi}^{2})}. \tag{2.20}\]
Using \(m_{\pi}^{2}=m_{\pi_{0}}^{2}\approx(135{\rm MeV})^{2}\) and
\[m_{K}^{2}=\frac{1}{2}(m_{K_{\pm}}^{2}+m_{K_{0}}^{2})-\frac{1}{2}(m_{\pi_{\pm}}^{2}-m_{\pi_{0}}^{2})\approx(495{\rm MeV})^{2} \tag{2.21}\]
as isospin symmetric parameters, the WSS result \(m_{0}\approx 967\ldots 730\) MeV for \(\lambda=16.63\ldots 12.55\) leads to \(\theta_{P}\approx-14^{\circ}\cdots-24^{\circ}\) and \(m_{\eta}\approx 520\ldots 470\), \(m_{\eta^{\prime}}\approx 1080\ldots 890\) MeV. In the following we shall consider this range of mixing angles in conjunction with the variation of \(\lambda\), but we shall fix \(m_{\eta}\) and \(m_{\eta^{\prime}}\) to their experimental values when evaluating phase space integrals. In the radiative decay rates considered below, the explicit quark masses will not modify the (chiral) results for the couplings; they only appear in phase space factors.
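The numbers just quoted follow from a direct evaluation of (2.16), (2.18), and (2.20); a minimal sketch:

```python
import numpy as np

def eta_sector(lam, Nc=3, Nf=3, MKK=949.0, m_pi=135.0, m_K=495.0):
    m0sq = Nf * lam**2 * MKK**2 / (27 * np.pi**2 * Nc)          # (2.16)
    dK = m_K**2 - m_pi**2
    root = np.sqrt(m0sq**2 / 4 - m0sq * dK / 3 + dK**2)         # (2.18)
    m_eta = np.sqrt(m0sq / 2 + m_K**2 - root)
    m_etap = np.sqrt(m0sq / 2 + m_K**2 + root)
    thetaP = 0.5 * np.arctan(2 * np.sqrt(2) / (1 - 1.5 * m0sq / dK))  # (2.20)
    return np.sqrt(m0sq), m_eta, m_etap, np.degrees(thetaP)

print(eta_sector(16.63))  # m0 ~ 967, m_eta ~ 516, m_eta' ~ 1076 MeV, theta_P ~ -14 deg
print(eta_sector(12.55))  # m0 ~ 730, m_eta ~ 474, m_eta' ~  893 MeV, theta_P ~ -24 deg
```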
### Hadronic vector and axial vector meson decays
Vertices for the hadronic decays of vector and axial vector mesons involving pseudoscalar mesons are contained in the second term of the DBI action (2.6). For the \(\rho\) meson, this contains the term (with indices restricted to the first two quark flavors)
\[{\cal L}_{\rho\pi\pi}=-g_{\rho\pi\pi}\varepsilon_{abc}(\partial_{\mu}\pi^{a})\rho^{b\mu}\pi^{c},\] \[g_{\rho\pi\pi}=\int{\rm d}z\frac{1}{\pi K}\psi_{1}=33.98\,\lambda^{-\frac{1}{2}}N_{c}^{-\frac{1}{2}}, \tag{2.22}\]
yielding \(\Gamma_{\rho\to\pi\pi}=98.0\ldots 130\) MeV for \(\lambda=16.63\ldots 12.55\), which somewhat underestimates the experimental result of \(\approx 150\) MeV.
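This range follows from inserting (2.22) into the standard P-wave formula \(\Gamma_{\rho\to\pi\pi}=g_{\rho\pi\pi}^{2}\,m_{\rho}\beta_{\pi}^{3}/(48\pi)\), where \(\beta_{\pi}\) is the pion velocity; a minimal sketch:

```python
import numpy as np

def gamma_rho(lam, Nc=3, m_rho=776.4, m_pi=139.57):
    g = 33.98 / np.sqrt(lam * Nc)                  # coupling from (2.22)
    beta = np.sqrt(1 - 4 * m_pi**2 / m_rho**2)     # pion velocity
    return g**2 * m_rho * beta**3 / (48 * np.pi)

print(gamma_rho(16.63), gamma_rho(12.55))   # ~97 and ~128 MeV, cf. 98.0...130 MeV
```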
There is also a vertex involving one vector, one axial vector, and one pseudoscalar meson, which for the ground-state isotriplet mesons reads
\[{\cal L}_{a_{1}\rho\pi}=g_{a_{1}\rho\pi}\varepsilon_{abc}a_{\mu}^{a}\rho^{b\mu}\pi^{c},\] \[g_{a_{1}\rho\pi}=2M_{\rm KK}\sqrt{\frac{\kappa}{\pi}}\int{\rm d}z\psi_{2}^{\prime}\psi_{1}=-34.43\,\lambda^{-\frac{1}{2}}N_{c}^{-\frac{1}{2}}M_{\rm KK}. \tag{2.23}\]
In the WSS model, the predicted mass of the \(a_{1}\) meson, 1186.5 MeV, is rather close to the experimental result [68] of 1230(40) MeV. The predicted width for \(a_{1}\to\rho\pi\) (already studied in [43]) is \(425\ldots 563\) MeV, which is within the experimental result for the total width of 250\(\ldots 600\) MeV (average value 420(35) MeV), but according to [69] only 60% of the three-pion decays are due to S-wave \(\rho\pi\) decays, whereas the latter saturate the hadronic decays in the WSS model.
For the light quark flavors, these results for the decay rates of \(\rho\) and \(a_{1}\) seem to indicate that the WSS model is working quite well. When the mass of the strange quark is included, a shortcoming of the model, which is shared by many bottom-up holographic QCD models (see e.g. [70]), is that the \(\phi\) meson remains degenerate with \(\rho\) and \(\omega\). In the following we shall nevertheless also consider \(K^{*}\) and \(\phi\) mesons by simply raising their masses in the resulting phase space factors while keeping their vertices such as \(g_{K^{*}K\pi}=g_{\phi KK}=g_{\rho\pi\pi}\) unchanged. The resulting widths, \(\Gamma(K^{*}\to K\pi)=28\ldots 37\) MeV and \(\Gamma(\phi\to K\bar{K})=2.12\ldots 2.82\) MeV, are between 40 and 20% too small. These deviations are at least not dramatically larger than the one for the \(\rho\) width, which amounts to \(33\ldots 12\%\); all appear to remain in the range to be expected for a large-\(N\) approach.
## III Radiative meson decays
Before considering radiative decays of the experimentally elusive glueballs, we shall evaluate the predictions of the WSS model with nonzero quark masses for radiative decay widths of regular mesons and compare with experimental data as far as available. As discussed extensively in the second paper of Sakai and Sugimoto [43], holographic QCD models naturally provide a realization of vector meson dominance (VMD) [71; 72; 73; 74] involving an infinite tower of vector mesons. There it was already observed that the chiral WSS model yields a result for \(\Gamma(\omega\to\pi^{0}\gamma)\) which is roughly consistent with the experimental value. In the following we shall recapitulate the results of [43] and extend them to the WSS model including quark masses and the Witten-Veneziano mass term.
### Vector meson dominance
According to the holographic principle, non-normalizable modes are interpreted as external sources. This makes it possible to study electromagnetic interactions to leading order by setting the asymptotic values of the gauge field \(A_{\mu}\) on the D8 branes according to [43]
\[\lim_{z\to\pm\infty}A_{\mu}(x,z)=A_{L,R\mu}(x)=eQA_{\mu}^{\rm em}(x), \tag{24}\]
where \(e\) is the electromagnetic coupling constant and \(Q\) is the electric charge matrix, given as
\[Q=\tfrac{1}{3}\left(\begin{array}{ccc}2&&\\ &-1&\\ &&-1\end{array}\right) \tag{25}\]
for the \(N_{f}=3\) case. The ansatz (2.8) changes to
\[\begin{split} A_{\mu}(x^{\mu},z)=& A_{L\mu}(x^{\mu}) \psi_{+}(z)+A_{R\mu}(x^{\mu})\psi_{-}(z)\\ &+\sum_{n=1}^{\infty}v_{\mu}^{n}(x^{\mu})\psi_{n}(z),\end{split} \tag{3}\]
with the functions \(\psi_{\pm}(z)\) defined as
\[\psi_{\pm}(z)\equiv\frac{1}{2}\left(1\pm\psi_{0}(z)\right),\qquad\psi_{0}(z) \equiv\frac{2}{\pi}\arctan z, \tag{4}\]
They satisfy (2.11) as non-normalizable zero modes, because \(\partial_{z}\psi_{\pm}(z)\propto\phi_{0}(z)\propto 1/K(z)\).
To distinguish between vector and axial-vector fields we introduce the notation
\[\begin{split}\mathcal{V}_{\mu}&\equiv\frac{1}{2} \left(A_{L\mu}+A_{R\mu}\right),\qquad\mathcal{A}_{\mu}\equiv\frac{1}{2}\left( A_{L\mu}-A_{R\mu}\right),\\ v_{\mu}^{n}&\equiv B_{\mu}^{(2n-1)},\qquad a_{\mu} ^{n}\equiv B_{\mu}^{(2n)},\end{split} \tag{5}\]
so that
\[\begin{split} A_{\mu}(x^{\mu},z)=&\mathcal{V}_{\mu}(x^{\mu})+\mathcal{A}_{\mu}(x^{\mu})\psi_{0}(z)\\ &+\sum_{n=1}^{\infty}v_{\mu}^{n}(x^{\mu})\psi_{2n-1}(z)+\sum_{n=1}^{\infty}a_{\mu}^{n}(x^{\mu})\psi_{2n}(z).\end{split} \tag{3.6}\]
The first term in (2.6) can then be expanded as
\[\frac{\kappa}{2}\int\mathrm{d}z \ K^{-1/3}F_{\mu\nu}^{2} \tag{7}\] \[=\frac{a_{\mathcal{V}\mathcal{V}}}{2}\mathrm{tr}\left(\partial_{ \mu}\mathcal{V}_{\nu}-\partial_{\nu}\mathcal{V}_{\mu}\right)^{2}\] \[+\frac{a_{\mathcal{A}\mathcal{A}}}{2}\mathrm{tr}\left(\partial_{ \mu}\mathcal{A}_{\nu}-\partial_{\nu}\mathcal{A}_{\mu}\right)^{2}\] \[+\frac{1}{2}\mathrm{tr}\left(\partial_{\mu}v_{\nu}^{n}-\partial_ {\nu}v_{\mu}^{n}\right)^{2}+\frac{1}{2}\mathrm{tr}\left(\partial_{\mu}a_{\nu} ^{n}-\partial_{\nu}a_{\mu}^{n}\right)^{2}\] \[+a_{\mathcal{V}\mathcal{V}^{n}}\mathrm{tr}\left(\left(\partial_{ \mu}\mathcal{V}_{\nu}-\partial_{\nu}\mathcal{V}_{\mu}\right)\left(\partial_{ \mu}v_{\nu}^{n}-\partial_{\nu}v_{\mu}^{n}\right)\right)\] \[+a_{\mathcal{A}a^{n}}\left(\left(\partial_{\mu}\mathcal{A}_{\nu}- \partial_{\nu}\mathcal{A}_{\mu}\right)\left(\partial_{\mu}a_{\nu}^{n}- \partial_{\nu}a_{\mu}^{n}\right)\right)\] \[+\left(\text{interaction terms}\right),\]
with coupling constants
\[\begin{split} a_{\mathcal{V}\mathcal{V}^{n}}&=\kappa \int dzK^{-1/3}\psi_{2n-1},\quad a_{\mathcal{V}\mathcal{V}}=\kappa\int dzK^{- 1/3},\\ a_{\mathcal{A}a^{n}}&=\kappa\int dzK^{-1/3}\psi_{2n} \psi_{0},\quad a_{\mathcal{A}A}=\kappa\int dzK^{-1/3}\psi_{0}^{2},\end{split} \tag{8}\]
mixing the photon field \(\mathcal{V}\) with every vector meson \(v^{n}\). The coefficients \(a_{\mathcal{V}\mathcal{V}}\) and \(a_{\mathcal{A}\mathcal{A}}\) are divergent, since the external fields correspond to non-normalizable modes in the radial direction, and need to be renormalized to canonical values. The photon field \(\mathcal{V}\) does not appear in the interaction terms of this model and can only couple via the mixing (8), fully realizing VMD. Alternatively, it is possible to perform a field redefinition to diagonalize the action and to get rid of the mixing terms, thus producing new interaction terms coupling mesons to photons.
### Radiative decays of pseudoscalars and vector mesons
The relevant vertices for radiative decays of pseudoscalars and (axial) vector mesons come from the Chern-Simons term
\[S_{CS} \supset iT_{8}\int\,\mathrm{tr}\,\left(\exp\left(2\pi\alpha^{\prime}F_{2 }+B_{2}\right)\wedge C_{3}\right)\] \[\supset i\frac{N_{c}}{96\pi^{2}}\epsilon^{\mu\nu\rho\sigma z}\int\, \mathrm{tr}\,\left(3A_{z}F_{\mu\nu}F_{\rho\sigma}-4A_{\mu}\partial_{z}A_{\nu} F_{\rho\sigma}\right), \tag{3.9}\]
where we have used partial integration.
Inserting the mode expansion (3.6) and integrating over the radial coordinate we obtain for the interaction term involving two vectors and one pseudoscalar
\[\mathcal{L}_{\Pi v^{m}v^{n}}= \frac{N_{c}}{4\pi^{2}}\frac{i}{f_{\pi}}c_{v^{n}v^{m}}\epsilon^{ \mu\nu\rho\sigma}\,\mathrm{tr}\,\left(\Pi\partial_{\mu}v^{n}_{\nu}\partial_{ \rho}v^{m}_{\sigma}\right), \tag{3.10}\]
with coupling constants
\[c_{v^{n}v^{m}}=\frac{1}{\pi}\int\mathrm{d}zK^{-1}\psi_{2n-1}\psi_{2m-1}=\left\{ \frac{1350.83}{\lambda N_{c}},\ldots\right\} \tag{3.11}\]
as studied in [43], where numerical results for the coefficients beyond \(c_{v^{1}v^{1}}\) given above can be found.
#### iii.2.1 Vector meson \(1\gamma\)-decays
Using VMD, we can calculate the interaction term for the radiative decay of a vector meson into a pseudoscalar and one photon as
\[\mathcal{L}_{\Pi Vv^{n}}=\frac{N_{c}}{4\pi^{2}}\frac{i}{f_{\pi}}c_{\mathcal{V} v^{n}}\epsilon^{\mu\nu\rho\sigma}\,\mathrm{tr}\,\left(\Pi\partial_{\mu}v^{n}_{ \nu}\partial_{\rho}\mathcal{V}_{\sigma}+\Pi\partial_{\mu}\mathcal{V}_{\nu} \partial_{\rho}v^{n}_{\sigma}\right), \tag{3.12}\]
with coupling
\[\begin{split} c_{\mathcal{V}v^{n}}=&\sum_{m}c_{v^{n }v^{m}}a_{\mathcal{V}v^{m}}=\frac{1}{\pi}\int\mathrm{d}zK^{-1}\psi_{2n-1}\\ &=\left\{33.9839,\ldots\right\}(N_{c}\lambda)^{-1/2},\end{split} \tag{3.13}\]
where we have used the completeness relation (2.10) to eliminate the summed-over modes.
Performing the polarization sums we get
\[\left|\mathcal{M}_{(v^{n}\to\Pi\gamma)}\right|^{2}=\sum_{(v^{n})} \sum_{(\mathcal{V})}\frac{1}{3}\epsilon^{(v^{n})}_{\mu}\epsilon^{(v^{n})*}_{ \nu}\epsilon^{(\mathcal{V})}_{\rho}\epsilon^{(\mathcal{V})*}_{\sigma}\mathcal{ M}^{\mu\rho}_{(\Pi v^{n}\mathcal{V})}\mathcal{M}^{\nu\sigma*}_{(\Pi v^{n} \mathcal{V})}\] \[=\frac{c_{\mathcal{V}v^{n}}{}^{2}e^{2}N_{c}^{2}}{96\pi^{4}f_{\pi} ^{2}}\left(\,\mathrm{tr}\,\left(T_{\Pi}T_{v^{n}}Q\right)+\,\mathrm{tr}\,\left( T_{\Pi}QT_{v^{n}}\right)\right)^{2}\left(m_{\Pi}^{2}-m_{v^{n}}^{2}\right)^{2}.\]
The partial width then reads
\[\Gamma_{v^{n}\to\Pi\gamma}= \frac{1}{8\pi}\left|\mathcal{M}_{(v^{n}\to\Pi\gamma)}\right|^{2} \frac{|\mathbf{p}_{v}|}{m_{v}^{2}}. \tag{3.14}\]
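As an illustration, the \(\omega\to\pi^{0}\gamma\) entry of Table 1 below follows from (3.13)–(3.14) with \(\operatorname{tr}(T_{\pi^{0}}T_{\omega}Q)=\operatorname{tr}(T_{\pi^{0}}QT_{\omega})=1/4\) for ideal mixing; a minimal sketch (for \(\lambda=12.55\) one also sets \(f_{\pi}=80.3\) MeV, which gives \(\approx 915\) keV):

```python
import numpy as np

def gamma_v_to_P_gamma(trace, m_v, m_P, lam, Nc=3, fpi=92.4):
    c = 33.9839 / np.sqrt(Nc * lam)                       # coupling from (3.13)
    e2 = 4 * np.pi / 137.036
    M2 = c**2 * e2 * Nc**2 / (96 * np.pi**4 * fpi**2) \
         * (2 * trace)**2 * (m_P**2 - m_v**2)**2          # polarization-summed |M|^2
    p = (m_v**2 - m_P**2) / (2 * m_v)                     # photon momentum
    return M2 * p / (8 * np.pi * m_v**2)                  # width (3.14), in MeV

print(gamma_v_to_P_gamma(0.25, 782.66, 134.98, 16.63) * 1e3)   # ~521 keV
```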
#### iii.2.2 Pseudoscalar meson \(2\gamma\)-decays
Employing VMD a second time, we can derive the interaction term for a decay of a pseudoscalar meson in two photons
\[\mathcal{L}_{\Pi\mathcal{V}\mathcal{V}}= -\frac{N_{c}}{4\pi^{2}}\frac{i}{f_{\pi}}c_{\mathcal{V}\mathcal{V}}\epsilon^{\mu\nu\rho\sigma}\operatorname{tr}\,\left(\Pi\partial_{\mu}\mathcal{V}_{\nu}\partial_{\rho}\mathcal{V}_{\sigma}\right), \tag{3.15}\]
where the sum over the entire tower of vector mesons yields
\[c_{\mathcal{V}\mathcal{V}}= \sum_{m}c_{\mathcal{V}v^{m}}a_{\mathcal{V}v^{m}}=\frac{1}{\pi} \int\mathrm{d}zK^{-1}=1, \tag{3.16}\]
leading to the standard result
\[\Gamma_{\Pi\to\gamma\gamma}= \frac{1}{8\pi}\left|\mathcal{M}_{(\Pi\to\mathcal{V}\mathcal{V})} \right|^{2}\frac{\left|\mathbf{p}_{\gamma}\right|}{m_{\Pi}^{2}}\frac{1}{2} \tag{3.17}\]
with
\[\left|\mathcal{M}_{(\Pi\to\mathcal{V}\mathcal{V})}\right|^{2}= \frac{e^{4}N_{c}^{2}}{4\pi^{4}f_{\pi}^{2}}\left(\operatorname{tr }\,\left(T_{\Pi}Q^{2}\right)\right)^{2}m_{\Pi}^{4}. \tag{3.18}\]
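Since \(c_{\mathcal{V}\mathcal{V}}=1\) matches the QCD anomaly, these expressions reduce to the standard two-photon widths \(\Gamma=\alpha^{2}m_{\Pi}^{3}C_{\Pi}^{2}/(64\pi^{3}f_{\pi}^{2})\) with the usual U(3) charge factors \(C_{\pi^{0}}=1\), \(C_{\eta}=(\cos\theta_{P}-2\sqrt{2}\sin\theta_{P})/\sqrt{3}\), and \(C_{\eta^{\prime}}=(\sin\theta_{P}+2\sqrt{2}\cos\theta_{P})/\sqrt{3}\); a minimal sketch for \(\lambda=16.63\), i.e. \(\theta_{P}\approx-14.3^{\circ}\):

```python
import numpy as np

alpha = 1 / 137.036

def gamma_2gamma(m, C, fpi=92.4):   # standard anomaly result, in MeV
    return alpha**2 * m**3 * C**2 / (64 * np.pi**3 * fpi**2)

thP = np.radians(-14.3)
C_eta = (np.cos(thP) - 2 * np.sqrt(2) * np.sin(thP)) / np.sqrt(3)
C_etap = (np.sin(thP) + 2 * np.sqrt(2) * np.cos(thP)) / np.sqrt(3)
for name, m, C in [("pi0", 134.98, 1.0), ("eta", 547.86, C_eta),
                   ("eta'", 957.78, C_etap)]:
    print(name, gamma_2gamma(m, C) * 1e3)   # ~0.0077, ~0.48, ~5.7 keV, cf. Table 1
```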
The numeric results for the various radiative decays involving one pseudoscalar and two vector particles are summarized in Table 1 for \(\lambda=16.63\ldots 12.55\). As mentioned above, \(\lambda=16.63\) is the traditional [42; 43] value matching \(f_{\pi}=92.4\) MeV, whereas \(\lambda=12.55\) is an alternative choice matching the large-\(N\) string tension at the expense of \(f_{\pi}\). The decay rate for \(\pi^{0}\) is therefore close
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(\Gamma^{\text{exp.}}[\text{keV}]\) & \(\Gamma^{\text{WSS}}[\text{keV}]\) \\ \hline \(\pi^{0}\to 2\gamma\) & \(0.00780(12)\) & \(0.00773\ldots 0.0102\) \\ \(\eta\to 2\gamma\) & \(0.515(18)\) & \(0.480\ldots 0.978\) \\ \(\eta^{\prime}\to 2\gamma\) & \(4.34(14)\) & \(5.72\ldots 5.87\ldots 5.75\) \\ \hline \(\rho^{0}\to\pi^{0}\gamma\) & \(70(12)\) & \(56.2\ldots 98.6\) \\ \(\rho^{\pm}\to\pi^{\pm}\gamma\) & \(68(7)\) & \(56.2\ldots 98.6\) \\ \(\rho^{0}\to\eta\gamma\) & \(45(3)\) & \(40.3\ldots 90.5\) \\ \(\omega\to\pi^{0}\gamma\) & \(725(34)\) & \(521\ldots 915\) \\ \(\omega\to\eta\gamma\) & \(3.9(4)\) & \(4.87\ldots 10.9\) \\ \(\eta^{\prime}\to\rho^{0}\gamma\) & \(55.4(1.9)^{\text{fit}}\text{,}68(7)^{\text{av.}}\) & \(54.1\ldots 59.2\ldots 58.5\) \\ \(\eta^{\prime}\to\omega\gamma\) & \(4.74(20)^{\text{fit}}\text{,}5.8(7)^{\text{av.}}\) & \(5.37\ldots 5.89\ldots 5.81\) \\ \(\phi\to\pi^{0}\gamma\) & \(5.6(2)\) & \(0\) \\ \(\phi\to\eta\gamma\) & \(55.3(1.2)\) & \(84.7\ldots 92.8\ldots 91.6\) \\ \(\phi\to\eta^{\prime}\gamma\) & \(0.264(10)\) & \(0.525\ldots 1.18\) \\ \(K^{*0}\to K^{0}\gamma\) & \(116(10)\) & \(124\ldots 218\) \\ \(K^{*\pm}\to K^{\pm}\gamma\) & \(50(5)\) & \(31.0\ldots 54.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for various radiative decay widths of pseudoscalar and vector mesons involving vector and pseudoscalar mesons, with ’t Hooft coupling \(\lambda=16.63\ldots 12.55\) (\(\lambda=16.63\) is the traditional [42; 43] value matching \(f_{\pi}=92.4\) MeV; \(\lambda=12.55\) an alternative choice matching the large-\(N\) string tension at the expense of \(f_{\pi}\)). For nonmonotonic dependence on \(\lambda\) intermediate extremal values are also given. Ideal mixing is assumed for \(\omega\) and \(\phi\). Experimental results are from the PDG [68] except for the \(\pi^{0}\) width, which is from [75].
to the experimental value only for the first value of \(\lambda\), but the partial widths of the decays \(\rho\) and \(\omega\) into \(\pi\gamma\) are reproduced by an intermediate value of \(\lambda\).
In processes involving \(\eta\) and \(\eta^{\prime}\), we have used the pseudoscalar mixing angle following from (2.20), which varies as \(\theta_{P}\approx-14^{\circ}\cdots-24^{\circ}\) when \(\lambda=16.63\ldots 12.55\). Here the dependence on \(\lambda\) is nonmonotonic, and Table 1 also gives the extremal values when attained at intermediate values of \(\lambda\). The vector couplings in the WSS model are flavor-symmetric, but we distinguish \(\omega\) and \(\phi\) through their experimental masses, assuming ideal mixing. This gives generally good results for decays involving \(\omega\), but larger discrepancies with experiment for \(\phi\) mesons.
### Radiative axial-vector decays
From the 5-dimensional CS-term (3.9) we can also extract a term including two vector mesons and one axial-vector meson
\[\mathcal{L}_{v^{m}v^{n}a^{p}}= -i\frac{N_{c}}{4\pi^{2}}d_{v^{m}v^{n}a^{p}}\epsilon^{\mu\nu\rho \sigma}\,\mathrm{tr}\,\left(v_{\mu}^{m}a_{\nu}^{p}\partial_{\rho}v_{\sigma}^{ n}\right), \tag{19}\]
with
\[d_{v^{m}v^{n}a^{p}}= \int\mathrm{d}z\psi_{2m-1}\psi_{2n-1}^{\prime}\psi_{2p}, \tag{20}\]
where we again made use of partial integration.
As noted already in [43] and observed before in other holographic models [76; 77; 78] as well as in the hidden local symmetry approach of [79], the vertex for the decay of an axial vector meson into a pseudoscalar and a photon, which would have to come from the DBI part of the action, vanishes,2 even though there is a nonvanishing vertex for \(a_{1}^{\pm}\to\pi^{\pm}\rho^{0}\), see (2.23). But the corresponding coupling for an on-shell photon is obtained by replacing \(\psi_{1}\) therein by unity, leading to
Footnote 2: In the hidden local symmetry approach, \(a_{1}\to\pi\gamma\) has been included by adding higher-derivative terms to the action [80].
\[g_{a_{1}\pi\mathcal{V}}=2M_{\mathrm{KK}}\sqrt{\frac{\kappa}{\pi}}\int\mathrm{ d}z\psi_{2}^{\prime}=0, \tag{21}\]
implying a cancellation between the contribution from the lowest vector meson and the remaining tower. Indeed, the experimental result for \(a_{1}^{\pm}\to\pi^{\pm}\gamma\) is much smaller than expected from naive VMD [81].
#### iii.3.1 Axial-vector \(1\gamma\)-decays
Employing VMD once we obtain for the interaction between one axial vector meson, one vector meson and one photon
\[\mathcal{L}_{\mathcal{V}v^{n}a^{p}}= -i\frac{N_{c}}{4\pi^{2}}d_{\mathcal{V}v^{n}a^{p}}\epsilon^{\mu \nu\rho\sigma}\,\mathrm{tr}\,\left(v_{\mu}^{n}a_{\nu}^{p}\partial_{\rho} \mathcal{V}_{\sigma}\right), \tag{22}\]
with
\[d_{\mathcal{V}v^{n}a^{p}}= \int\mathrm{d}z\psi_{2n-1}^{\prime}\psi_{2p}=\{-2497.14,\ldots\} N_{c}^{-1}\lambda^{-1}, \tag{23}\]
where we had to sum over the radial mode without the derivative to get a non-vanishing result since the bulk-to-boundary propagator associated to an on-shell photon is constant. The amplitudes for the decay \(a\to v{\cal V}\), for the different combinations of polarizations read
\[\left|{\cal M}_{-101}^{a^{p}\to v^{n}{\cal V}}\right|= \frac{d_{{\cal V}v^{n}a^{p}}\left(m_{a^{p}}^{2}-m_{v^{n}}^{2}\right)N_{c}}{8m_{v^{n}}\pi^{2}}\,{\rm tr}\,\left(eQT_{a^{p}}T_{v^{n}}\right)\] \[\left|{\cal M}_{-110}^{a^{p}\to v^{n}{\cal V}}\right|= \frac{d_{{\cal V}v^{n}a^{p}}\left(m_{a^{p}}^{2}-m_{v^{n}}^{2}\right)N_{c}}{8m_{a^{p}}\pi^{2}}\,{\rm tr}\,\left(eQT_{a^{p}}T_{v^{n}}\right), \tag{3.24}\]
which yields
\[\left|{\cal M}_{a^{p}\to v^{n}{\cal V}}\right|^{2}= \frac{1}{3}\left(2\left|{\cal M}_{-101}^{a^{p}\to v^{n}{\cal V}}\right|^{2}+2\left|{\cal M}_{-110}^{a^{p}\to v^{n}{\cal V}}\right|^{2}\right)\] \[= \frac{d_{{\cal V}v^{n}a^{p}}^{2}\left(m_{a^{p}}^{2}-m_{v^{n}}^{2}\right)^{2}\left(m_{a^{p}}^{2}+m_{v^{n}}^{2}\right)N_{c}^{2}}{96\pi^{4}m_{a^{p}}^{2}m_{v^{n}}^{2}}\left(\,{\rm tr}\,\left(eQT_{a^{p}}T_{v^{n}}\right)\right)^{2}. \tag{3.25}\]
The decay width is given by
\[\Gamma_{a^{p}\to v^{n}\gamma}= \frac{1}{8\pi}\left|{\cal M}_{a^{p}\to v^{n}{\cal V}}\right|^{2} \frac{\left|{\bf p}_{\cal V}\right|}{m_{a^{p}}^{2}}, \tag{3.26}\]
and the numerical results are listed in Table 2.
The PDG [68] gives experimental results only for the \(f_{1}\) mesons, which in the WSS model have the same mass as the \(a_{1}\) meson. Besides extrapolating to their experimental masses we consider also two possible values (motivated below) for the mixing angle for the \(f_{1}\) and \(f_{1}^{\prime}\) mesons using the convention
\[\left|f_{1}(1285)\right\rangle = \cos\theta_{f}|\bar{n}n\rangle-\sin\theta_{f}|\bar{s}s\rangle\] \[\left|f_{1}(1420)\right\rangle = \sin\theta_{f}|\bar{n}n\rangle+\cos\theta_{f}|\bar{s}s\rangle \tag{3.27}\]
so that ideal mixing corresponds to \(\theta_{f}=0\).
#### iii.3.2 Axial-vector \(2\gamma\)-decays
As mentioned above, the radial derivative of the bulk-to-boundary propagator for a photon vanishes for on-shell photons, which implies that in accordance with the Landau-Yang theorem at
least one photon in the two-photon-decay of an axial vector meson has to be off-shell. Denoting the virtual photon by \(v^{*}\) we have
\[d_{\mathcal{V}v^{*}a^{p}}=\int\mathrm{d}z\mathcal{J}^{\prime}\psi_{a^{p}}, \tag{3.28}\]
where we have introduced the (off-shell) bulk-to-boundary propagator \(\mathcal{J}\) defined by
\[\left(1+z^{2}\right)^{1/3}\partial_{z}\left[\left(1+z^{2}\right) \partial_{z}\mathcal{J}\right]=\frac{Q^{2}}{M_{\mathrm{KK}}^{2}}\mathcal{J}. \tag{3.29}\]
Since we are only interested in the low \(Q\) regime we make the ansatz
\[\mathcal{J}(Q,z)= 1+\frac{Q^{2}}{M_{\mathrm{KK}}^{2}}\alpha(z)+\mathcal{O}(Q^{4}) \tag{3.30}\]
satisfying
\[\left(1+z^{2}\right)^{1/3}\partial_{z}\left[\left(1+z^{2}\right) \partial_{z}\alpha\right]=1. \tag{3.31}\]
With the solution
\[\partial_{z}\alpha= \frac{z}{\left(1+z^{2}\right)}\,_{2}F_{1}\left(\frac{1}{3},\frac {1}{2},\frac{3}{2},-z^{2}\right) \tag{3.32}\]
we obtain for the relevant coupling constant
\[d_{\mathcal{V}v^{*}a^{p}}= \frac{Q^{2}}{M_{\mathrm{KK}}^{2}}\int\mathrm{d}z\alpha^{\prime} \psi_{a^{p}}+\mathcal{O}(Q^{4})\] \[= \frac{Q^{2}}{M_{\mathrm{KK}}^{2}}c_{\mathcal{V}v^{*}a^{p}}+ \mathcal{O}(Q^{4}) \tag{3.33}\]
with
\[c_{\mathcal{V}v^{*}a^{p}}= 101.309N_{c}^{-1/2}\lambda^{-1/2}. \tag{3.34}\]
The decay widths then read
\[\Gamma(f_{1}(1285)\to\gamma_{L}^{*}\gamma_{T})= \frac{2}{3}\left(\frac{c_{\mathcal{V}v^{*}a}m_{a}^{2}N_{c}}{8\pi^{2}M_{\mathrm{KK}}^{2}}\right)^{2}\frac{1}{8\pi}\frac{|\mathbf{p}|}{m_{a}^{2}}\left(\frac{5e^{2}}{18}\cos\theta_{f}-\frac{e^{2}}{9\sqrt{2}}\sin\theta_{f}\right)^{2}Q^{2}+\mathcal{O}(Q^{4})\] \[\Gamma(f_{1}(1285)\to\gamma_{T}^{*}\gamma_{T})= \mathcal{O}\left(Q^{6}\right). \tag{3.35}\]
and
\[\Gamma(f_{1}(1420)\to\gamma_{L}^{*}\gamma_{T})= \frac{2}{3}\left(\frac{c_{\mathcal{V}v^{*}a}m_{a}^{2}N_{c}}{8\pi^{2}M_{\mathrm{KK}}^{2}}\right)^{2}\frac{1}{8\pi}\frac{|\mathbf{p}|}{m_{a}^{2}}\left(\frac{5e^{2}}{18}\sin\theta_{f}+\frac{e^{2}}{9\sqrt{2}}\cos\theta_{f}\right)^{2}Q^{2}+\mathcal{O}(Q^{4}). \tag{3.36}\]
In the literature one usually finds the values for the so-called equivalent photon rate
\[\tilde{\Gamma}_{\gamma\gamma}=\lim_{Q^{2}\to 0}\frac{m_{a}^{2}}{Q^{2}} \frac{1}{2}\Gamma\left(a\to\gamma_{L}^{*}\gamma_{T}\right), \tag{3.37}\]
which are listed in Table 3.
The mixing angle is inferred from
\[\tan^{2}\left(\theta_{f}-\arctan\frac{\sqrt{2}}{5}\right)=\left(\frac{m_{f_{1}}}{m_{f_{1}^{\prime}}}\right)^{1+\xi}\frac{\tilde{\Gamma}_{\gamma\gamma}^{f_{1}^{\prime}\,\exp}}{\tilde{\Gamma}_{\gamma\gamma}^{f_{1}\,\exp}}, \tag{3.38}\]
where the usual assumption of \(\xi=0\) leads to \(\theta_{f}=26.4^{\circ}\), corresponding to the central value of \(\theta_{A}=62(5)^{\circ}\) in [83]. However, in the WSS model the coupling \(d_{\mathcal{V}v^{*}a^{p}}\) is proportional to \(1/M_{\text{KK}}^{2}\), which leads to a scaling of \(\tilde{\Gamma}_{\gamma\gamma}\) with four additional powers of \(m_{a}\), i.e. \(\xi=4\), resulting in \(\theta_{f}=20.4^{\circ}\).
In Tables 2 and 3 we consider two possible extrapolations to axial vector mesons with realistic masses. In the first we keep the parameters of the theory unchanged in the expressions for the couplings and use the measured masses only in kinematical factors, which leads to \(\xi=4\) and \(\theta_{f}=20.4^{\circ}\); in the second we rescale \(M_{\text{KK}}\) proportional to \(m_{a}^{\exp}/m_{a}^{\text{WSS}}\) such that \(\xi=0\) and \(\theta_{f}=26.4^{\circ}\).
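Both quoted values of \(\theta_{f}\), as well as the first WSS entry of Table 3, can be reproduced from (3.35)–(3.38); a minimal sketch (taking the solution branch of (3.38) that yields the quoted acute angles):

```python
import numpy as np

m1, m2, R = 1281.9, 1426.3, 3.2 / 3.5   # f1 masses (MeV), ratio of measured rates
for xi in (0, 4):                       # inverting (3.38)
    theta = np.degrees(np.arctan(np.sqrt((m1 / m2)**(1 + xi) * R))
                       - np.arctan(np.sqrt(2) / 5))
    print(xi, theta)                    # 26.4 deg and 20.4 deg

# equivalent photon rate (3.35)+(3.37) for f1(1285), lambda = 16.63, theta_f = 20.4 deg
lam, Nc, MKK, e2 = 16.63, 3, 949.0, 4 * np.pi / 137.036
c = 101.309 / np.sqrt(Nc * lam)         # coupling from (3.34)
th = np.radians(20.4)
C = 5 * e2 / 18 * np.cos(th) - e2 / (9 * np.sqrt(2)) * np.sin(th)
gamma_eq = (c * m1**2 * Nc / (8 * np.pi**2 * MKK**2))**2 * C**2 * m1 / (48 * np.pi)
print(gamma_eq * 1e3)                   # ~3.8 keV, cf. Table 3
```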
While the predictions for the equivalent photon rate for the \(f_{1}\) mesons (shown in Table 3) agree well with the experimental result for the standard choice of \(\lambda=16.63\) and \(\theta_{f}=20.4^{\circ}\), the 1-\(\gamma\) decay rates are significantly underestimated. In contrast to the radiative decays of vector mesons, lowering \(\lambda\) does not increase the rates sufficiently to cover the experimental results. Unfortunately no experimental results are available for isotriplet axial vector mesons, where the WSS model is generally performing best.
## IV Glueballs in the Witten-Sakai-Sugimoto model
Glueballs are realized in the WSS model as fluctuations of the background in which the probe D8 branes are placed, where certain superselection rules are applied. In particular, states with odd parity in the extra circle along \(\tau\) are discarded, as well as Kaluza-Klein modes of the compact \(S^{1}\) and \(S^{4}\) subspaces. The resulting glueball spectrum was discussed in [41], where the lift of (2.1) to 11-dimensional supergravity is used. In the following we shall consider scalar, tensor, and pseudoscalar glueballs, for which hadronic decays have been worked out in the WSS model in [44; 45; 46; 47] and which we review and update in Appendix A in some detail for the scalar and tensor glueballs.
The lift of a type IIA string-frame metric to 11-dimensional supergravity is given by the relation
\[\begin{split}\mathrm{d}s^{2}&=G_{MN}\mathrm{d}x^{ M}\mathrm{d}x^{N}\\ &=e^{-2\phi/3}g_{AB}\mathrm{d}x^{A}\mathrm{d}x^{B}+e^{4\phi/3} \left(\mathrm{d}x^{11}+A_{B}\mathrm{d}x^{B}\right)^{2},\end{split} \tag{4.1}\]
with \(M,N=0,\ldots 10\) and \(A,B=0,\ldots 9\), omitting the 11th index. By introducing the radial
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(\tilde{\Gamma}_{\gamma\gamma}^{\exp}[\text{keV}]\) & \(\tilde{\Gamma}_{\gamma\gamma}^{\text{WSS}}[\text{keV}]\) \\ \hline \(a_{1}\,(1260)\) & - & \(1.60\ldots 2.12|1.39\ldots 1.85\) \\ \hline \(f_{1}\,(1285)\) & \(3.5(8)\) & \(3.84\ldots 5.09|2.39\ldots 3.17\) \\ \(f_{1}\,(1420)\) & \(3.2(9)\) & \(3.50\ldots 4.64|2.19\ldots 2.90\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Equivalent photon rates of axial vector mesons for two values of the \(f_{1}\) mixing angle \(\theta_{f}=20.4^{\circ}|26.4^{\circ}\) (in the latter case with \(M_{\text{KK}}\) rescaled such that \(m_{a}\) is raised to the experimental value which reduces \(\xi\) in (3.38) to zero); the range denoted by dots corresponds again to \(\lambda=16.63\ldots 12.55\), where only the first value is matching the axial anomaly exactly. Experimental values from L3 [84; 85], see also [83].
coordinate \(r\) related to \(U\) by \(U=\frac{r^{2}}{2L}\), we get the lifted metric
\[\mathrm{d}s^{2}=\frac{r^{2}}{L^{2}}\left[f(r)\mathrm{d}x_{4}^{2}+\eta_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}+\mathrm{d}x_{11}^{2}\right]+\frac{L^{2}}{r^{2}}\frac{\mathrm{d}r^{2}}{f\left(r\right)}+\frac{L^{2}}{4}\mathrm{d}\Omega_{4}^{2}, \tag{4.2}\]
and the field strength \(F_{\alpha\beta\gamma\delta}=\frac{6}{L}\sqrt{g_{S^{4}}}\epsilon_{\alpha\beta \gamma\delta}\), which are solutions to the equations of motion following from the unique supergravity action
\[2\kappa_{11}^{2}S_{11}=\int\mathrm{d}^{11}x\sqrt{-G}\left(R-\frac{1}{2}|F_{4}|^{2}\right)-\frac{1}{3!}\int A_{3}\wedge F_{4}\wedge F_{4}. \tag{4.3}\]
Scalar and tensor glueball modes appear as normalizable modes of metric fluctuations \(\delta G\), which translate to perturbations of the type-IIA string metric and dilaton through
\[g_{\mu\nu}= \frac{r^{3}}{L^{3}}\left[\left(1+\frac{L^{2}}{2r^{2}}\delta G_{11,11}\right)\eta_{\mu\nu}+\frac{L^{2}}{r^{2}}\delta G_{\mu\nu}\right]\] \[g_{44}= \frac{r^{3}f}{L^{3}}\left[1+\frac{L^{2}}{2r^{2}}\delta G_{11,11}+\frac{L^{2}}{r^{2}f}\delta G_{44}\right]\] \[g_{rr}= \frac{L}{rf}\left[1+\frac{L^{2}}{2r^{2}}\delta G_{11,11}+\frac{r^{2}f}{L^{2}}\delta G_{rr}\right]\] \[g_{r\mu}= \frac{r}{L}\delta G_{r\mu}\] \[g_{\Omega\Omega}= \frac{r}{L}\left(\frac{L}{2}\right)^{2}\left(1+\frac{L^{2}}{2r^{2}}\delta G_{11,11}\right)\] \[e^{4\phi/3}= \frac{r^{2}}{L^{2}}\left(1+\frac{L^{2}}{r^{2}}\delta G_{11,11}\right). \tag{4.4}\]
Imposing these metric fluctuations on the world volume of the D8-brane system described by the action (2.5), interaction vertices of the lightest scalar glueball with mesons were calculated in [44]; this analysis was revisited and extended in [45].
Pseudoscalar, vector, and pseudovector glueballs appear as fluctuations of the type-IIA form fields; glueballs with higher spin would need a stringy description beyond the supergravity approximation [86].
### Exotic and dilatonic scalar glueballs
Superficially, the emerging glueball spectrum resembles the one found in lattice calculations (see Fig. 1 in [47]), containing a lightest scalar glueball with a mass below that of the tensor glueball, whereas most other holographic models have the scalar glueball degenerate with the tensor. This is achieved by an "exotic" polarization of the bulk metric involving the extra compact dimension
(\(\tau\)) separating the D8-branes,
\[\delta G_{\tau\tau} =-\frac{r^{2}}{\mathcal{N}_{E}\,L^{2}}f(r)S_{4}(r)G_{E}(x^{\sigma}),\] \[\delta G_{\mu\nu} =\frac{r^{2}}{\mathcal{N}_{E}\,L^{2}}S_{4}(r)\left[\frac{1}{4} \eta_{\mu\nu}-\left(\frac{1}{4}+\frac{3r_{\rm KK}^{6}}{5r^{6}-2r_{\rm KK}^{6}} \right)\frac{\partial_{\mu}\partial_{\nu}}{M^{2}}\right]G_{E}(x^{\sigma}),\] \[\delta G_{11,11} =\frac{r^{2}}{\mathcal{N}_{E}\,4L^{2}}S_{4}(r)G_{E}(x^{\sigma}),\] \[\delta G_{rr} =-\frac{L^{2}}{\mathcal{N}_{E}\,r^{2}f(r)}\frac{3r_{\rm KK}^{6}}{ 5r^{6}-2r_{\rm KK}^{6}}S_{4}(r)G_{E}(x^{\sigma}),\] \[\delta G_{r\mu} =\delta G_{\mu\tau} =\frac{90r^{7}r_{\rm KK}^{6}}{\mathcal{N}_{E}\,M^{2}L^{2}\left(5r ^{6}-2r_{\rm KK}^{6}\right)^{2}}S_{4}(r)\partial_{\mu}G_{E}(x^{\sigma}), \tag{4.5}\]
with eigenvalue equation [41]
\[\frac{\rm d}{{\rm d}r}\left(r^{7}-r\,r_{KK}^{6}\right)\frac{\rm d}{{\rm d}r}S_ {4}(r)+\left(L^{4}M_{E}^{2}r^{3}+\frac{432r^{5}r_{\rm KK}^{12}}{\left(5r^{6}-2 r_{KK}^{6}\right)^{2}}\right)S_{4}(r)=0. \tag{4.6}\]
However, with \(M_{E}=855\) MeV its mass is only a bit higher than that of the \(\rho\) meson, whereas the predominantly dilatonic mode that is the ground state of another tower of scalar modes with respect to 3+1 dimensions is only a little lighter than the traditional glueball candidates \(f_{0}(1500)\) and \(f_{0}(1710)\). This mode is degenerate with the tensor mode and involves only metric fluctuations \(\delta G_{11,11}\) and \(\delta G_{\mu\nu}\), see (A1).
The exotic scalar glueball, denoted by \(G_{E}\) in the following, turns out [45] to have a relative width \(\Gamma/M\) that is much higher than that of the predominantly dilatonic scalar glueball (\(G_{D}\)), but only the latter has a \(\Gamma/M\) in the ballpark of \(f_{0}(1500)\) and \(f_{0}(1710)\).
It was therefore proposed in [45] to discard \(G_{E}\) from the spectrum of glueballs of the WSS model as a spurious mode that perhaps disappears in the inaccessible limit \(M_{\rm KK}\to\infty\), where the supergravity approximation breaks down. Already in [40] it was speculated that only one of the two scalar glueball towers might correspond to the glueballs in QCD. Since it appears somewhat unnatural that an excited scalar glueball should have a smaller width than the ground-state scalar glueball, [45] preferred the dilatonic scalar glueball as candidate for the actual ground state.
Indeed, the dilatonic scalar glueball turns out to have a decay pattern that can match surprisingly well the glueball candidate \(f_{0}(1710)\), in particular when including additional couplings associated with the quark mass term [46, 47]. This may actually apply instead to \(f_{0}(1770)\), which was proposed originally in [87] as an additional \(f_{0}\) resonance between 1700 and 1800 MeV and more recently in [24] in radiative \(J/\psi\) decays, where it appears dominantly as the most glue-rich resonance.3
Footnote 3: The next (2023) update of the PDG [68] will in fact include \(f_{0}(1770)\) as a separate resonance (C. Amsler, private communication).
The fact that the ratio \(\Gamma_{f_{0}\to K\bar{K}}/\Gamma_{f_{0}\to\pi\pi}\) is significantly higher for \(f_{0}(1710)\)[68] (or for \(f_{0}(1770)\) according to [24]) than expected from a flavor-symmetric glueball coupling can be attributed to the fact that dilaton fluctuations couple naturally to quark mass terms, similarly to, but more pronounced than in, the model by Ellis and Lanik [88]. There is therefore no need to invoke the previous conjecture of chiral suppression of scalar glueball decay [48, 89, 90], which was questioned in [91].
In the following we shall mainly explore the consequences of this identification of the scalar glueball. In the radiative decay rates considered here, the explicit quark masses will however not modify the (chiral) results for the couplings; they are only included in phase space factors.
We shall however need to make assumptions on how to extrapolate to realistic glueball masses, which we describe in more detail below. While the mass of \(f_{0}(1710)\) is not too much above the original mass of \(G_{D}\) in the WSS model, larger extrapolations are required for the tensor and pseudoscalar glueballs when comparing to the various glueball candidates or lattice results.
As an alternative scenario, we shall also consider the option of keeping the exotic scalar glueball mode \(G_{E}\), whose relative decay width \(\Gamma/M\) is much too large to be identified with the traditional glueball candidates \(f_{0}(1500)\) or \(f_{0}(1710)\) with total width 112(9) MeV and 128(18) MeV, respectively, see Table 4. It would in fact fit better to the proposal in [24; 8; 25] of a relatively broad fragmented glueball of mass 1865 MeV and a width of 370 MeV that does not show up as a separate meson but only as admixture in the mesons \(f_{0}(1710)\), a novel \(f_{0}(1770)\), \(f_{0}(2020)\), and \(f_{0}(2100)\). Of course, this requires a drastic rise of the original mass of \(G_{E}\) by a factor of over 2; the mass of the tensor mode \(G_{T}\) would likewise have to be raised by a factor of 1.6 to match the expectation of \(m_{T}\sim 2400\) MeV from lattice QCD, and the mass of \(G_{D}\), which would then be identified with the first excited scalar glueball, would need to be raised somewhat more, as lattice results point to a mass above that of the tensor glueball, from around 2670 MeV [11] to around 3760 MeV [13].
### Extrapolations to realistic glueball masses
In the WSS model, the masses of glueballs are given by pure numbers times \(M_{\rm KK}\), which is also the case for the (axial) vector mesons. However, when \(M_{\rm KK}\) has been fixed by the mass of the \(\rho\) meson, the glueball masses appear to be too small compared to lattice QCD results.
In order to predict decay rates for different glueball candidates we manually change the masses of glueball modes in amplitudes and phase space integrals, which could be viewed as assuming a different scale \(M_{\rm KK}\) for the glueball sector. The coupling constants involving glueballs are all inversely proportional to \(M_{\rm KK}\) and we interpret this appearance of \(M_{\rm KK}\) to be tied to the mass scale of glueballs, which shows up also in their normalization factors \(\mathcal{N}\), whereas explicit appearances of \(M_{\rm KK}\) in the DBI action of the D branes are considered as being fixed like the mass of the \(\rho\) meson. When upscaling glueball masses, we have therefore correspondingly reduced the dimensionful glueball-meson/photon coupling constants. [Without such a rescaling, the results for all glueball decay rates and the glueball contributions to \(a_{\mu}\) presented in Sect. V extrapolated to some mass \(M_{G}\) would be simply larger by a factor \((M_{G}/M_{G}^{\rm WSS})^{2}\).]
We consider this rescaling plausible in that the overlap integrals of glueball and meson holographic profiles should become smaller when glueball and meson modes are separated further in energy. It may well be, however, that this reduction is only insufficiently accounted for by the overall change of the mass scale in the glueball coupling constants; our numerical results should therefore be considered as somewhat rough estimates.
\begin{table}
\begin{tabular}{c c c} \hline \hline \(M_{E}\) & \(\Gamma_{G_{E}}^{x=0}\,[{\rm MeV}]\) & \(\Gamma_{G_{E}}^{x=1}\,[{\rm MeV}]\) \\ \hline
855 & 72...96 & 85...113 \\
1506 & 286...383 & 430...570 \\
1712 & 351...469 & 483...640 \\
1865 & 398...530 & 521...691 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline \(M_{D}\) & \(\Gamma_{G_{D}}^{x=0}\,[{\rm MeV}]\) & \(\Gamma_{G_{D}}^{x=1}\,[{\rm MeV}]\) \\ \hline
1487 & 19...26 & 80...106 \\
1506 & 19...27 & 80...106 \\
1712 & 88...113 & 139...180 \\
1865 & 151...197 & 198...259 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Total decay widths of the exotic and the dilatonic scalar glueball \(G_{E}\) and \(G_{D}\) with original masses of 855 MeV and 1487 MeV, respectively, and also with extrapolations to the masses of the glueball candidates \(f_{0}(1500)\) or \(f_{0}(1710)\) and the fragmented glueball of [24; 8; 25], for two choices of the extra coupling parameter \(x\) associated with the quark mass term as defined in [47]. The range of results corresponds again to \(\lambda=16.63\ldots 12.55\). In addition to the two-body decays reviewed in Appendix A, the decays \(G_{D}\to\rho\pi\pi\to 4\pi\), which interfere destructively with \(G_{D}\to\rho\rho\to 4\pi\), have been taken into account here.
## V Radiative glueball decays
In the following we shall concentrate on glueball interactions involving vector mesons which through VMD also give rise to glueball-photon vertices. Other hadronic interactions of glueballs are reviewed in Appendix A.
We shall consider the three lightest glueball states, scalar, tensor, and pseudoscalar, in turn, choosing the dominantly dilatonic scalar glueball over the exotic scalar glueball, since the former has been found to match remarkably well the decay pattern of the glueball candidate \(f_{0}(1710)\). The more unwieldy results for the exotic scalar glueball are worked out in Appendices A and B.
### Dilatonic Scalar Glueball Decays
Inducing the fluctuation (A1) in the D8-brane action (5), we obtain the interaction terms of the dilatonic scalar glueball with two vector mesons as
\[\mathcal{L}_{G_{D}v^{m}v^{n}}= \,\mathrm{tr}\,\left(d_{3}^{mn}\eta^{\rho\sigma}F_{\mu\rho}^{m}F_{\nu\sigma}^{n}+d_{2}^{mn}M_{\mathrm{KK}}^{2}v_{\mu}^{m}v_{\nu}^{n}\right)\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\square}\right)G_{D}, \tag{5.1}\]
where the coupling constants are given by
\[\begin{split} d_{2}^{mn}&=\kappa\int\mathrm{d}z\,K\psi_{2n-1}^{\prime}\psi_{2m-1}^{\prime}H_{D}=\{4.3714,\dots\}\frac{1}{\lambda^{\frac{1}{2}}N_{c}M_{\mathrm{KK}}},\\ d_{3}^{mn}&=\kappa\int\mathrm{d}z\,K^{-1/3}\psi_{2n-1}\psi_{2m-1}H_{D}=\{18.873,\dots\}\frac{1}{\lambda^{\frac{1}{2}}N_{c}M_{\mathrm{KK}}}.\end{split} \tag{5.2}\]
Restricting to the ground-state vector mesons (\(m=n=1\)), the amplitudes for the decay of the dilatonic scalar glueball into vector mesons with transverse and longitudinal polarizations read
\[\left|\mathcal{M}_{T}^{\left(G_{D}\to v^{1}v^{1}\right)}\right| =\left[d_{3}^{11}\left(2m_{v^{1}}^{2}-\frac{3M_{D}^{2}}{4}\right)-d_{2}^{11}M_{\mathrm{KK}}^{2}\right],\] \[\left|\mathcal{M}_{L}^{\left(G_{D}\to v^{1}v^{1}\right)}\right| =\left[\frac{d_{2}^{11}M_{D}^{2}M_{\mathrm{KK}}^{2}}{4m_{v^{1}}^{2}}+d_{3}^{11}m_{v^{1}}^{2}\right], \tag{5.3}\]
in terms of which the partial decay width is given by
\[\Gamma_{G_{D}\to v^{1,a}v^{1,b}}=\frac{1}{S}\left(2\left|\mathcal{M}_{T}^{\left(G_{D}\to v^{1}v^{1}\right)}\right|^{2}+\left|\mathcal{M}_{L}^{\left(G_{D}\to v^{1}v^{1}\right)}\right|^{2}\right)\frac{|\mathbf{p}_{v^{1}}|}{8\pi M_{D}^{2}}, \tag{5.4}\]
where \(S\) equals 2 for identical particles (\(a=b\)) and 1 otherwise.
In the narrow-resonance approximation, this vanishes for the WSS model mass \(M_{D}=1487\) MeV, which is below the threshold of two \(\rho\) mesons. However, when \(M_{D}\) is manually adjusted to the mass of \(f_{0}(1710)\), which we assume as 1712 MeV (the average of the \(T\)-matrix pole results of [92] and [93]), the decay \(G_{D}\to\rho\rho\) becomes the largest channel, exceeding even the dominant pseudoscalar channel \(G_{D}\to KK\) (see Appendix A, Table 8).
As discussed in [45], the holographic prediction for the total rate \(G_{D}\to 4\pi\) is somewhat reduced by a destructive interference from \(G_{D}\to\rho\pi\pi\), rendering the partial width of \(G_{D}\to 4\pi\) similar to and slightly less than \(G_{D}\to KK\)[46]. Remarkably, data from radiative \(J/\psi\) decays [94] for \(f_{0}(1740)\) (or \(f_{0}(1770)\) in [24]) seem to be fairly consistent with this result.
#### V.1.1 Dilatonic scalar glueball \(1\gamma\)-decays
From the interaction terms (5.1) we can also derive the interactions including photons by using VMD. Replacing one vector meson by a photon we find
\[\mathcal{L}_{G_{D}\mathcal{V}v^{m}}= 2d_{3}^{m\mathcal{V}}\eta^{\rho\sigma}\operatorname{tr}\,\left(F_{\mu\rho}^{m}F_{\nu\sigma}^{\mathcal{V}}\right)\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\square}\right)G_{D}, \tag{5.5}\]
with
\[d_{3}^{m\mathcal{V}}\equiv \kappa\int\mathrm{d}z\,K^{-1/3}\psi_{2m-1}H_{D}\] \[=\left\{0.46895,\dots\right\}\frac{1}{M_{\mathrm{KK}}\sqrt{N_{c}}}. \tag{5.6}\]
The other coupling \(d_{2}^{m\mathcal{V}}\) vanishes for an on-shell photon, since at zero virtuality its radial mode is constant and drops out under the replacement \(\psi^{\prime}\to\mathcal{J}^{\prime}=0\).
In radiative decays, only the transverse amplitude remains, which reads
\[\left|\mathcal{M}_{T}^{(G_{D}\to\mathcal{V}v^{m})}\right|=\frac{d_{3}^{m\mathcal{V}}\left(m_{v}^{4}-4m_{v}^{2}M_{D}^{2}+3M_{D}^{4}\right)}{2M_{D}^{2}}\operatorname{tr}\,\left(eQT_{v}\right), \tag{5.7}\]
yielding
\[\Gamma_{G_{D}\to v^{m}\gamma}=2\left|\mathcal{M}_{T}^{(G_{D}\to\mathcal{V}v^{m})}\right|^{2}\frac{|\mathbf{p}_{v}|}{8\pi M_{D}^{2}}. \tag{5.8}\]
The results are displayed in Table 5 for two mass parameters corresponding to \(f_{0}(1500)\) and \(f_{0}(1710)\), where ideal mixing was assumed for the \(\omega\) and \(\phi\) mesons. The latter implies that \(\rho\gamma\) and \(\omega\gamma\) decay rates are very close to the ratio \(9:1\); a more realistic value of \(\theta_{V}=35.4^{\circ}\) changes this ratio only by a few percent. The ratio of decay rates \(\phi\gamma\) and \(\omega\gamma\), which would be 2:1 with equal masses, is, however, significantly reduced by the larger \(\phi\) mass.
#### V.1.2 Dilatonic scalar glueball \(2\gamma\)-decays
Replacing the second vector meson by a photon by means of VMD, we obtain the \(2\gamma\) interactions
\[\mathcal{L}_{G_{D}\mathcal{V}\mathcal{V}}= d_{3}^{\mathcal{V}\mathcal{V}}\eta^{\rho\sigma}\operatorname{tr}\,\left(F_{\mu\rho}^{\mathcal{V}}F_{\nu\sigma}^{\mathcal{V}}\right)\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\square}\right)G_{D}, \tag{5.9}\]
with
\[d_{3}^{\mathcal{V}\mathcal{V}}\equiv \kappa\int\mathrm{d}z\,K^{-1/3}H_{D}=0.0130195\lambda^{1/2}M_{\mathrm{KK}}^{-1}, \tag{5.10}\]
which gives
\[\left|\mathcal{M}_{T}^{(G_{D}\to\mathcal{V}\mathcal{V})}\right|=\frac{3}{2}d_{3}^{\mathcal{V}\mathcal{V}}M_{D}^{2}\operatorname{tr}\left(e^{2}Q^{2}\right). \tag{5.11}\]
The resulting width
\[\Gamma_{G_{D}\to\gamma\gamma}=\frac{1}{8\pi}\left|\mathcal{M}_{T}^{(G_{D}\to\mathcal{V}\mathcal{V})}\right|^{2}\frac{|\mathbf{p}_{\mathcal{V}}|}{M_{D}^{2}} \tag{5.12}\]
is again displayed in Table 5 for the two mass parameters corresponding to \(f_{0}(1500)\) and \(f_{0}(1710)\), which in both cases is above 1 keV.
This is larger than the old prediction by Kada et al. [53], but an order of magnitude smaller than the VMD based result of Cotanch and Williams [54], who obtained 15.1 keV for a scalar glueball with mass 1700 MeV after correcting their previous result of 2.6 keV in [95] (note that the corresponding preprint has erroneously 2.6 eV instead). Also all other radiative decay rates obtained in [53] are about an order of magnitude larger than ours (not uniformly so, however, but varying between factors of 7 and 26, thereby deviating from the ratios discussed at the end of Sect. V.1.1).
On the other hand, the two-vector meson decay rates obtained in [95] (44.4 MeV for \(\rho\rho\) and 34.6 MeV for \(\omega\omega\)) are not very far from our results. In fact, our holographic prediction for \(f_{0}(1710)\to\omega\omega\) with \(f_{0}(1710)\) as a (predominantly dilatonic) glueball appears to be in the right ballpark considering the measured branching ratios of radiative \(J/\psi\) decays into \(\gamma f_{0}(1710)\to\gamma K\bar{K}\) and \(\gamma f_{0}(1710)\to\gamma\omega\omega\) [68] (which according to [24] may instead be \(f_{0}(1770)\)). The PDG [68] quotes two results for \({\cal B}(K\bar{K})\): a BNL measurement [96] from 1986 with \({\cal B}(K\bar{K})=0.38^{+0.09}_{-0.19}\) and a phenomenological analysis [97] concluding 0.36(12), both of which are consistent with the WSS result of approximately 0.35 obtained in [46]. Using \({\cal B}(K\bar{K})=0.36(12)\) and the total decay width of \(f_{0}(1710)\) [68] of 123(18) MeV leads to a partial decay width for \(f_{0}(1710)\to\omega\omega\) of about 15(8) MeV, for which the holographic prediction from \(G_{D}\) amounts to \(16.6\ldots 22.0\) MeV.
No experimental results for single-photon decays of \(f_{0}(1710)\) appear to be available, but in [52] the BELLE collaboration reports a measurement of \(f_{0}(1710)\to\gamma\gamma\) with the result \(\Gamma_{\gamma\gamma}{\cal B}(K\bar{K})=12^{+3+227}_{-2-8}\) eV, with the stated conclusion that the \(f_{0}(1710)\) meson was unlikely to be a glueball because its width exceeds that expected ("much less than 1 eV") for a pure glueball state. However, the holographic prediction \(\Gamma_{\gamma\gamma}{\cal B}(K\bar{K})\approx 690\ldots 520\) eV lies \(3\ldots 2\,\sigma\) above the upper limit of the BELLE result.4 Ironically, the BELLE result for the two-photon rate appears to be rather too small for a pure (predominantly dilatonic) glueball interpretation of \(f_{0}(1710)\) within the WSS model.5 The central value of the BELLE result for \(\Gamma_{\gamma\gamma}\) of only a few tens of eV would thus seem to indicate that VMD does not apply for radiative decays of \(f_{0}(1710)\).
Footnote 4: Older upper limits for \(\Gamma_{\gamma\gamma}{\cal B}(K\bar{K})\) are 480 eV from ARGUS [98], 200 eV from CELLO [99], and 560 eV from TASSO [100]. (The latter two are quoted by the PDG [68] with lower values, 110 eV and 280 eV, respectively, corresponding however to the assumption of helicity 2 which leads to smaller upper limits.)
Footnote 5: Assuming a tensor glueball \(f_{2}(1720)\), [53] predicted \(\Gamma_{\gamma\gamma}{\cal B}(K\bar{K})\approx 95\) eV.
In Appendix B we also evaluate radiative decays of the exotic glueball of the WSS model. The two-photon decay width of \(G_{E}\) is considerably smaller than that of \(G_{D}\), \(87\ldots 65\) eV, when the mass of \(G_{E}\) is extrapolated to that of \(f_{0}(1710)\).
\begin{table}
\begin{tabular}{l c} \hline \hline & \(\Gamma_{G_{D}}\)[keV] \\ \hline \(f_{0}\,(1500)\to\rho\gamma\) & 184 \\ \(f_{0}\,(1500)\to\omega\gamma\) & 19.9 \\ \(f_{0}\,(1500)\to\phi\gamma\) & 14.1 \\ \(f_{0}\,(1500)\to\gamma\gamma\) & 1.74\(\ldots\) 1.32 \\ \hline \(f_{0}\,(1710)\to\rho\rho\) & \((53.5\ldots 71.0)\cdot 10^{3}\) \\ \(f_{0}\,(1710)\to\omega\omega\) & \((16.6\ldots 22.0)\cdot 10^{3}\) \\ \(f_{0}\,(1710)\to\rho\gamma\) & 276 \\ \(f_{0}\,(1710)\to\omega\gamma\) & 30.1 \\ \(f_{0}\,(1710)\to\phi\gamma\) & 29.4 \\ \(f_{0}\,(1710)\to\gamma\gamma\) & 1.98\(\ldots\) 1.50 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Radiative scalar glueball decay with \(G_{D}\) identified alternatively with \(f_{0}(1500)\) and \(f_{0}(1710)\) with masses 1506 MeV and 1712 MeV, respectively, for \(\lambda=16.63\ldots 12.55\).
However, the decay pattern of \(G_{E}\) fits neither \(f_{0}(1500)\) nor \(f_{0}(1710)\) when extrapolated to their masses.
### Tensor Glueball Decays
The holographic mode functions associated with tensor glueballs are reviewed in Appendix A.3 together with the results of hadronic two-body decays.
Radiative decays of tensor glueballs can be derived from the interaction terms with two vector mesons, which are given by
\[\mathcal{L}_{G_{T}v^{m}v^{n}}= \,\mathrm{tr}\,\left[t_{2}M_{\mathrm{KK}}^{2}v_{\mu}^{m}v_{\nu}^{ n}G_{T}^{\mu\nu}+t_{3}F_{\mu\rho}^{m}F_{\nu}^{n\rho}G_{T}^{\mu\nu}\right], \tag{5.13}\]
with
\[t_{2}^{mn}= \,\kappa\int\mathrm{d}zK\psi_{2m-1}^{\prime}\psi_{2n-1}^{\prime}T=2\sqrt{3}d_{2}^{mn}, \tag{5.14}\]
\[t_{3}^{mn}= \,\kappa\int\mathrm{d}zK^{-1/3}\psi_{2m-1}\psi_{2n-1}T=2\sqrt{3}d_{3}^{mn} \tag{5.15}\]
and \(d_{2,3}^{mn}\) as given in (5.2). (Note that due to a different normalization of the tensor field, the tensor coupling constants differ from those in [45] by a factor \(\sqrt{2}\); all other glueball coupling constants are defined as in [45].)
The decay rate of a tensor glueball into two vector mesons reads
\[\Gamma_{G_{T}\to vv}=\frac{1}{S} \bigg{\{}\frac{t_{2}^{2}}{120}\frac{M_{\mathrm{KK}}^{4}}{m_{v}^{4 }}(M_{G}^{4}+12m_{v}^{2}M_{G}^{2}+56m_{v}^{4})\] \[\,\,+\frac{2}{3}t_{2}t_{3}M_{\mathrm{KK}}^{2}(M_{G}^{2}-m_{v}^{2})\] \[\,\,+\frac{t_{3}^{2}}{10}(M_{G}^{4}-3m_{v}^{2}M_{G}^{2}+6m_{v}^{4 })\bigg{\}}\frac{|\mathbf{p}_{v}|}{8\pi M_{G}^{2}}, \tag{5.16}\]
where \(S\) is again the symmetry factor for identical particles.
#### V.2.1 Tensor glueball \(1\gamma\)-decays
Through VMD (5.13) leads to a coupling of the tensor glueball with one photon and one vector meson with interaction Lagrangian
\[\mathcal{L}_{G_{T}v^{n}\mathcal{V}}= 2t_{3}^{\mathcal{V}n}G_{T}^{\mu\nu}\eta^{\rho\sigma}\,\mathrm{tr }\,\left(F_{\mu\rho}^{\mathcal{V}}F_{\nu\sigma}^{n}\right), \tag{5.17}\]
with
\[t_{3}^{\mathcal{V}n}= \,\int\mathrm{d}zK^{-1/3}\psi_{2n-1}T=2\sqrt{3}d_{3}^{\mathcal{V}n} \tag{5.18}\]
and \(d_{3}^{\mathcal{V}n}\) as given in (5.6).
This yields
\[\Gamma_{G_{T}\to v\gamma}=\frac{\left(t_{3}^{\mathcal{V}n}\right)^{2}}{15M_{G}^{4}}\left(\,\mathrm{tr}\,\left(eQT_{v}\right)\right)^{2}\left(M_{G}^{2}-m_{v}^{2}\right)^{2}\left(6M_{G}^{4}+3M_{G}^{2}m_{v}^{2}+2m_{v}^{4}\right)\frac{|\mathbf{p}_{v}|}{8\pi M_{G}^{2}}. \tag{5.19}\]
#### V.2.2 Tensor glueball \(2\gamma\)-decays
Similarly (5.13) leads to
\[\mathcal{L}_{G_{T}\mathcal{VV}}= t_{3}^{\mathcal{VV}}G_{T}^{\mu\nu}\eta^{\rho\sigma}\,\mathrm{tr}\,\left(F_{\mu\rho}^{\mathcal{V}}F_{\nu\sigma}^{\mathcal{V}}\right), \tag{5.20}\]
with
\[t_{3}^{\mathcal{VV}}= \kappa\int\mathrm{d}zK^{-1/3}T=2\sqrt{3}d_{3}^{\mathcal{VV}}, \tag{5.21}\]
and \(d_{3}^{\mathcal{VV}}\) as given in (5.10).
The resulting two-photon decay width of the tensor glueball is given by
\[\Gamma_{G_{T}\rightarrow\gamma\gamma}=\frac{1}{5}\left[t_{3}^{\mathcal{VV}}M_ {G}^{2}\,\mathrm{tr}\,\left(e^{2}Q^{2}\right)\right]^{2}\frac{|\mathbf{p}_{ \gamma}|}{8\pi M_{G}^{2}}. \tag{5.22}\]
The resulting partial widths are listed in Table 6 for three values of the mass of the tensor glueball, the unrealistically small WSS model mass value 1487 MeV as well as two higher values motivated by pomeron physics [15] and QCD lattice studies [11], respectively, assuming ideal mixing of \(\omega\) and \(\phi\) mesons. With increasing mass of the glueball, the partial decay widths for \(\rho\gamma\), \(\omega\gamma\), and \(\phi\gamma\) gradually approach the ratios \(9:1:2\) valid for degenerate vector meson masses; again, a more realistic value of \(\theta_{V}\) changes the \(\omega\gamma\) and \(\phi\gamma\) rates only slightly.
The radiative decay widths obtained for the tensor glueball turn out to be comparable with those for the dilatonic scalar glueball at equal glueball mass, rising approximately linearly with glueball mass (due to the rescaling described in Sect. IV.2).
Our prediction of the two-photon width of \(\sim\) 2-3 keV is significantly larger than the old prediction of Kada et al. [53], who obtained values in the range of hundreds of eV, and also higher than the more recent prediction in [101], where \(\Gamma_{f_{2}(1950)\rightarrow\gamma\gamma}=960(50)\) eV was obtained. Cotanch and Williams [54], on the other hand, also have results above 1 keV, \(\Gamma_{G_{T}(2010)\rightarrow\gamma\gamma}=1.72\) keV and \(\Gamma_{G_{T}(2300)\rightarrow\gamma\gamma}=1.96\) keV, by using VMD. Also their results for single-photon decays are comparable with ours, even though their results for decays into two vector mesons are significantly smaller than ours. A particular point of disagreement is their result for a relatively large \(\omega\phi\) decay mode, which in the WSS model is absent. As noted in [102], this is possible only by allowing for a rather strong deviation from the large-\(N_{c}\) limit.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \(\Gamma_{G_{T}^{\text{WSS}}}\)[keV] & \(\Gamma_{G_{T}(2000)}\)[keV] & \(\Gamma_{G_{T}(2400)}\)[keV] \\ \hline \(G_{T}\rightarrow\rho\rho\) & - & \((270\ldots 358)\)\(\cdot 10^{3}\) & \((382\ldots 507)\)\(\cdot 10^{3}\) \\ \(G_{T}\rightarrow\omega\omega\) & - & \((88.2\ldots 117)\)\(\cdot 10^{3}\) & \((127\ldots 169)\)\(\cdot 10^{3}\) \\ \(G_{T}\to K^{*}K^{*}\) & - & \((240\ldots 318)\)\(\cdot 10^{3}\) & \((417\ldots 552)\)\(\cdot 10^{3}\) \\ \(G_{T}\rightarrow\phi\phi\) & - & - & \((76.7\ldots 102)\)\(\cdot 10^{3}\) \\ \hline \(G_{T}\rightarrow\rho\gamma\) & 260 & 522 & 716 \\ \(G_{T}\rightarrow\omega\gamma\) & 28.3 & 57.5 & 79.1 \\ \(G_{T}\rightarrow\phi\gamma\) & 24.7 & 81.1 & 127 \\ \hline \hline \(G_{T}\rightarrow\gamma\gamma\) & \(1.84\ldots 1.39\) & \(2.47\ldots 1.86\) & \(2.97\ldots 2.24\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Radiative tensor glueball decays and decays into two vector mesons for \(\lambda=16.63\ldots 12.55\). Besides the pristine results for the WSS model mass of 1487 MeV, their extrapolations to glueball masses of 2000 and 2400 MeV are given.
### Pseudoscalar Glueball Decays
In the WSS model, the pseudoscalar glueball is represented by a Ramond-Ramond 1-form field \(C_{1}\), which has a kinetic mixing with the singlet \(\eta_{0}\) given by [50]
\[\eta_{0}\to\eta_{0}+\zeta_{2}\,G_{PS}=\eta_{0}+0.01118\sqrt{N_{f}/N_{c}}\,\lambda \,G_{PS}, \tag{5.23}\]
with \(G_{PS}\) remaining unchanged to leading order in \(\sqrt{N_{f}/N_{c}}\) (formally treated as a small quantity because of the probe brane approximation). In contrast to the conventional mixing scenarios of Refs. [16; 103], mass mixing is absent here, while the mass of the pseudoscalar glueball is raised by a factor \(\sqrt{1+\zeta_{2}^{2}}\) from 1789 MeV to \((1819.7\ldots 1806.5)\) MeV for \(\lambda=16.63\ldots 12.55\). Lattice QCD (in the quenched approximation), however, typically finds values around 2600 MeV, so we also consider the latter in our extrapolations.6
Footnote 6: Note that historically the pseudoscalar glueball was expected to be the lightest glueball, with \(\eta(1405)\) a prominent candidate after \(\iota(1440)\)[104] was split into \(\eta(1405)\) and \(\eta(1475)\). This is still occasionally considered a possibility, see for example [105] and [106].
Through (5.23) the pseudoscalar glueball acquires the same interactions as \(\eta_{0}\), and the same form of transition form factors, only with correspondingly modified coupling constants. Thus the formulae given in Sect. III.2 for the decays of pseudoscalars into vector mesons or photons remain essentially unchanged, but the higher mass of the pseudoscalar glueball also permits decays into pairs of vector mesons.
The resulting interaction Lagrangian reads
\[{\cal L}_{G_{PS}vv/v{\cal V}/{\cal V}{\cal V}}=G_{PS}\epsilon^{\mu\nu\rho\sigma }\,{\rm tr}\,\left[k_{1}^{vv}\partial_{\mu}v_{\nu}\partial_{\rho}v_{\sigma}+2 k_{1}^{v{\cal V}}\partial_{\mu}v_{\nu}\partial_{\rho}{\cal V}_{\sigma}+k_{1}^{{ \cal V}{\cal V}}\partial_{\mu}{\cal V}_{\nu}\partial_{\rho}{\cal V}_{\sigma}\right] \tag{5.24}\]
with7
Footnote 7: The couplings differ by a factor of 2 from [50] since we use SU(3) generators \(T^{a}=\lambda^{a}/2\).
\[k_{1}^{v^{1}v^{1}} = 19.6184N_{c}^{-1}\lambda^{-1/2}M_{\rm KK}^{-1}, \tag{5.25}\] \[k_{1}^{v^{1}{\cal V}} = 0.493557N_{c}^{-1/2}M_{\rm KK}^{-1},\] (5.26) \[k_{1}^{{\cal V}{\cal V}} = 0.0145232\lambda^{1/2}M_{\rm KK}^{-1}. \tag{5.27}\]
The various resulting partial widths are listed in Table 7.
In the WSS model, all other hadronic decay channels of the pseudoscalar glueball, such as those considered in [107; 108], turn out to be very weak compared to two-vector-meson decays [50]. The relative strength of the latter entails correspondingly important radiative decay modes, and a two-photon partial width in the keV range. Note, however, that these results have been obtained from the first term in a formal expansion in \(\sqrt{N_{f}/N_{c}}\), which is not a small parameter in real QCD. It might nevertheless be meaningful, since the parameter \(\zeta_{2}\) in (5.23) is reasonably small, \(0.19\ldots 0.14\) for \(\lambda=16.63\ldots 12.55\).
## VI Glueball contributions to hadronic light-by-light scattering and the muon \(g-2\)
In order to calculate the contribution of the glueball exchange diagram in the light-by-light scattering amplitude, which enters the muon-photon vertex at two-loop order, the above results for the vertices of a glueball with two on-shell photons need to be generalized to nonzero photon virtualities.
In the case of the dilatonic scalar glueball \(G_{D}\), this involves two interaction terms that are obtained by replacing \(v_{\mu}\) in (5.1) by \(eQA_{\mu}^{e.m.}\) and the holographic profile functions \(\psi(z)\) in (5.2) by the bulk-to-boundary propagator \({\cal J}(Q,z)\) defined in (3.29), yielding two form factors,
\[d_{2}^{\cal VV}(Q_{1}^{2},Q_{2}^{2}) \equiv \kappa\int dz\,K\partial_{z}{\cal J}(Q_{1},z)\partial_{z}{\cal J }(Q_{2},z)H_{D}(z),\] \[d_{3}^{\cal VV}(Q_{1}^{2},Q_{2}^{2}) \equiv \kappa\int dz\,K^{-1/3}{\cal J}(Q_{1},z){\cal J}(Q_{2},z)H_{D}(z), \tag{6.1}\]
in place of the coupling constants \(d_{2}\) and \(d_{3}\).
The exotic scalar glueball \(G_{E}\) has more complicated interactions with two vector fields, written out in (B1), with five coupling constants (B2). The latter are generalized in a completely analogous manner to form factors \(c_{i}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})\) with \(i=2,3,4\), and \(\breve{c}_{j}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})\) with \(j=2,3\).
Following the notation of [109], the result for the matrix element of a scalar glueball with two electromagnetic currents \(j_{\rm em}^{\mu}(x)\) can be written in terms of two transition form factors \({\cal F}_{1,2}^{S}\) defined by
\[{\cal M}^{\mu\nu}(p\to q_{1},q_{2}) = i\int d^{4}xe^{iq_{1}\cdot x}\langle 0\,|j_{\rm em}^{\mu}(x)j_{ \rm em}^{\nu}(0)|\,G_{S}(p)\rangle \tag{6.2}\] \[= \frac{{\cal F}_{1}^{S}(q_{1}^{2},q_{2}^{2})}{M_{S}}T_{1}^{\mu\nu} +\frac{{\cal F}_{2}^{S}(q_{1}^{2},q_{2}^{2})}{M_{S}^{3}}T_{2}^{\mu\nu}\]
with
\[T_{1}^{\mu\nu} = q_{1}\cdot q_{2}g^{\mu\nu}-q_{2}^{\mu}q_{1}^{\nu},\] \[T_{2}^{\mu\nu} = q_{1}^{2}q_{2}^{2}g^{\mu\nu}+q_{1}\cdot q_{2}q_{1}^{\mu}q_{2}^{ \nu}-q_{1}^{2}q_{2}^{\mu}q_{2}^{\nu}-q_{2}^{2}q_{1}^{\mu}q_{1}^{\nu}. \tag{6.3}\]
For the dilatonic scalar glueball we obtain
\[{\cal F}_{1}^{D} = -2\frac{d_{3}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})\,{\rm tr}\,Q^{2}}{M_{D}}\left[(q_{1}^{2}+q_{2}^{2})+(q_{1}\cdot q_{2})+2M_{D}^{2}\right]-\frac{d_{2}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})M_{\rm KK}^{2}\,{\rm tr}\,Q^{2}}{M_{D}}, \tag{6.4}\] \[{\cal F}_{2}^{D} = -2d_{3}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})\,{\rm tr}\,Q^{2}M_{D}+\frac{d_{2}^{\cal VV}(Q_{1}^{2},Q_{2}^{2})M_{\rm KK}^{2}\,{\rm tr}\,Q^{2}M_{D}}{q_{1}^{2}q_{2}^{2}}\left[(q_{1}\cdot q_{2})+M_{D}^{2}\right], \tag{6.5}\]
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(\Gamma_{G_{PS}}^{\rm WSS}\)[keV] & \(\Gamma_{G_{PS}(2600)}\)[keV] \\ \hline \(G_{PS}\to\rho\rho\) & \((36.8\ldots 45.0)\cdot 10^{3}\) & \((190\ldots 248)\cdot 10^{3}\) \\ \(G_{PS}\to\omega\omega\) & \((11.3\ldots 13.8)\cdot 10^{3}\) & \((62.2\ldots 81.3)\cdot 10^{3}\) \\ \(G_{PS}\to\phi\phi\) & - & \((29.2\ldots 38.2)\cdot 10^{3}\) \\ \(G_{PS}\to K^{*}K^{*}\) & \((2.69\ldots 1.81)\cdot 10^{3}\) & \((188\ldots 246)\cdot 10^{3}\) \\ \hline \(G_{PS}\to\rho\gamma\) & \(272\ldots 263\) & \(536\ldots 528\) \\ \(G_{PS}\to\omega\gamma\) & \(29.8\ldots 28.9\) & \(59.2\ldots 58.3\) \\ \(G_{PS}\to\phi\gamma\) & \(35.6\ldots 34.1\) & \(95.4\ldots 94.0\) \\ \hline \(G_{PS}\to\gamma\gamma\) & \(1.75\ldots 1.30\) & \(2.49\ldots 1.86\) \\ \hline \end{tabular}
\end{table}
Table 7: Radiative pseudoscalar glueball decays and decays into two vector mesons for \(\lambda=16.63\ldots 12.55\). Besides the WSS model result for the pseudoscalar glueball mass, \(M_{G}=1813\pm 7\) MeV, an extrapolation to 2600 MeV (motivated by lattice results) is considered.
and for the exotic scalar glueball
\[{\cal F}_{1}^{E} = -2\frac{\,{\rm tr}\,Q^{2}}{M_{E}}\left[c_{3}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})((q_{1}^{2}+q_{2}^{2})+(q_{1}\cdot q_{2})+M_{E}^{2})-\breve{c}_{3}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})M_{E}^{2}\right. \tag{6.6}\] \[\left.+c_{2}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})M_{\rm KK}^{2}-\frac{3}{2}\left(c_{4}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})+c_{4}^{\mathcal{V}\mathcal{V}}(Q_{2}^{2},Q_{1}^{2})\right)\right],\] \[{\cal F}_{2}^{E} = -2\,{\rm tr}\,Q^{2}M_{E}\left[c_{3}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})-c_{2}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})M_{\rm KK}^{2}\frac{(q_{1}\cdot q_{2})}{q_{1}^{2}q_{2}^{2}}\right. \tag{6.7}\] \[\left.+c_{2}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})\frac{M_{E}^{2}M_{\rm KK}^{2}}{q_{1}^{2}q_{2}^{2}}-\frac{3}{2}M_{\rm KK}^{2}\frac{c_{4}^{\mathcal{V}\mathcal{V}}(Q_{1}^{2},Q_{2}^{2})q_{1}^{2}+c_{4}^{\mathcal{V}\mathcal{V}}(Q_{2}^{2},Q_{1}^{2})q_{2}^{2}}{q_{1}^{2}q_{2}^{2}}\right],\]
where \(q_{1}\cdot q_{2}=-\frac{1}{2}(q_{1}^{2}+q_{2}^{2}+M_{D/E}^{2})\) and \(\,{\rm tr}\,Q^{2}=2/3\) for \(N_{f}=3\).
We have used these results to estimate the glueball contribution to the muon anomalous magnetic moment \(a_{\upmu}=(g-2)_{\upmu}/2\) in a narrow-width approximation by inserting the above expressions in the two-loop expression for the muon-photon vertex.
In the scenario where the exotic scalar glueball is discarded from the spectrum and \(G_{D}\) is identified with the ground-state scalar glueball, we obtain for \(M_{D}=1506\) MeV and \(M_{D}=1712\) MeV corresponding to the glueball candidates \(f_{0}(1500)\) and \(f_{0}(1710)\)
\[a_{\upmu}^{G_{D}(1506)} = -1.62\times 10^{-12},\] \[a_{\upmu}^{G_{D}(1712)} = -1.01\times 10^{-12}. \tag{6.8}\]
While the former result is approximately identical to the unmodified WSS result, since \(M_{D}^{\rm WSS}=1487\) MeV, the latter depends on the specific extrapolations laid out in Sect. IV.2. Had we only raised the mass, it would have been somewhat larger, \(-1.35\times 10^{-12}\), but in this case the rather good agreement of the hadronic decay pattern obtained for \(G_{D}(1712)\) with the experimental results for the glueball candidate \(f_{0}(1710)\) (or \(f_{0}(1770)\) according to [24]) would have deteriorated.
If the exotic scalar glueball is not discarded from the spectrum but identified with the ground-state scalar glueball, its mass needs to be raised substantially to match the predictions from lattice QCD. Its decay pattern, and in particular its large width, then does not fit either \(f_{0}(1500)\) or \(f_{0}(1710)\); it might instead be identified with the broad "fragmented" glueball \(G(1865)\) proposed in [8; 24; 25]. Raising the mass of \(G_{E}\) artificially to that of this glueball, we obtain for its \(a_{\upmu}\) contribution
\[a_{\upmu}^{G_{E}(1865)}=-0.10\times 10^{-12}, \tag{6.9}\]
which is an order of magnitude smaller in accordance with the much smaller two-photon rate of \(G_{E}\). Since in this case the narrow-width approximation is rather questionable, we have also considered the space-like Breit-Wigner function proposed in [110]. However, this changes the result (6.9) only by about 2%.
In [110] the authors consider scalar resonances including \(f_{0}(1500)\), which is assumed to have a sizeable photon coupling while being a glueball-like state, with a coupling constant similar to the one obtained for \(f_{0}(980)\), leading to \(\Gamma^{f_{0}(1500)\to\gamma\gamma}\approx 0.79\,{\rm keV}\). The assumed transition form factors therein yield \(a_{\upmu}=-(1.3\ldots 2)\times 10^{-12}\). This is comparable to our results, even though the two-photon rate obtained with \(G_{D}\) is about twice as large.
In the WSS model, tensor glueballs have two-photon decay rates comparable to \(G_{D}\) with similar values of \(\Gamma_{\gamma\gamma}/M_{G}\). We have not evaluated their contribution to \(a_{\upmu}\), but we expect that they will be smaller than those of \(G_{D}\) by some power of the ratio \(M_{G_{T}}/M_{G_{D}}\).
We have however evaluated the contribution of pseudoscalar glueballs, which contribute with a positive sign. With the WSS model mass of 1789 MeV we find \(a_{\upmu}^{G_{PS}^{\rm WSS}}=0.39\times 10^{-12}\), and when
extrapolated to a value typically found in quenched lattice QCD calculations of 2600 MeV this reduces to
\[a_{\upmu}^{G_{PS}(2600)}=0.19\times 10^{-12}. \tag{6.10}\]
This is about an order of magnitude smaller than the pseudoscalar contribution called \(G/\eta^{\prime\prime}\) in the bottom-up holographic model of [111], \(a_{\upmu}^{\eta^{\prime\prime}}\approx 2\times 10^{-12}\). In this more realistic model, the pseudoscalar glueball mixes not only with \(\eta_{0}\) but also with excited \(\eta^{(\prime)}\) mesons (which are absent in our simple extension of the WSS model to massive pseudoscalars).
###### Acknowledgements.
We would like to thank Claude Amsler for useful discussions. We are also indebted to Jonas Mager for his assistance in the numerical evaluation of the contributions to the anomalous magnetic moment of the muon. F. H. and J. L. have been supported by the Austrian Science Fund FWF, project no. P 33655-N and the FWF doctoral program Particles & Interactions, project no. W1252-N27.
## Appendix A Hadronic decays of the scalar and tensor glueballs
In the following we review the hadronic decays of scalar and tensor glueballs in the WSS model as worked out in [45; 46; 47], including additional subdominant decay channels neglected therein, in particular \(G\to a_{1}\pi\). The latter has been emphasized in the phenomenological analysis of [112], where it provides the largest partial decay width of a pure glueball (177 MeV for a glueball mass of 1600 MeV). While their results for decays of a scalar glueball into two vector mesons are remarkably compatible with the WSS result for \(G_{D}\) when the mass is raised to 1500-1700 MeV, the WSS prediction for \(G\to a_{1}\pi\) turns out to be fairly small, \(\lesssim 1\) MeV, in stark contrast to the model of [112].8
Footnote 8: For \(f_{0}(1500)\) the experimental value from [113] is 12(5)% of \(\Gamma_{4\pi}\), i.e., \(\sim 7\) MeV; for \(f_{0}(1710)\) no corresponding experimental results seem to be available.
We also review the dependence on the so far unconstrained extra coupling to be associated with the quark mass term that we have added to the chiral WSS model (parametrized by \(x\) in Table 4). As discussed in [47], this correlates the flavor asymmetries in the decay pattern in two pseudoscalars with the \(\eta\eta^{\prime}\) partial width. Good agreement of the decay pattern of \(G_{D}\) with \(f_{0}(1710)\) (or \(f_{0}(1770)\)) is obtained only for small or vanishing \(\eta\eta^{\prime}\) decay rates. Here a new experimental result has been published in [114]: \({\cal B}(f_{0}(1710)\to\eta\eta^{\prime})/{\cal B}(f_{0}(1710)\to\pi\pi)<1.61 \times 10^{-3}\), contradicting [24; 25] where this ratio is \(\sim 1\) for \(f_{0}(1710)\) and \(\sim 0.1\) for \(f_{0}(1770)\).
### Dilatonic scalar glueball
The scalar glueball fluctuation, which in [45] is referred to as the (predominantly) dilatonic scalar glueball, reads
\[\delta G_{\mu\nu} = \frac{r^{2}}{{\cal N}_{D}\,L^{2}}T_{4}(r)\left(\eta_{\mu\nu}-\frac{\partial_{\mu}\partial_{\nu}}{\square}\right)G_{D}(x^{\sigma}),\] \[\delta G_{11,11} = -3\frac{r^{2}}{{\cal N}_{D}\,L^{2}}T_{4}(r)G_{D}(x^{\sigma}), \tag{A1}\]
with an undetermined normalization parameter \({\cal N}_{D}\). To be a solution of the Einstein equations, the radial function \(T_{4}(r)\) has to satisfy the differential equation
\[\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{7}-r\,r_{\rm KK}^{6}\right)\frac{\mathrm{d}}{\mathrm{d}r}T_{4}(r)+L^{4}M_{D}^{2}r^{3}T_{4}(r)=0, \tag{A2}\]
with boundary conditions \(T_{4}(r_{\mathrm{KK}})=1\) and \(T_{4}^{\prime}(r_{\mathrm{KK}})=0\), and therefore is normalizable for a discrete set of mass eigenvalues \(M_{D}\). In the following, we will only consider the lightest mode with \(M_{D}=1.567M_{\mathrm{KK}}=1487\,\mathrm{MeV}\).
The kinetic and mass term for \(G_{D}\) reads
\[\mathcal{L}_{4}|_{G_{D}^{2}}=\mathcal{C}\int\mathrm{d}r\frac{3r^{3}T_{4}(r)^{2}}{L^{3}{\cal N}_{D}^{2}}G_{D}\left(\Box-M_{D}^{2}\right)G_{D} \tag{A3}\]
with the constant
\[\mathcal{C}= \left(\frac{L}{2}\right)^{4}\Omega_{4}\frac{1}{2\kappa_{11}^{2}}\left(2\pi\right)^{2}R_{4}R_{11}. \tag{A4}\]
The radial integration for the lightest mode yields the constant
\[\mathcal{C}_{D}=\int\mathrm{d}r\frac{3r^{3}T_{4}(r)^{2}}{L^{3}}=0.22547\left[T_{4}(r_{\mathrm{KK}})\right]^{2}\frac{r_{\mathrm{KK}}^{4}}{L^{3}}. \tag{A5}\]
To get a canonically normalized kinetic term
\[\mathcal{L}_{4}|_{G_{D}^{2}}=\frac{1}{2}G_{D}\left(\Box-M_{D}^{2}\right)G_{D}, \tag{A6}\]
we have to set
\[{\cal N}_{D}=0.0335879\lambda^{\frac{1}{2}}N_{c}M_{\mathrm{KK}}. \tag{A7}\]
Inducing the fluctuation (A1) in the D8-brane action (5) we obtain the derivative coupling of two pseudoscalar mesons to \(G_{D}\) as
\[\mathcal{L}_{G_{D}\Pi\Pi}=d_{1}\operatorname{tr}\partial_{\mu}\Pi\partial_{\nu}\Pi\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\Box}\right)G_{D} \tag{A8}\]
where
\[d_{1}=\frac{17.2261}{\sqrt{\lambda}M_{\mathrm{KK}}N_{c}} \tag{A9}\]
(see [45] for further couplings).
Already in the chiral WSS model, a mass term arises for the singlet component of \(\Pi\) through the \(U(1)_{A}\) anomaly [42]. The latter requires a redefinition of the Ramond-Ramond 2-form field strength \(F_{2}\) which is associated with a \(\theta\) term. The bulk action is thus given by
\[S_{C_{1}}=-\frac{1}{4\pi(2\pi l_{s})^{6}}\int\mathrm{d}^{10}x\sqrt{-g}|\tilde{F}_{2}|^{2}, \tag{A10}\]
where
\[\tilde{F}_{2}=\frac{6\pi U_{\mathrm{KK}}}{U^{4}M_{\mathrm{KK}}}\left(\theta+\frac{\sqrt{2N_{f}}}{f_{\pi}}\eta_{0}\right)\mathrm{d}u\wedge\mathrm{d}x^{4}, \tag{A11}\]
from which one obtains the Witten-Veneziano mass as [42]
\[m_{0}^{2}=\frac{N_{f}}{27\pi^{2}N_{c}}\lambda^{2}M_{\rm KK}^{2}. \tag{A12}\]
Inducing the metric fluctuations gives rise to an additional coupling between the scalar glueballs and \(\eta_{0}\). For the dilatonic glueball it is given by [46; 47]
\[{\cal L}_{\eta_{0}}\supset\frac{3}{2}m_{0}^{2}\eta_{0}^{2}d_{0}G_{D}, \tag{A13}\]
with
\[d_{0}=3U_{\rm KK}\int_{U_{\rm KK}}^{\infty}{\rm d}uH_{D}(U)U^{-4}\approx\frac{17.915}{\sqrt{\lambda}N_{c}M_{\rm KK}}. \tag{A14}\]
Massive quarks can be introduced by worldsheet instantons [62; 63; 115] or tachyon condensation [65; 66; 116], which give
\[{\cal L}_{m}^{\cal M}\propto\int{\rm d}^{4}x\,{\rm Tr}\left({\cal M}U(x)+{\rm h.c.}\right), \tag{A15}\]
where
\[U(x)={\rm P}\exp i\int{\rm d}zA_{z}(z,x)=e^{i\Pi^{a}\lambda^{a}/f_{\pi}}. \tag{A16}\]
Expanding the mass term with \({\cal M}={\rm diag}(\hat{m},\hat{m},m_{s})\) leads to
\[{\cal L}_{m}^{\cal M}=-\frac{1}{2}m_{\pi}^{2}\pi_{0}^{2}-m_{\pi}^{2}\pi^{+}\pi^{-}-m_{K}^{2}(K_{0}\bar{K}_{0}+K_{+}K_{-})\] \[-\frac{1}{2}m_{1}^{2}\eta_{0}^{2}-\frac{1}{2}m_{8}^{2}\eta_{8}^{2}+\frac{2\sqrt{2}}{3}(m_{K}^{2}-m_{\pi}^{2})\eta_{0}\eta_{8}, \tag{A17}\]
with
\[m_{\pi}^{2}=2\hat{m}\mu,\quad m_{K}^{2}=(\hat{m}+m_{s})\mu,\] \[m_{1}^{2}=\frac{2}{3}m_{K}^{2}+\frac{1}{3}m_{\pi}^{2},\quad m_{8}^{2}=\frac{4}{3}m_{K}^{2}-\frac{1}{3}m_{\pi}^{2}, \tag{A18}\]
and \(\mu\) being the overall scale. We also note a sign error in the \(\eta_{0}\eta_{8}\) mixing term in [46]. With
\[\eta=\eta_{8}\cos\theta_{P}-\eta_{0}\sin\theta_{P},\quad\eta^{\prime}=\eta_{8}\sin\theta_{P}+\eta_{0}\cos\theta_{P} \tag{A19}\]
the mass term is diagonalized by
\[\theta_{P}=\frac{1}{2}\arctan\frac{2\sqrt{2}}{1-\frac{3}{2}m_{0}^{2}/(m_{K}^{2}-m_{\pi}^{2})} \tag{A20}\]
leading to
\[m_{\eta,\eta^{\prime}}^{2}=\frac{1}{2}m_{0}^{2}+m_{K}^{2}\mp\sqrt{\frac{m_{0}^{4}}{4}-\frac{1}{3}m_{0}^{2}(m_{K}^{2}-m_{\pi}^{2})+(m_{K}^{2}-m_{\pi}^{2})^{2}} \tag{A21}\]
for the \(\eta\) and \(\eta^{\prime}\) meson, respectively.
As in [46; 47], we assume a scalar glueball coupling to the quark mass terms of the form (correcting a typo in [47])
\[\mathcal{L}_{G_{D}q\bar{q}}=-3d_{m}G_{D}\mathcal{L}_{m}^{\mathcal{M}} \tag{A22}\]
with \(d_{m}\) being of the same order as \(d_{0}\), i.e.
\[d_{m}=xd_{0},\quad x=\mathcal{O}(1). \tag{A23}\]
This leads to a \(G_{D}\eta\eta^{\prime}\) interaction given by
\[\mathcal{L}_{G_{D}\eta\eta^{\prime}}=-\frac{3}{2}(1-x)d_{0}\sin(2\theta_{P})m_{0}^{2}G_{D}\eta\eta^{\prime}. \tag{A24}\]
With these modifications we obtain the coupling of the dilaton glueball to \(\eta\eta\) as
\[\mathcal{L}_{G_{D}\eta\eta}=\frac{3}{2}d_{0}m_{0}^{2}(1-x)\sin^{2}\theta_{P}\,G_{D}\eta\eta+\frac{3}{2}d_{0}xm_{\eta}^{2}G_{D}\eta\eta+\frac{d_{1}}{2}\partial_{\mu}\eta\partial_{\nu}\eta\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\Box}\right)G_{D}. \tag{A25}\]
For the coupling to the \(\eta^{\prime}\) meson we get \(\cos^{2}\theta_{P}\) instead of \(\sin^{2}\theta_{P}\).
The partial decay width for \(G_{D}\) decaying into two identical pseudoscalar mesons becomes
\[\Gamma_{G_{D}\to PP}=\frac{n_{P}d_{1}^{2}M_{D}^{3}}{512\pi}\left(1-4\frac{m_{P}^{2}}{M_{D}^{2}}\right)^{1/2}\left(1+\alpha\frac{m_{P}^{2}}{M_{D}^{2}}\right)^{2}, \tag{A26}\]
where \(P\) refers to pions (\(n_{P}=3\)), kaons (\(n_{P}=4\)) or \(\eta^{(\prime)}\) (\(n_{P}=1\)) mesons, and
\[\alpha=4\left(3\frac{d_{0}}{d_{1}}x-1\right) \tag{A27}\]
for pions and kaons, and
\[\alpha=4\left[3\frac{d_{0}}{d_{1}}\left(x+\frac{m_{0}^{2}}{m_{P}^{2}}\sin^{2}\theta_{P}(1-x)\right)-1\right] \tag{A28}\]
for \(\eta\eta\), and with the replacement \(\sin\theta_{P}\to\cos\theta_{P}\) for \(\eta^{\prime}\eta^{\prime}\).
There is also a trilinear coupling of a dilatonic scalar glueball with one axial vector and one pseudoscalar meson, which has been neglected in [45], given by
\[\mathcal{L}_{G_{D}\Pi a^{m}}= -2d_{6}^{m}M_{\rm KK}\,{\rm tr}\,\left(\partial_{\mu}\Pi a_{\nu}^{m}\right)\left(\eta^{\mu\nu}-\frac{\partial^{\mu}\partial^{\nu}}{\Box}\right)G_{D}, \tag{A29}\]
with
\[d_{6}^{m} \equiv\sqrt{\frac{\kappa}{\pi}}\int{\rm d}z\,\psi_{2m}^{\prime}H_{D}\] \[=\left\{11.768,7.809,2.350,\ldots\right\}\frac{1}{M_{\rm KK}N_{c}\sqrt{\lambda}}. \tag{A30}\]
Restricting ourselves to two-body decays, for which the relevant vertices for vector mesons are given in Sect. V.1, the resulting partial decay widths are collected in Table 8.
### Exotic scalar glueball
The lighter exotic scalar glueball fluctuation with mass \(M_{E}=0.901M_{\rm KK}=855\) MeV, which we have discarded from the spectrum when identifying the dilatonic scalar glueball with the ground-state glueball of QCD, is given by (4.5) with eigenvalue equation (4.6). This mode involves the metric component \(h_{\tau\tau}\), which has no analogue in other holographic QCD models, and has therefore been termed "exotic" in [40]. Its canonical normalization is obtained from
\[{\cal L}_{4}|_{G_{E}^{2}} = {\cal C}_{E}\int{\rm d}r\frac{r^{3}S_{4}(r)^{2}}{2L^{3}{\cal N}_{E }^{2}}G_{E}\left(\Box-M_{E}^{2}\right)G_{E}\] (A31) \[= \frac{1}{2}G_{E}\left(\Box-M_{E}^{2}\right)G_{E},\]
with
\[{\cal C}_{E}=\int{\rm d}r\frac{r^{3}S_{4}(r)^{2}}{L^{3}}=0.09183 \left[S_{4}(r_{\rm KK})\right]^{2}\frac{r_{\rm KK}^{4}}{L^{3}}\] (A32)
and
\[{\cal N}_{E}=0.008751\lambda^{\frac{1}{2}}N_{c}M_{\rm KK}.\] (A33)
Derivative couplings of pseudoscalars to \(G_{E}\) are given by
\[{\cal L}_{G_{E}}\supset-{\rm tr}\,\left\{c_{1}\left[\partial_{\mu} \Pi\partial_{\nu}\Pi\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}G_{E}+\frac{ 1}{2}\left(\partial_{\mu}\Pi\right)^{2}\left(1-\frac{\Box}{M_{E}^{2}}\right)G_ {E}\right]+\breve{c}_{1}\partial_{\mu}\Pi\partial^{\mu}\Pi G_{E}\right\}\] (A34)
with \(c_{1}\) and \(\breve{c}_{1}\) as in [45].
In the Witten-Veneziano mass term for \(\eta_{0}^{2}\), inducing the metric fluctuations leads to additional couplings between the scalar glueballs and \(\eta_{0}\). For the exotic scalar glueball it is given by
\[{\cal L}_{\eta_{0}}\supset-\frac{5}{2}m_{0}^{2}\eta_{0}^{2}\breve{c}_{0}G_{E},\] (A35)
with
\[\breve{c}_{0}=\frac{3}{4}U_{\rm KK}\int_{U_{\rm KK}}^{\infty}{\rm d}uH_{E}(U)U^{-4}\approx\frac{15.829}{\sqrt{\lambda}N_{c}M_{\rm KK}},\] (A36)
previously studied in [47].
\begin{table}
\begin{tabular}{l c c c c} & \(\Gamma_{G_{D}}^{WSS}\) [MeV] & \(\Gamma_{G_{D}(1506)}\) [MeV] & \(\Gamma_{G_{D}(1712)}\)[MeV] & \(\Gamma_{G_{D}(1865)}\)[MeV] \\ \hline \(G_{D}\to\pi\pi\) & \(12.4\ldots 16.5|15.2\ldots 20.1\) & \(12.6\ldots 16.7|15.4\ldots 20.4\) & \(14.6\ldots 19.3|17.0\ldots 22.5\) & \(16.1\ldots 21.3|18.3\ldots 24.2\) \\ \(G_{D}\to KK\) & \(4.16\ldots 5.51|50.5\ldots 67.0\) & \(4.43\ldots 5.87|50.4\ldots 66.8\) & \(7.49\ldots 9.93|49.4\ldots 65.4\) & \(9.87\ldots 13.1|48.8\ldots 64.7\) \\ \(G_{D}\to\eta\eta\) & \(1.85\ldots 3.71|14.1\ldots 18.7\) & \(1.93\ldots 3.82|14.1\ldots 18.7\) & \(2.77\ldots 4.96|13.9\ldots 18.4\) & \(3.38\ldots 5.75|13.7\ldots 18.1\) \\ \(G_{D}\to\eta\eta^{\prime}\) & - & \(0.29\ldots 0.30|0\) & \(4.35\ldots 4.54|0\) & \(4.19\ldots 4.38|0\) \\ \(G_{D}\to a_{1}\pi\) & \(0.14\ldots 0.18\) & \(0.17\ldots 0.23\) & \(0.66\ldots 0.87\) & \(1.08\ldots 1.43\) \\ \(G_{D}\to\rho\rho\) & - & - & \(53.5\ldots 71.0\) & \(90.1\ldots 119\) \\ \(G_{D}\to\omega\omega\) & - & - & \(16.6\ldots 22.0\) & \(28.7\ldots 38.1\) \\ \(G_{D}\to K^{*}K^{*}\) & - & - & - & \(42.6\ldots 56.4\) \\ \hline Sum & \(18.6\ldots 25.9|79.9\ldots 106\) & \(19.4\ldots 26.9|80.0\ldots 106\) & \(100\ldots 133|151\ldots 200\) & \(196\ldots 260|243\ldots 322\) \\ \end{tabular}
\end{table}
Table 8: Hadronic two-body decays of the dilatonic scalar glueball \(G_{D}\) with WSS model mass and extrapolated to the masses of \(f_{0}(1500)\), \(f_{0}(1710)\), and \(M=1865\) MeV, for \(\lambda=16.63\ldots 12.55\). In decays into two pseudoscalar mesons, the two sets of values correspond to \(x=0\) and \(x=1\) in the coupling to the quark mass term (A22).
Assuming the coupling of the exotic scalar glueball to quark masses to be of the form
\[\mathcal{L}_{G_{E}q\bar{q}}=5\breve{c}_{m}G_{E}\mathcal{L}_{m}^{\mathcal{M}} \tag{A37}\]
with \(\breve{c}_{m}\) being of the same order as \(\breve{c}_{0}\), i.e.
\[\breve{c}_{m}=x\breve{c}_{0},\quad x=\mathcal{O}(1), \tag{A38}\]
we get
\[\mathcal{L}_{G_{E}\eta\eta^{\prime}}=\frac{5}{2}(1-x)\breve{c}_{0}\sin(2\theta_{P})m_{0}^{2}G_{E}\eta\eta^{\prime}. \tag{A39}\]
Altogether we obtain the coupling of the exotic scalar glueball to \(\eta\eta\) as
\[\mathcal{L}_{G_{E}\eta\eta}=\frac{5}{2}\breve{c}_{0}m_{0}^{2}(x-1)\sin^{2}\theta_{P}\,G_{E}\eta\eta-\frac{5}{2}\breve{c}_{0}xm_{\eta}^{2}G_{E}\eta\eta\] \[-\frac{c_{1}}{2}\partial_{\mu}\eta\partial_{\nu}\eta\left(\frac{1}{2}\eta^{\mu\nu}\left(1-\frac{\Box}{M_{E}^{2}}\right)+\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}\right)G_{E}-\frac{\breve{c}_{1}}{2}\partial_{\mu}\eta\partial^{\mu}\eta G_{E}. \tag{A40}\]
For pions and kaons we have
\[\left|\mathcal{M}_{G_{E}\to PP}\right|=\frac{1}{4}\left|20\breve{c}_{0}m_{P}^{2}x+2\breve{c}_{1}(M_{E}^{2}-2m_{P}^{2})+c_{1}M_{E}^{2}\right| \tag{A41}\]
and for \(\eta\)
\[\left|\mathcal{M}_{G_{E}\rightarrow\eta\eta}\right|=\frac{1}{4}\left|-20\breve{c}_{0}m_{0}^{2}(x-1)\sin^{2}\theta_{P}+20\breve{c}_{0}m_{P}^{2}x+2\breve{c}_{1}(M_{E}^{2}-2m_{P}^{2})+c_{1}M_{E}^{2}\right|, \tag{A42}\]
from which the \(\eta^{\prime}\) amplitude is obtained by the replacement \(\sin\theta_{P}\rightarrow\cos\theta_{P}\).
In both cases the decay width is given by
\[\Gamma_{G_{E}\to PP}=\frac{n_{P}}{2}\frac{1}{8\pi}\left|\mathcal{M}_{G_{E}\to PP}\right|^{2}\frac{\left|\mathbf{p}_{P}\right|}{M_{E}^{2}}. \tag{A43}\]
The coupling of the exotic scalar glueball to one axial vector meson and one pseudoscalar meson is given by
\[\mathcal{L}_{G_{E}\Pi a^{m}}= 2c_{6}^{m}M_{\text{KK}}\,\text{tr}\,\left(\partial_{\mu}\Pi a_{\nu}^{m}\right)\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}G_{E}, \tag{A44}\]
with
\[c_{6}^{m} \equiv\sqrt{\frac{\kappa}{\pi}}\int\text{d}z\,\psi_{2m}^{\prime}\overline{H}_{E}\] \[=\left\{57.659,72.057,65.190,\ldots\right\}\frac{1}{M_{\text{KK}}N_{c}\sqrt{\lambda}}. \tag{A45}\]
Restricting ourselves to two-body decays, for which the relevant vertices for vector mesons are given separately in Appendix B, the resulting partial decay widths are collected in Table 9.
### Tensor glueball
The tensor glueball fluctuations read
\[h_{\mu\nu}=q_{\mu\nu}\frac{r^{2}}{L^{2}\,\mathcal{N}_{T}}T(r)G_{T}(x^{\sigma}), \tag{A46}\]
where \(q_{\mu\nu}\) is a symmetric, transverse traceless polarization tensor, which we normalize such that \(q_{\mu\nu}q^{\mu\nu}=1\), differing from [45].
\(T(r)\) satisfies the same eigenvalue equation (A2) as in the case of the dilatonic scalar glueball, but it acquires a different normalization. The Lagrangian reads
\[\mathcal{L}_{4}|_{G_{T}^{2}}= \mathcal{C}\int\mathrm{d}r\frac{r^{3}T(r)^{2}}{4L^{3}\mathcal{N}_{T}^{2}}G_{T}\left(\Box-M^{2}\right)G_{T}\] \[= \frac{1}{2}G_{T}\left(\Box-M^{2}\right)G_{T}, \tag{A47}\]
with
\[\mathcal{C}_{T}= \int\mathrm{d}r\frac{r^{3}T(r)^{2}}{2L^{3}}=0.112735\left[T(r_{\mathrm{KK}})\right]^{2}\frac{r_{\mathrm{KK}}^{4}}{L^{3}} \tag{A48}\]
and
\[\mathcal{N}_{T}=0.00969598\lambda^{\frac{1}{2}}N_{c}M_{\mathrm{KK}}=\frac{1}{2\sqrt{3}}\mathcal{N}_{D}. \tag{A49}\]
This leads to
\[\mathcal{L}_{G_{T}\Pi\Pi}=t_{1}\,\mathrm{tr}\,\left(\partial_{\mu}\Pi\partial_{\nu}\Pi\right)G_{T}^{\mu\nu} \tag{A50}\]
with
\[t_{1}=\frac{1}{\pi}\int\mathrm{d}zK^{-1}T=\frac{59.6729}{\sqrt{\lambda}M_{\mathrm{KK}}N_{c}}=2\sqrt{3}d_{1}. \tag{A51}\]
\begin{table}
\begin{tabular}{l c c c c} & \(\Gamma_{G_{E}}^{WSS}\) [MeV] & \(\Gamma_{G_{E}(1506)}\)[MeV] & \(\Gamma_{G_{E}(1712)}\)[MeV] & \(\Gamma_{G_{E}(1865)}\)[MeV] \\ \hline \(G_{E}\to\pi\pi\) & \(72.2\dots 95.7|84.9\dots 113\) & \(135\dots 179|142\dots 189\) & \(154\dots 205|161\dots 213\) & \(169\dots 224|175\dots 231\) \\ \(G_{E}\to KK\) & - & \(120\dots 158|229\dots 304\) & \(152\dots 202|255\dots 338\) & \(176\dots 233|273\dots 362\) \\ \(G_{E}\to\eta\eta\) & - & \(31.3\dots 45.4|57.7\dots 76.4\) & \(40.0\dots 56.9|65.1\dots 86.3\) & \(45.9\dots 64.6|69.8\dots 92.5\) \\ \(G_{E}\to\eta\eta^{\prime}\) & - & \(0.21\dots 0.22|0\) & \(3.12\dots 3.26|0\) & \(3.01\dots 3.14|0\) \\ \(G_{E}\to a_{1}\pi\) & - & \(0.06\dots 0.08\) & \(0.55\dots 0.73\) & \(1.36\dots 1.80\) \\ \(G_{E}\to\rho\rho\) & - & - & \(0.77\dots 1.02\) & \(2.91\dots 3.86\) \\ \(G_{E}\to\omega\omega\) & - & - & \(0.19\dots 0.26\) & \(0.84\dots 1.12\) \\ \(G_{E}\to K^{*}K^{*}\) & - & - & - & \(0.15\dots 0.20\) \\ \hline Sum & \(72.2\dots 95.7|84.9\dots 113\) & \(286\dots 383|430\dots 570\) & \(351\dots 469|483\dots 640\) & \(398\dots 530|521\dots 691\) \\ \end{tabular}
\end{table}
Table 9: Hadronic two-body decays of the exotic scalar glueball \(G_{E}\) with WSS model mass 855 MeV and extrapolated to the masses of \(f_{0}(1500)\), \(f_{0}(1710)\), and the scalar glueball at 1865 MeV proposed in [24], for \(\lambda=16.63\dots 12.55\). In decays into two pseudoscalar mesons, the two sets of values correspond to \(x=0\) and \(x=1\) in the coupling to the quark mass term (A38).
Here no additional couplings arise from the mass terms of the pseudoscalars, because the tensor glueball fluctuations are traceless.
There is also a coupling of the tensor glueball to one axial vector and one pseudoscalar meson,
\[\mathcal{L}_{G_{T}\Pi a^{m}}=-2t_{6}^{m}M_{\text{KK}}\,\text{tr}\,\left(\partial_{\mu}\Pi a^{m}_{\nu}\right)G^{\mu\nu}_{T} \tag{A52}\]
with
\[t_{6}^{m}=\sqrt{\frac{\kappa}{\pi}}\int\text{d}z\psi^{\prime}_{2m}T=\left\{40.764,27.050,8.140\right\}\frac{1}{M_{\text{KK}}N_{c}\sqrt{\lambda}}. \tag{A53}\]
Restricting ourselves to two-body decays, for which the relevant vertices for vector mesons are given in Sect. V.2, the resulting partial decay widths are collected in Table 10.
## Appendix B Radiative Decays of the Exotic Scalar Glueball
The exotic glueball interactions contain the vertices
\[\mathcal{L}_{G_{E}v^{m}v^{n}}= -\text{ tr}\,\left\{c_{2}^{mn}M_{\text{KK}}^{2}\left[v_{\mu}^{m}v_{\nu}^{n}\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}G_{E}+\frac{1}{2}v_{\mu}^{m}v^{n\mu}\left(1-\frac{\Box}{M_{E}^{2}}\right)G_{E}\right]\right.\] \[+\left.c_{3}^{mn}\left[\eta^{\rho\sigma}F_{\mu\rho}^{m}F_{\nu\sigma}^{n}\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}G_{E}-\frac{1}{4}F_{\mu\nu}^{m}F^{n\mu\nu}\left(1+\frac{\Box}{M_{E}^{2}}\right)G_{E}\right]+3c_{4}^{mn}\frac{M_{\text{KK}}^{2}}{M_{E}^{2}}v_{\mu}^{n}F^{m\mu\nu}\partial_{\nu}G_{E}\right.\] \[\left.+\breve{c}_{2}^{mn}M_{\text{KK}}^{2}v_{\mu}^{m}v^{n\mu}G_{E}+\frac{1}{2}\breve{c}_{3}^{mn}F_{\mu\nu}^{m}F^{n\mu\nu}G_{E}\right\}, \tag{B1}\]
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \(\Gamma_{G_{T}}^{WSS}[\text{MeV}]\) & \(\Gamma_{G_{T}(2000)}[\text{MeV}]\) & \(\Gamma_{G_{T}(2400)}[\text{MeV}]\) \\ \hline \(G_{T}\to\pi\pi\) & \(19.9\dots 26.3\) & \(27.7\dots 36.8\) & \(33.8\dots 44.7\) \\ \(G_{T}\to KK\) & \(6.66\dots 8.83\) & \(19.2\dots 25.4\) & \(29.2\dots 38.6\) \\ \(G_{T}\to\eta\eta\) & \(1.02\dots 1.35\) & \(3.97\dots 5.26\) & \(6.48\dots 8.58\) \\ \(G_{T}\to a_{1}\pi\) & \(0.53\dots 0.71\) & \(5.12\dots 6.78\) & \(8.00\dots 10.6\) \\ \(G_{T}\to\rho\rho\) & - & \(270\dots 358\) & \(382\dots 507\) \\ \(G_{T}\to\omega\omega\) & - & \(88.2\dots 117\) & \(127\dots 169\) \\ \(G_{T}\to K^{*}K^{*}\) & - & \(240\dots 318\) & \(417\dots 552\) \\ \(G_{T}\to f_{1}\eta\) & - & \(0.98\dots 1.71\) & \(3.97\dots 6.89\) \\ \(G_{T}\to\eta^{\prime}\eta^{\prime}\) & - & - & \(0.92\dots 1.22\) \\ \(G_{T}\to\phi\phi\) & - & - & \(76.7\dots 102\) \\ \hline Total & \(28.1\dots 37.2\) & \(655\dots 869\) & \(1084\dots 1437\) \\ \hline \hline \end{tabular}
\end{table}
Table 10: Hadronic two-body decays of the tensor glueball \(G_{T}\) with the WSS model mass of 1487 MeV and extrapolated to masses of 2000 and 2400 MeV, for \(\lambda=16.63\dots 12.55\). In decays involving \(f_{1}\) we additionally vary \(\theta_{f}=20.4^{\circ}\dots 26.4^{\circ}\). Partial decay widths much smaller than 1 MeV are left out.
with coupling constants
\[c_{2}^{mn} =\kappa\int\mathrm{d}zK\psi_{2m-1}^{\prime}\psi_{2n-1}^{\prime}\overline{H}_{E}=\frac{\{7.116,\ldots\}}{M_{\mathrm{KK}}N_{c}\sqrt{\lambda}},\] \[c_{3}^{mn} =\kappa\int\mathrm{d}zK^{-1/3}\psi_{2m-1}\psi_{2n-1}\overline{H}_{E}=\frac{\{69.769,\ldots\}}{M_{\mathrm{KK}}N_{c}\sqrt{\lambda}},\] \[c_{4}^{mn} =\kappa\int\mathrm{d}z\,\frac{20zK}{\left(5K-2\right)^{2}}\psi_{2m-1}\psi_{2n-1}^{\prime}H_{E}=\frac{\{-10.5798,\ldots\}}{M_{\mathrm{KK}}N_{c}\sqrt{\lambda}},\] \[\breve{c}_{2}^{mn} =\frac{\kappa}{4}\int\mathrm{d}zK\psi_{2m-1}^{\prime}\psi_{2n-1}^{\prime}H_{E}=\frac{\{2.966,\ldots\}}{M_{\mathrm{KK}}N_{c}\sqrt{\lambda}},\] \[\breve{c}_{3}^{mn} =\frac{\kappa}{4}\int\mathrm{d}zK^{-1/3}\psi_{2m-1}\psi_{2n-1}H_{E}=\frac{\{18.122,\ldots\}}{M_{\mathrm{KK}}N_{c}\sqrt{\lambda}}, \tag{B2}\]
where \(\overline{H}_{E}=\left[\frac{1}{4}+\frac{3}{5K-2}\right]H_{E}\).
Calculating the amplitude for different polarizations we get
\[\left|\mathcal{M}_{T}^{\left(G_{E}\to v^{1}v^{1}\right)}\right| =\frac{1}{2}\left[c_{3}\left(M_{E}^{2}-4m_{v}^{2}\right)-6c_{4}M_{\mathrm{KK}}^{2}-4\breve{c}_{2}M_{\mathrm{KK}}^{2}-2\breve{c}_{3}\left(M_{E}^{2}-2m_{v}^{2}\right)\right]\] \[\left|\mathcal{M}_{L}^{\left(G_{E}\to v^{1}v^{1}\right)}\right| =\frac{c_{2}M_{\mathrm{KK}}^{2}\left(M_{E}^{2}-4m_{v}^{2}\right)+2\breve{c}_{2}M_{\mathrm{KK}}^{2}\left(M_{E}^{2}-2m_{v}^{2}\right)+6c_{4}M_{\mathrm{KK}}^{2}m_{v}^{2}+4\breve{c}_{3}m_{v}^{4}}{2m_{v}^{2}}. \tag{B3}\]
#### B.1 Exotic scalar glueball \(1\gamma\)-decays
For the decay into one vector meson and one photon, we use
\[\mathcal{L}_{G_{E}\mathcal{V}v^{m}}=-\operatorname{tr} \left\{c_{3}^{m\mathcal{V}}\left[2\eta^{\rho\sigma}F_{\mu\rho}^{m}F_{\nu\sigma}^{\mathcal{V}}\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^{2}}G_{E}-\frac{1}{2}F_{\mu\nu}^{m}F^{\mathcal{V}\mu\nu}\left(1+\frac{\square}{M_{E}^{2}}\right)G_{E}\right]\right.\] \[\qquad\qquad+3c_{4}^{\mathcal{V}m}\frac{M_{\mathrm{KK}}^{2}}{M_{E}^{2}}v_{\mu}^{m}F^{\mathcal{V}\mu\nu}\partial_{\nu}G_{E}\left.+\breve{c}_{3}^{m\mathcal{V}}F_{\mu\nu}^{m}F^{\mathcal{V}\mu\nu}G_{E}\right\}, \tag{B4}\]
with
\[c_{3}^{m\mathcal{V}}= \kappa\int\mathrm{d}zK^{-1/3}\psi_{2m-1}\overline{H}_{E}=\frac{ \{1.551,\ldots\}}{M_{\mathrm{KK}}N_{c}^{\frac{1}{2}}}\,,\] \[c_{4}^{\mathcal{V}m}= \kappa\int\mathrm{d}z\,\frac{20zK}{(5K-2)^{2}}\psi_{2m-1}^{\prime }H_{E}=\frac{\{-0.262,\ldots\}}{M_{\mathrm{KK}}N_{c}^{\frac{1}{2}}}\,,\] \[\breve{c}_{3}^{m\mathcal{V}}= \frac{\kappa}{4}\int\mathrm{d}zK^{-1/3}\psi_{2m-1}H_{E}=\frac{ \{0.425,\ldots\}}{M_{\mathrm{KK}}N_{c}^{\frac{1}{2}}}\]
to obtain
\[\left|\mathcal{M}_{T}^{\left(G_{E}\to v^{m}\mathcal{V}\right)}\right|=\frac{ \left(M_{E}^{2}-m_{v}^{2}\right)}{2M_{E}^{2}}\left|3c_{4}^{\mathcal{V}m}M_{ \mathrm{KK}}^{2}+2\breve{c}_{3}^{m\mathcal{V}}M_{E}^{2}+c_{3}^{m\mathcal{V}} \left(m_{v}^{2}-M_{E}^{2}\right)\right|\,\operatorname{tr}\,\left(eQT_{v^{m}} \right). \tag{104}\]
#### b.2.2 Exotic scalar glueball \(2\gamma\)-decays
The two-photon decay rate is obtained from
\[\mathcal{L}_{G_{E}\mathcal{V}\mathcal{V}}= -\text{ tr}\,\left\{c_{3}^{\mathcal{V}\mathcal{V}}\left[F_{\mu\rho}^{ \mathcal{V}}F_{\nu}^{\mathcal{V}\rho}\frac{\partial^{\mu}\partial^{\nu}}{M_{E}^ {2}}G_{E}-\frac{1}{4}F_{\mu\nu}^{\mathcal{V}}F^{\mathcal{V}\mu\nu}\left(1+\frac {\square}{M_{E}^{2}}\right)G_{E}\right]\right.\] \[\left.+\frac{1}{2}\hat{c}_{3}^{\mathcal{V}\mathcal{V}}F_{\mu\nu}^ {\mathcal{V}}F^{\mathcal{V}\mu\nu}G_{E}\right\} \tag{100}\]
with
\[c_{3}^{\mathcal{V}\mathcal{V}} =\kappa\int\mathrm{d}zK^{-1/3}\overline{H}_{E}=\frac{237.587\kappa }{M_{\text{KK}}N_{c}\lambda^{1/2}}=0.0355\frac{\lambda^{\frac{1}{2}}}{M_{\text{ KK}}}, \tag{101}\] \[\hat{c}_{3}^{\mathcal{V}\mathcal{V}} =\frac{\kappa}{4}\int\mathrm{d}zK^{-1/3}H_{E}=\frac{71.18\kappa }{M_{\text{KK}}N_{c}\lambda^{1/2}}=0.0106\frac{\lambda^{\frac{1}{2}}}{M_{\text {KK}}}, \tag{102}\]
yielding
\[\left|\mathcal{M}_{T}^{(G_{E}\rightarrow\mathcal{V}\mathcal{V})}\right|=\frac {M_{E}^{2}}{2}\left(c_{3}^{\mathcal{V}\mathcal{V}}-2\hat{c}_{3}^{\mathcal{V} \mathcal{V}}\right)\,\text{tr}\,\left(e^{2}Q^{2}\right). \tag{103}\]
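The numerical prefactors quoted in (101) and (102) can be cross-checked with a few lines of code, assuming the standard WSS normalization \(\kappa=\lambda N_{c}/(216\pi^{3})\) (an assumption on our part; the text only quotes the converted numbers). With this \(\kappa\), the \(N_{c}\) and \(\lambda\) dependence cancels and the prefactor of \(\sqrt{\lambda}/M_{\mathrm{KK}}\) is simply the quoted coefficient divided by \(216\pi^{3}\):

```python
# Hedged cross-check of the conversions in (101)-(102), assuming the WSS
# normalization kappa = lambda * N_c / (216 * pi^3); with this kappa the
# N_c and lambda factors cancel in coeff * kappa / (M_KK * N_c * sqrt(lambda)).
import math

for coeff, quoted in [(237.587, 0.0355), (71.18, 0.0106)]:
    prefactor = coeff / (216 * math.pi**3)  # coefficient of sqrt(lambda)/M_KK
    print(f"{coeff}: {prefactor:.4f} (quoted: {quoted})")
```

Both values come out as \(0.0355\) and \(0.0106\), reproducing the quoted prefactors under the assumed normalization.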
In Table 11 the results for the partial widths for the radiative and two-vector decays of the exotic scalar glueball are given when the above amplitudes are substituted in the respective formulae for the dilaton scalar glueball, (100), (101), and (119). Again, these are evaluated for the WSS model mass, which is only 855 MeV for the exotic scalar glueball, as well as for three higher masses, corresponding to the glueball candidates \(f_{0}(1500)\), \(f_{0}(1710)\), and the one proposed in [24]. While the total decay width of \(G_{E}\) is much larger than that of \(G_{D}\) at equal mass, see Table 4, the radiative and two-vector widths of \(G_{E}\) are much smaller than those of \(G_{D}\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \(\Gamma_{G_{E}^{\text{WSS}}}[\text{keV}]\) & \(\Gamma_{G_{E}(1506)}[\text{keV}]\) & \(\Gamma_{G_{E}(1712)}[\text{keV}]\) & \(\Gamma_{G_{E}(1865)}[\text{keV}]\) \\ \hline \(G_{E}\rightarrow\rho\rho\) & - & - & \(771\ldots 1022\) & \(2910\ldots 3857\) \\ \(G_{E}\rightarrow\omega\omega\) & - & - & \(194\ldots 257\) & \(843\ldots 1117\) \\ \(G_{E}\to K^{*}K^{*}\) & - & - & - & \(149\ldots 197\) \\ \hline \(G_{E}\rightarrow\rho\gamma\) & 0.047 & 13.4 & 20.7 & 26.4 \\ \(G_{E}\rightarrow\omega\gamma\) & 0.003 & 1.4 & 2.23 & 2.86 \\ \(G_{E}\rightarrow\phi\gamma\) & - & 0.30 & 0.98 & 1.72 \\ \hline \(G_{E}\rightarrow\gamma\gamma\) & 0.043\(\ldots\)0.033 & 0.076\(\ldots\)0.058 & 0.087\(\ldots\)0.066 & 0.095\(\ldots\)0.071 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Radiative and two-vector decays of the exotic scalar glueball \(G_{E}\) with WSS model mass 855 MeV and extrapolated to the masses of \(f_{0}(1500)\), \(f_{0}(1710)\) and the scalar glueball at 1865 MeV proposed in [24]. |
2301.04355 | SIR-Model for Households | Households play an important role in disease dynamics. Many infections
happen there due to the close contact, while mitigation measures mainly
target the transmission between households. Therefore, one can see households
as boosting the transmission depending on household size. To study the effect
of household size and size distribution, we differentiated the within and
between household reproduction rate. There are basically no preventive
measures, and thus the close contacts can boost the spread. We explicitly
incorporated that typically only a fraction of all household members are
infected. Thus, viewing the infection of a household of a given size as a
splitting process generating a new, small fully infected sub-household and a
remaining still susceptible sub-household we derive a compartmental ODE-model
for the dynamics of the sub-households. In this setting, the basic reproduction
number as well as prevalence and the peak of an infection wave in a population
with a given household size distribution can be computed analytically. We
compare numerical simulation results of this novel household-ODE model with
results from an agent-based model using data for realistic household size
distributions of different countries. We find good agreement of both models
showing the catalytic effect of large households on the overall disease
dynamics. | Philipp Doenges, Thomas Götz, Tyll Krueger, Karol Niedzielewski, Viola Priesemann, Moritz Schaefer | 2023-01-11T08:46:21Z | http://arxiv.org/abs/2301.04355v1 | # SIR-model for households
###### Abstract
Households play an important role in disease dynamics. Many infections happen there due to the close contact, while mitigation measures mainly target the transmission between households. Therefore, one can see households as boosting the transmission depending on household size. To study the effect of household size and size distribution, we differentiated the within and between household reproduction rate. There are basically no preventive measures within households, and thus the close contacts can boost the spread. We explicitly incorporated that typically only a fraction of all household members are infected. Thus, viewing the infection of a household of a given size as a splitting process generating a new, small fully infected sub-household and a remaining still susceptible sub-household, we derive a compartmental ODE-model for the dynamics of the sub-households. In this setting, the basic reproduction number as well as prevalence and the peak of an infection wave in a population with a given household size distribution can be computed analytically. We compare numerical simulation results of this novel household-ODE model with results from an agent-based model using data for realistic household size distributions of different countries. We find good agreement of both models, showing the catalytic effect of large households on the overall disease dynamics.
COVID-19, Epidemiology, Disease dynamics, SIR-model, Social Structure
**MSC codes. 92D30, 93-10**
## 1 Introduction
The spread of an infectious disease strongly depends on the interaction of the considered individuals. Traditional SIR-type models assume a homogeneous mixing of the population and typically neglect the increased transmission within closed subcommunities like households or school-classes. However, the literature indicates that household transmission plays an important role [3, 4, 9].
In the case of the COVID-19 pandemic, several studies have quantified the secondary attack rate within households, i.e. the probability that household members get infected, given that one household member is infected [3, 7, 9, 10, 11, 12]. The secondary attack rate depends on the virus variant, immunity and vaccination status, cultural differences and mitigation measures within a household (like e.g. early quarantine). In December 2021, when both the Delta and Omicron variants were spreading, it ranged from about 19 % (Delta variant in Norway) [9] to 39 % (Omicron in Spain) [3]. Hence, interestingly, the secondary attack rate within a household is far from 100 % despite the close contacts.
Several models have been reported that try to include the contribution of in-household transmission to the overall disease dynamics. The Reed-Frost model describing in-household infections as a Bernoulli-process has been used by [4, 5]. An average model assuming households always get completely infected and ignoring the temporal dynamics has been proposed in [2]. Their findings for the effective reproduction number agree with our results, see (3.2). Extensions of differential equation based SIR-models in case of a uniform household distribution have been proposed by [6, 8]. However, in real populations, the household distribution is far from uniform, see [14]. Ball and co-authors provided in [1] an overview of challenges posed by integrating household effects into epidemiological models, in particular in the context of compartmental differential equation models.
In this paper we develop an extended ODE SIR-model for disease transmission within and between households of different sizes. We will treat the infection of a given household as a splitting process generating two new sub-households or household fragments of smaller size, representing the susceptible (\(S\)) or fully infected (\(I\)) members. Each of the susceptible household fragments can get infected and split later in time, while infected household fragments recover with a certain rate and are then immune (recovered or removed, \(R\)). The dynamics of the susceptible, infected and recovered household fragments are modeled in Section 2. In Section 3, we compute the basic reproduction number for our household model combining the attack rate _inside_ a single household and the transmission rate _between_ individual households. The prevalence, as the limit of the recovered part of the population, can be computed analytically, at least in the case of small maximal household size, see Section 4. The peak of a single epidemic wave in a population with given household distribution is considered in Section 5. Again, for small maximal household size, we will be able to compute the maximal number of infected analytically. Numerical simulations based on realistic household size distributions for different countries and a comparison with an agent-based model demonstrate the applicability of the presented model. As an outlook with focus on non-pharmaceutical interventions, we consider an extension of our model that quarantines infected sub-households based on a certain detection rate for a single infected individual.
## 2 Household Model
Instead of real households we consider effective sub-households or household fragments (still called households throughout this paper), which are either fully susceptible, fully infected or recovered. Let \(S_{j},I_{j}\) and \(R_{j}\) denote the numbers of the respective households consisting of \(j\) persons, where \(1\leq j\leq K\), with \(K\) denoting the maximal household size. Then \(H_{j}=S_{j}+I_{j}+R_{j}\) equals the total number of current sub-households of size \(j\). Furthermore, we introduce \(H=\sum_{j=1}^{K}H_{j}\) as the total number of households and \(h_{j}=H_{j}/H\). The total population is given by \(N=\sum_{j=1}^{K}jH_{j}\). For further reference we also introduce the first two moments of the household size distribution
\[\mu_{1}:=\sum_{j=1}^{K}jh_{j}\quad\text{and}\quad\mu_{2}:=\sum_{j=1}^{K}j^{2}h_ {j}\;.\]
If an infection is brought into a susceptible sub-household of size \(j\), secondary infections will occur inside the household. We assume that each of the remaining \(j-1\) household members can get infected with probability \(a\), called the _in-household attack rate_. On average, we expect
\[E_{j}=a(j-1)+1\]
infections (including the primary one) inside a household of size \(j\). Existing field studies [7, 12] indicate an in-household attack rate in the range of 16%-30%, depending on the overall epidemiological situation, household size and vaccinations. For the sake of tractability and simplicity, our model assumes a constant attack rate \(a\) independent of the household size.
Let \(b_{j,k}\) denote the probability, that a primary infection in a household of size \(j\) generates in total \(k\) infections inside this household, where \(1\leq k\leq j\). The secondary infections give rise to a splitting of the initial household of size \(j\) into a new, fully infected sub-household of size \(k\) and another still susceptible sub-household of size \(j-k\).
An infected household of size \(k\) recovers with a rate \(\gamma_{k}\) and contributes to the overall force of infection by an out-household infection rate \(\beta_{k}\). We assume that
the out-household reproduction number is independent of the household size, i.e.
\[\mathcal{R}^{*}=\frac{\beta_{k}}{\gamma_{k}}=\text{constant independent of }k\;. \tag{1}\]
Now, the dynamical system governing the dynamics of the susceptible, infected and recovered households of size \(k\) reads as
\[S^{\prime}_{k} =Y\left[-kS_{k}+\sum_{j=k+1}^{K}jS_{j}\cdot b_{j,j-k}\right]\;, \tag{2a}\] \[I^{\prime}_{k} =-\gamma_{k}I_{k}+Y\sum_{j=k}^{K}jS_{j}\cdot b_{j,k}\;,\] (2b) \[R^{\prime}_{k} =\gamma_{k}I_{k}\;, \tag{2c}\]
where
\[Y:=\frac{1}{N}\sum_{k=1}^{K}\beta_{k}\cdot kI_{k}\]
denotes the total force of infection.
For the recovery rate of an infected household of size \(k\) we can consider the two extremal cases and an intermediate case.
1. _Simultaneous_ infections: All members of the infected household get infected at the same time and recover at the same time, hence the recovery rate \(\gamma_{k}=\gamma_{1}\) is independent of the household size. Assumption (1) leads to a constant out-household infection rate \(\beta_{k}=\beta\).
2. _Sequential_ infections: All members of the household get infected one after another and the total recovery time for the entire household equals to \(k\) times the individual recovery time. Hence the recovery rate \(\gamma_{k}=\gamma_{1}/k\) and by (1) we get \(\beta_{k}=\beta_{1}/k\).
3. _Parallel_ infections: The recovery times \(T_{i}\), \(i=1,\dots,k\) for each of the \(k\) infected individuals are modeled as independent exponentially distributed random variables. Hence the entire household is fully recovered at time \(\max(T_{1},\dots,T_{k})\). The recovery rate \(\gamma_{k}\) equals to the inverse of the expected recovery time, i.e. \[\gamma_{k}=\frac{1}{E[\max(T_{1},\dots,T_{k})]}=\frac{1}{E[T_{1}]\cdot\theta_ {k}}=\frac{\gamma_{1}}{\theta_{k}}\;,\] where \(\theta_{k}=1+\frac{1}{2}+\dots+\frac{1}{k}\sim\log k+g\) denotes the \(k\)-th harmonic number and \(g\approx 0.5772\dots\) denotes the _Euler-Mascheroni_ constant.
All these cases can be subsumed to
\[\gamma_{k}=\frac{\gamma_{1}}{\eta_{k}},\quad\beta_{k}=\frac{\beta_{1}}{\eta_ {k}}\;,\]
where \(\eta_{k}\) models the details of the temporal dynamics inside an infected household.
In Figure 1 we compare these three cases in the scenario of a population with maximal household size \(K=6\). Each household size represents \(1/6\) of the entire population. Initially, \(1\%\) of the population is infected. The out-household reproduction number is assumed to be \(\mathcal{R}^{*}=1.33\), the in-household attack rate equals \(a=0.2\), and the recovery rate \(\gamma_{1}\) for an infected individual equals \(0.1\). Shown are the incidences, i.e. the daily new infections over time. The three cases differ in the timing and the height of the peak of the infection; for the simultaneous infections (\(\gamma_{k}\) constant) the disease spreads fastest and for the sequential infections (\(\gamma_{k}=\gamma_{1}/k\)) the spread is significantly delayed. The case of parallel infections, i.e. \(\gamma_{k}\simeq\gamma_{1}/(\log k+g)\), lies between these two extremes.
Modeling the infections inside the household by a Bernoulli-process with in-household attack rate (infection probability) \(a\in[0,1]\), the total number of infected persons inside the household follows a binomial distribution
\[b_{j,k}:=\binom{j-1}{k-1}a^{k-1}(1-a)^{j-k}\;. \tag{3}\]
For the binomial in-household infection (3) it holds that \(E_{j}=a(j-1)+1\).
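For concreteness, the following minimal sketch (our own illustration, not code from the paper) integrates system (2) with the binomial splitting (3) for the setting of Figure 1: \(K=6\), each household size carrying \(1/6\) of the population, \(\mathcal{R}^{*}=1.33\), \(a=0.2\), \(\gamma_{1}=0.1\) and parallel recovery. Seeding the infection by uniformly transferring \(1\%\) of each susceptible compartment is one plausible reading of the initial condition; the final assertion numerically confirms the population conservation established below.

```python
# Minimal sketch of the sub-household system (2) with binomial splitting (3),
# using the Figure-1 setup: K = 6, each household size carries 1/6 of the
# population, R* = 1.33, a = 0.2, gamma_1 = 0.1, parallel recovery
# gamma_k = gamma_1 / (1 + 1/2 + ... + 1/k).  The uniform 1% seeding below is
# an assumption; the text does not specify how the initial infected are placed.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import binom

K, Rstar, a, gamma1 = 6, 1.33, 0.2, 0.1
harmonic = np.array([sum(1.0 / i for i in range(1, k + 1)) for k in range(1, K + 1)])
gamma = gamma1 / harmonic            # recovery rates gamma_k (parallel infections)
beta = Rstar * gamma                 # out-household rates via assumption (1)

b = np.zeros((K + 1, K + 1))         # b[j, k]: size-j household splits off k infected
for j in range(1, K + 1):
    for k in range(1, j + 1):
        b[j, k] = binom.pmf(k - 1, j - 1, a)

def rhs(t, y):
    S, I = y[:K], y[K:2 * K]
    Y = (beta * np.arange(1, K + 1) * I).sum()       # force of infection (N = 1)
    dS, dI = np.zeros(K), np.zeros(K)
    for k in range(1, K + 1):
        split_gain = sum(j * S[j - 1] * b[j, j - k] for j in range(k + 1, K + 1))
        dS[k - 1] = Y * (-k * S[k - 1] + split_gain)
        infect_gain = sum(j * S[j - 1] * b[j, k] for j in range(k, K + 1))
        dI[k - 1] = -gamma[k - 1] * I[k - 1] + Y * infect_gain
    return np.concatenate([dS, dI, gamma * I])       # last block: dR_k = gamma_k I_k

H0 = np.array([1.0 / (6 * k) for k in range(1, K + 1)])  # k * H_k = 1/6 for all k
y0 = np.concatenate([0.99 * H0, 0.01 * H0, np.zeros(K)])
sol = solve_ivp(rhs, (0, 600), y0, rtol=1e-8, atol=1e-10)

kvec = np.arange(1, K + 1)
N_t = kvec @ (sol.y[:K] + sol.y[K:2 * K] + sol.y[2 * K:])
assert np.allclose(N_t, 1.0)         # total population is conserved, as shown next
```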
In model (2) the total population \(N=\sum_{k=1}^{K}k\cdot(S_{k}+I_{k}+R_{k})\) is conserved.
We have
\[N^{\prime}=Y\left[-\sum_{k=1}^{K}k^{2}S_{k}+\sum_{k=1}^{K}k\sum_{j=k+1}^{K}jS_ {j}b_{j,j-k}+\sum_{k=1}^{K}k\sum_{j=k}^{K}jS_{j}b_{j,k}\right]\]
We consider the last two summands separately and reverse the order of summation. Hence
\[\sum_{k=1}^{K}k\sum_{j=k}^{K}jS_{j}b_{j,k}=\sum_{j=1}^{K}jS_{j}\sum_{k=1}^{j} kb_{j,k}=\sum_{j=1}^{K}jE_{j}S_{j}\]
Figure 1: Simulation of one epidemic wave in case of the three different scalings of the recovery rates for households of size \(k\). Shown is the incidence, i.e. daily new infections over time. The green curve corresponds to equal recovery rates for all households, i.e. \(\gamma_{k}=\gamma_{1}\). The red curve corresponds to \(\gamma_{k}=\gamma_{1}/k\) (sequential infections). The blue curve corresponds to recovery rates obtained from the maximum of exponential distributions, i.e. \(\gamma_{k}\simeq\gamma_{1}/(\log k)\).
and analogously
\[\sum_{k=1}^{K}k\sum_{j=k+1}^{K}jS_{j}b_{j,j-k} =\sum_{j=2}^{K}jS_{j}\sum_{k=1}^{j-1}kb_{j,j-k}\] \[=\sum_{j=2}^{K}jS_{j}\sum_{l=1}^{j-1}(j-l)b_{j,l}=\sum_{j=2}^{K}jS_ {j}\sum_{l=1}^{j}(j-l)b_{j,l}\] \[=\sum_{j=2}^{K}jS_{j}\left(j\sum_{l=1}^{j}b_{j,l}-\sum_{l=1}^{j} lb_{j,l}\right)=\sum_{j=2}^{K}\left(j^{2}S_{j}-jE_{j}S_{j}\right)\] \[=\sum_{j=1}^{K}\left(j^{2}S_{j}-jE_{j}S_{j}\right)-\left(S_{1}-E_{1}S_{1} \right)\;.\]
Due to \(E_{1}=1\), the last term vanishes. Summing all the contributions, we finally get
\[N^{\prime}=Y\left[\sum_{j=1}^{K}-(j^{2}S_{j})+jE_{j}S_{j}+(j^{2}S_{j}-jE_{j}S_ {j})\right]=0\;.\]
## 3 Basic Reproduction Number
To compute the basic reproduction number for the household-model (2), we follow the next generation matrix approach by Watmough and van den Driessche, see [15]. We split the state variable \(x=(S_{1},\ldots,R_{K})\in\mathbb{R}^{3K}\) into the infected compartment \(\xi=(I_{1},\ldots I_{K})\in\mathbb{R}^{K}\) and the remaining components \(\chi\in\mathbb{R}^{2K}\). The infected compartment satisfies the differential equation
\[\xi_{k}^{\prime}=\mathcal{F}_{k}(\xi,\chi)-\mathcal{V}_{k}(\xi,\chi)\]
where \(\mathcal{F}_{k}=Y\sum_{j=k}^{K}jS_{j}\,b_{j,k}\), \(\mathcal{V}_{k}=\gamma_{k}\xi_{k}\) and \(Y=\frac{1}{N}\sum_{j=1}^{K}\beta_{j}j\xi_{j}\). Now, the Jacobians at the disease free equilibrium \((0,\chi^{*})\) are given by
\[F_{kj} =\frac{\partial\,\mathcal{F}_{k}}{\partial\,\xi_{j}}(0,\chi^{*}) =\frac{1}{N}j\beta_{j}\sum_{m=k}^{K}mH_{m}\,b_{m,k}\] \[V_{kk} =\frac{\partial\,\mathcal{V}_{k}}{\partial\,\xi_{k}}(0,\chi^{*}) =\gamma_{k}\quad\text{and }V_{kj}=0\text{ for }k\neq j\]
The next generation matrix \(G\in\mathbb{R}^{K\times K}\) is given by
\[G=FV^{-1}=\frac{1}{N}\left(\frac{j\beta_{j}}{\gamma_{j}}\sum_{m=k}^{K}mH_{m} \,b_{m,k}\right)_{k,j}\]
Using the relation (1) we obtain
\[G=\mathcal{R}^{*}\left(j\sum_{m=k}^{K}\frac{mH_{m}}{N}\,b_{m,k}\right)_{k,j}= \mathcal{R}^{*}\begin{pmatrix}\sum_{m=1}^{K}\frac{mH_{m}}{N}\,b_{m,1}\\ \sum_{m=2}^{K}\frac{mH_{m}}{N}\,b_{m,2}\\ \vdots\\ \frac{KH_{K}}{N}\,b_{KK}\end{pmatrix}\cdot\left(1,2,\ldots,K\right)\]
where the last term represents the next generation matrix as a dyadic product \(G=\mathcal{R}^{*}\cdot dc^{T}\).
The basic reproduction number \(\mathcal{R}=\rho(G)\) is defined as the spectral radius \(\rho(G)\) of the next generation matrix \(G\in\mathbb{R}^{K\times K}\). As a dyadic product, \(G\) has rank one and hence there is only one non-zero eigenvalue.
**Lemma 3.1**: _Let \(c,d\in\mathbb{R}^{n}\) be two vectors with \(c^{T}d\neq 0\). The non-zero eigenvalue of the dyadic product \(A=dc^{T}\in\mathbb{R}^{n\times n}\) is given by \(c^{T}d=\operatorname{tr}(A)\)._
Let \(v\) be an eigenvector of \(A\) to the eigenvalue \(\lambda\). Then \(\lambda v=Av=(dc^{T})v=(c^{T}v)d\), where \(c^{T}v\in\mathbb{R}\). If \(\lambda\neq 0\), then \(v=\frac{c^{T}v}{\lambda}d\). Set \(\mu=\frac{c^{T}v}{\lambda}\in\mathbb{R}\). Now, \(Av=A(\mu d)=\mu(dc^{T})d=(c^{T}d)\mu d=(c^{T}d)v\); hence \(\lambda=c^{T}d\).
As an immediate consequence we obtain
**Theorem 3.2**: _The basic reproduction number of system (2) is given by_
\[\mathcal{R}=\mathcal{R}^{*}\sum_{i=1}^{K}i\sum_{m=i}^{K}\frac{mH_{m}}{N}b_{m,i }=\mathcal{R}^{*}\sum_{m=1}^{K}\frac{mH_{m}}{N}E_{m}\;. \tag{1}\]
**Corollary 3.3**: _In the particular situation, when the expected number of infections inside a household of size \(m\) is given by \(E_{m}=a(m-1)+1\), the basic reproduction number equals_
\[\mathcal{R}=\mathcal{R}^{*}\left[1+a\left(\frac{\mu_{2}}{\mu_{1}}-1\right) \right]\;. \tag{2}\]
Assuming that the infection of any member of a household results in the infection of the entire household, i.e. in-household attack rate \(a=1\), our result \(\mathcal{R}=\frac{\mu_{2}}{\mu_{1}}\mathcal{R}^{*}\) agrees with the result obtained by Becker and Dietz in [2, Sect. 3.2].
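As a small illustration, the closed form (2) is immediate to evaluate for any household size distribution; the distribution used below is made up for illustration and not taken from data.

```python
# Corollary 3.3 in code: R = R* * (1 + a * (mu2/mu1 - 1)); h is an illustrative
# household size distribution (fractions of households of size 1..4), not real data.
def basic_reproduction_number(h, Rstar, a):
    mu1 = sum(k * hk for k, hk in enumerate(h, start=1))      # first moment
    mu2 = sum(k * k * hk for k, hk in enumerate(h, start=1))  # second moment
    return Rstar * (1 + a * (mu2 / mu1 - 1))

print(basic_reproduction_number([0.4, 0.3, 0.2, 0.1], Rstar=1.0, a=0.25))  # 1.375
```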
## 4 Computing the prevalence
Let \(z=\frac{1}{N}\sum_{k=1}^{K}kR_{k}\) denote the fraction of recovered individuals. Then \(z\) satisfies the ODE
\[z^{\prime}=\frac{Y}{\mathcal{R}^{*}}\;.\]
In the sequel we will derive an implicit equation for the prevalence \(\lim_{t\to\infty}z(t)\) in two special cases:
1. for maximal household size \(K=3\). The procedure used here allows for immediate generalization, but the resulting expressions get lengthy and provide only minor insight into the result.
2. for in-household attack rate \(a=1\) and arbitrary household sizes. In this setting the equations (2a) for the susceptible households decouple and allow a complete computation of the equation for the prevalence.
We will start with the second case. Let us consider \(a=1\), i.e. within a household all members get infected for sure and \(E_{k}=k\). Then the ODE for the susceptible households reads as \(S^{\prime}_{k}=-YkS_{k}\) and, inserting \(Y=\mathcal{R}^{*}z^{\prime}\), we obtain
\[S^{\prime}_{k}=-k\mathcal{R}^{*}S_{k}\,z^{\prime}\;.\]
After integration with respect to \(t\) from \(0\) to \(\infty\), we get
\[\ln\frac{S_{k}(\infty)}{S_{k}(0)}=k\mathcal{R}^{*}\left(z(0)-z(\infty)\right)\;.\]
Assuming initially no recovered individuals, i.e. \(z(0)=0\), and considering the total population at the end of time, i.e. \(N=Nz(\infty)+\sum_{k}kS_{k}(\infty)\), we arrive at
\[N=Nz(\infty)+\sum_{k=1}^{K}kS_{k}(0)e^{-k\mathcal{R}^{*}z(\infty)}\;.\]
Scaling with \(N\), i.e. introducing \(s_{k,0}=S_{k}(0)/N\), we get
\[1=z+\sum_{k=1}^{K}k\,s_{k,0}\,e^{-k{\mathcal{R}}^{*}z}\;. \tag{4.1}\]
In the limit \({\mathcal{R}}^{*}z\ll 1\) we obtain the approximation
\[z\sim\frac{1-\sum_{k}ks_{k,0}}{1-{\mathcal{R}}^{*}\sum_{k}k^{2}s_{k,0}}\;.\]
Figure 1 shows the numerical solution of the prevalence equation (4.1) in case of \(K=2\) for reproduction numbers \(\mathcal{R}^{*}\in[0,3]\) and initially \(2\%\) of the entire population being infected. The three different graphs correspond to different initial values for the susceptible households: both single and double households contain \(49\%\) of the population (blue), \(89\%\) of susceptibles live in single households and only \(9\%\) in double households (red), or just \(9\%\) in single households and \(89\%\) in double households (green).
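Numerically, the implicit equation (4.1) can be solved by simple bracketing; the following sketch reproduces, e.g., the setting of the blue curve (49% of the population in single and in double households each, 2% initially infected).

```python
# Solve the implicit prevalence equation (4.1) for a = 1 by bracketing the root
# in (0, 1); s0[k-1] = S_k(0)/N are the scaled initial susceptible households.
from math import exp
from scipy.optimize import brentq

def prevalence(s0, Rstar):
    f = lambda z: z + sum(k * sk * exp(-k * Rstar * z)
                          for k, sk in enumerate(s0, start=1)) - 1.0
    return brentq(f, 0.0, 1.0)

s0 = [0.49, 0.245]   # 49% of people in single and 2*24.5% = 49% in double households
for Rstar in (0.5, 1.0, 2.0, 3.0):
    print(Rstar, round(prevalence(s0, Rstar), 4))
```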
In case of arbitrary in-household attack rates \(a\in[0,1]\), we consider the situation for \(K=3\). Setting \(z=Z/N\), the relevant equations read as
\[S_{3}^{\prime} =-3{\mathcal{R}}^{*}z^{\prime}\,S_{3}\] \[S_{2}^{\prime} ={\mathcal{R}}^{*}z^{\prime}\left(-2S_{2}+3b_{3,1}S_{3}\right)\] \[S_{1}^{\prime} ={\mathcal{R}}^{*}z^{\prime}\left(-S_{1}+2b_{2,1}S_{2}+3b_{3,2}S _{3}\right)\;.\]
Solving the equations successively starting with \(S_{3}(t)=S_{3}(0)e^{-3{\mathcal{R}}^{*}z(t)}\) and using variation of constants, we arrive at
\[S_{2}(t) =S_{2}(0)e^{-2{\mathcal{R}}^{*}z(t)}+3b_{3,1}S_{3}(0)\left(1-e^{- {\mathcal{R}}^{*}z(t)}\right)e^{-2{\mathcal{R}}^{*}z(t)}\] \[S_{1}(t) =S_{1}(0)e^{-{\mathcal{R}}^{*}z(t)}+2b_{2,1}\left(S_{2}(0)+3b_{3, 1}S_{3}(0)\right)\left(1-e^{-{\mathcal{R}}^{*}z(t)}\right)e^{-{\mathcal{R}}^{ *}z(t)}\] \[\qquad+\tfrac{3}{2}\left(b_{3,2}-2b_{2,1}b_{3,1}\right)S_{3}(0) \left(1-e^{-2{\mathcal{R}}^{*}z(t)}\right)e^{-{\mathcal{R}}^{*}z(t)}\;.\]
Figure 1: Prevalence in case of maximal household size \(K=2\) vs. reproduction number. If double households dominate (green), the prevalence is larger than in the case of mostly single households (red), since households speed up the infection dynamics.
For the prevalence \(z=\lim_{t\to\infty}z(t)\) we obtain the implicit equation
\[1=z+\left[s_{1,0}+2b_{2,1}s_{2,0}+3(b_{2,1}b_{3,1}+\tfrac{1}{2}b_{3, 2})s_{3,0}\right]e^{-\mathcal{R}^{*}z}\\ +2(1-b_{2,1})\left[s_{2,0}+3b_{3,1}s_{3,0}\right]e^{-2\mathcal{R} ^{*}z}\\ +3\left[1+(b_{2,1}-2)b_{3,1}-\tfrac{1}{2}b_{3,2}\right]s_{3,0}e^{ -3\mathcal{R}^{*}z}\]
or in short
\[1=z+c_{1}e^{-\mathcal{R}^{*}z}+c_{2}e^{-2\mathcal{R}^{*}z}+c_{3}e^{-3\mathcal{ R}^{*}z}\;,\]
where the coefficients \(c_{1},c_{2}\) and \(c_{3}\) are the above, rather lengthy expressions involving the initial conditions and the in-household infection probabilities \(b_{j,k}\). In case of \(K=2\), i.e. setting \(s_{3,0}=0\), we get
\[1=z+(s_{1,0}+2b_{2,1}s_{2,0})\,e^{-\mathcal{R}^{*}z}+2\left(1-b_{2,1}\right)s_ {2,0}e^{-2\mathcal{R}^{*}z}\;. \tag{10}\]
In case of arbitrary household size \(K>3\), the resulting equation for the prevalence will have the same structure.
For the binomial infection distribution with \(a=1\) this reduces to
\[1=z+s_{1,0}e^{-\mathcal{R}^{*}z}+2s_{2,0}e^{-2\mathcal{R}^{*}z}+3s_{3,0}e^{-3 \mathcal{R}^{*}z}\]
and in case of \(a=0\) we arrive at
\[1=z+(s_{1,0}+2s_{2,0}+3s_{3,0})\,e^{-\mathcal{R}^{*}z}\;.\]
In case of small initial infections, i.e. \(s_{1,0}+2s_{2,0}+3s_{3,0}=c_{1}+c_{2}+c_{3}\approx 1\), Eqn. (11) allows for the trivial, disease-free solution \(z=0\). However, above the threshold \(\mathcal{R}_{c}=\frac{1}{c_{1}+2c_{2}+3c_{3}}\) the non-trivial endemic solution shows up. Expanding the exponentials for \(\mathcal{R}^{*}z\ll 1\), we arrive at
\[1\approx z+(c_{1}+c_{2}+c_{3})-\mathcal{R}^{*}z\left(c_{1}+2c_{2}+3c_{3} \right)+\frac{\mathcal{R}^{*}{}^{2}z^{2}}{2}\left(c_{1}+4c_{2}+9c_{3}\right)\]
and hence
\[z\left(\mathcal{R}^{*}(c_{1}+2c_{2}+3c_{3})-1\right)\approx\frac{\mathcal{R} ^{*}{}^{2}z^{2}}{2}\left(c_{1}+4c_{2}+9c_{3}\right)\;.\]
Besides the trivial solution \(z=0\), this approximation has the second solution
\[z\simeq\frac{2\left(\mathcal{R}^{*}(c_{1}+2c_{2}+3c_{3})-1\right)}{\mathcal{ R}^{*}{}^{2}\left(c_{1}+4c_{2}+9c_{3}\right)}\;. \tag{11}\]
If \(\mathcal{R}^{*}>\mathcal{R}_{c}=(c_{1}+2c_{2}+3c_{3})^{-1}\), this second root is positive.
The following Figure 2 (left) shows the numerical solution of the prevalence equation (10) in case of \(K=2\) for reproduction numbers \(\mathcal{R}^{*}\in[0,3]\) and initially \(2\%\) of the entire population being infected, while both single and double households contain \(49\%\) of the susceptible population. The different curves show the results depending on the in-household attack rate \(a\). The graph on the right depicts the case \(K=3\) and \(s_{1,0}=s_{2,0}=s_{3,0}=\frac{1}{6}\) for three different in-household attack rates \(a\). The approximation (11) is shown by the dashed curves. For \(\mathcal{R}^{*}<\mathcal{R}_{c}\), the trivial disease-free solution \(z=0\) is the only solution.
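The coefficients \(c_{1},c_{2},c_{3}\) and the comparison between the exact root of the prevalence equation and the asymptotic approximation derived above are straightforward to reproduce; the following sketch covers the binomial case with \(s_{1,0}=s_{2,0}=s_{3,0}=1/6\).

```python
# K = 3 prevalence: exact root of 1 = z + c1 e^{-Rz} + c2 e^{-2Rz} + c3 e^{-3Rz}
# versus the small-R*z approximation; binomial b's, s_{k,0} = 1/6 as in Fig. 2 (right).
from math import exp
from scipy.optimize import brentq

def prevalence_K3(a, Rstar, s10=1/6, s20=1/6, s30=1/6):
    b21, b31, b32 = 1 - a, (1 - a) ** 2, 2 * a * (1 - a)
    c1 = s10 + 2 * b21 * s20 + 3 * (b21 * b31 + 0.5 * b32) * s30
    c2 = 2 * (1 - b21) * (s20 + 3 * b31 * s30)
    c3 = 3 * (1 + (b21 - 2) * b31 - 0.5 * b32) * s30
    f = lambda z: (z + c1 * exp(-Rstar * z) + c2 * exp(-2 * Rstar * z)
                   + c3 * exp(-3 * Rstar * z) - 1.0)
    z_exact = brentq(f, 1e-6, 1.0)       # requires Rstar > R_c = 1/(c1 + 2c2 + 3c3)
    z_approx = (2 * (Rstar * (c1 + 2 * c2 + 3 * c3) - 1)
                / (Rstar ** 2 * (c1 + 4 * c2 + 9 * c3)))
    return z_exact, z_approx

print(prevalence_K3(a=0.5, Rstar=1.2))   # the approximation is only good near R_c
```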
## 5 Computing the peak of the infection
Let \(J=\sum_{k=1}^{K}kI_{k}\) denote the total number of infected. Then \(J\) satisfies
\[J^{\prime} =\sum_{k=1}^{K}kI_{k}^{\prime}=Y\left[-\frac{N}{\mathcal{R}^{*}}+ \sum_{k=1}^{K}k\sum_{j=k}^{K}jS_{j}b_{j,k}\right]=Y\left[-\frac{N}{\mathcal{R}^ {*}}+\sum_{j=1}^{K}jS_{j}\sum_{k=1}^{j}kb_{j,k}\right]\] \[=Y\left[-\frac{N}{\mathcal{R}^{*}}+\sum_{j=1}^{K}jS_{j}E_{j} \right]\;.\]
For \(K=2\) we have to consider the problem
\[S_{1}^{\prime} =Y\left[-S_{1}+2S_{2}b_{2,1}\right]\] \[S_{2}^{\prime} =Y\left[-2S_{2}\right]\] \[J^{\prime} =Y\left[-\frac{N}{\mathcal{R}^{*}}+S_{1}+2E_{2}S_{2}\right]\;.\]
Note, that \(E_{2}=b_{2,1}+2b_{2,2}\) and \(b_{2,1}+b_{2,2}=1\), hence \(b_{2,1}=2-E_{2}\). For sake of shorter notation and easier interpretation, we introduce the scaled compartments \(s_{1}=S_{1}/N\), \(s_{2}=S_{2}/N\) and \(j=J/N\). Writing \(s_{1}\) as a function of \(s_{2}\), we get
\[\frac{ds_{1}}{ds_{2}}=E_{2}-2+\frac{s_{1}}{2s_{2}}\]
with the solution
\[s_{1}=c\sqrt{s_{2}}-2(2-E_{2})s_{2}\;,\]
where \(c=2(2-E_{2})\sqrt{s_{2,0}}+s_{1,0}/\sqrt{s_{2,0}}\). For \(j\) we have the equation
\[\frac{dj}{ds_{2}}=\frac{-1/\mathcal{R}^{*}+s_{1}+2E_{2}s_{2}}{-2s_{2}}=\frac{ 1}{2\mathcal{R}^{*}s_{2}}-\frac{ds_{1}}{ds_{2}}-E_{2}\]
with the solution given by
\[j=\frac{1}{2\mathcal{R}^{*}}\ln\frac{s_{2}}{s_{2,0}}-s_{1}(s_{2})-E_{2}(s_{2}- s_{2,0})\;.\]
Figure 2: Visualization of the prevalence vs. out–household reproduction number \(\mathcal{R}^{*}\) and for different in–household attack rate \(a\). (Left:) Maximal household size \(K=2\). (Right:) Maximal household size \(K=3\). Also shown are the asymptotic approximation (4.4) and the critical value \(\mathcal{R}_{c}\).
The maximum of \(j\) is either attained at the initial time, in case of an under-critical epidemic, or the necessary condition \(j^{\prime}=0\) has to hold. Inserting \(s_{1}\) as a function of \(s_{2}\) we arrive at the quadratic equation
\[4(E_{2}-1)x^{2}+cx-\frac{1}{\mathcal{R}^{*}}=0\]
for \(x=\sqrt{s_{2}}\) with the positive root
\[x=\sqrt{s_{2}}=\frac{c}{8(E_{2}-1)}\left(\sqrt{1+\frac{16}{\mathcal{R}^{*}} \frac{E_{2}-1}{c^{2}}}-1\right)\;.\]
This solution is only meaningful, if \(s_{1}+2s_{2}\leq 1\), i.e.
\[\frac{1}{\mathcal{R}^{*}}\leq 1+2(E_{2}-1)x^{2} \tag{10}\]
which is an implicit equation for a threshold value of \(\mathcal{R}^{*}\).
To obtain the value of \(j\) at the maximum, we plug this root into the above solution for \(j\) and arrive at
\[j_{max}=1-\frac{1}{2\mathcal{R}^{*}}\left(1+\ln\frac{s_{2,0}}{x^{2}}\right)- \frac{cx}{2}\;.\]
Hence, the peak of infection \(j_{max}\) is a function of the out-household reproduction number \(\mathcal{R}^{*}=\beta_{k}/\gamma_{k}\), the expected infections in double households \(E_{2}\) and the initial conditions \(s_{2,0}\), \(s_{1,0}\). In particular, it is independent of the scaling of the recovery periods \(\gamma_{k}\), as can be seen also in the more complex setting presented in Figure 1.
The following Figure 1 shows the peak of infection \(j_{max}\) versus the out-household reproduction number \(\mathcal{R}^{*}\). Solutions are only plotted if the condition (10) is satisfied. The left figure shows the situation for \(E_{2}=1.5\) and different initial conditions. The right figure shows the variation with respect to \(E_{2}=1+a\), keeping the initial conditions fixed as \(s_{1,0}=0.5\) and \(s_{2,0}=0.25\).
## 6 Simulations and comparison with agent-based model
The analysis of our household model presented in the previous sections shows that larger households have a strong effect on the spread of the epidemic. To illustrate this, we simulate a single epidemic wave in populations with close-to-realistic household distributions. For comparison we choose the household size distributions for Bangladesh (BGD), Germany (GER) and Poland (POL) as published by the UN statistics division in 2011, see [14]. Using a sample of almost \(1\,000\,000\) individuals, we obtain the distribution shown in Table 1.
Figure 1: Peak height \(j_{\max}\) of a single epidemic wave vs. out–household reproduction number \(\mathcal{R}^{*}\). (Left:) Variation with respect to different initial household size distributions. (Right:) Variation with respect to different in–household attack rates.
_Remark 6.1_.: Note that in Table 1, for Poland and Bangladesh, we have redistributed the fraction of the population living in households of size \(6\) or bigger to households of size \(7\) in order to match the total population. How one treats the population represented by the tail of the household distribution can make a substantial difference in the simulation outcomes. To visualize that, we show in Figure 1 simulations for out-household reproduction number \(\mathcal{R}^{*}=0.9\), in-household attack rate \(a=0.2\) and three different treatments of the tail of the Polish household size distribution. The blue curve represents the population distribution given in Table 1. The green curve uses the extended census data including households up to size \(10+\). All households of size \(10+\) are treated as households of size _equal_ to \(10\). The red curve shows a simplified treatment of the tail, where all households of size \(6+\) are treated as being of size _equal_ to \(6\).
The simulation redistributing the households of size \(6+\) to size \(7\) (blue) agrees within the simulation accuracy with the detailed distribution (green). The simplified treatment (red) already shows a significant difference. This difference will be even more pronounced for smaller reproduction numbers \(\mathcal{R}^{*}\), since the smaller households are the first to become subcritical with decreasing \(\mathcal{R}^{*}\). This result clearly visualizes the need for a careful treatment of the tail of the household size distribution, since big households have a disproportionately large impact on the dynamics. Unfortunately, most publicly available data truncates the household size distribution at size \(6\). From a modeling and simulation point of view, there is a need for more detailed data on the household size distribution, including information for households of size bigger than \(6\).
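The redistribution itself amounts to choosing an effective tail size that conserves the total population. A sketch with hypothetical shares (the actual census numbers are those of Table 1):

```python
# Hedged sketch of the tail treatment in Remark 6.1 with made-up numbers: choose
# the effective size m of the lumped 6+ households so that the implied mean
# household size matches the known true mean mu1.
h = [0.24, 0.26, 0.20, 0.16, 0.08, 0.06]   # hypothetical household shares, sizes 1..5 and 6+
mu1_true = 2.80                            # hypothetical true mean household size
body = sum(k * hk for k, hk in zip(range(1, 6), h[:5]))
m_eff = (mu1_true - body) / h[5]
print(round(m_eff, 2))                     # about 6.67 -> redistribute the tail to size 7
```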
As initial condition for our simulation we assume \(100\) infected single households. The recovery rates inside the households are described using the model of parallel infections, i.e. \(\gamma_{k}=\gamma_{1}/(1+1/2+\cdots+1/k)\). For the out-household reproduction number \(\mathcal{R}^{*}\) we consider the range \(0.6\leq\mathcal{R}^{*}\leq 2.3\) and the in-household attack rate is chosen as \(a=0.25\). The ODE-system (2) is solved using a standard RK4(5)-method.
Figures 2 and 3 show a comparison of the three countries. The prevalence and the maximum number of infected shown in Fig. 2 clearly visualize that large households are drivers of the infection. For a moderate out-household reproduction number \(\mathcal{R}^{*}\simeq 1\), Bangladesh, with an average household size \(\mu_{1}=4.4\), faces a relative prevalence approximately \(50\%\) higher than Germany with an average household size of \(\mu_{1}=2.1\) persons. Also the peak number of infected is almost twice as high in Bangladesh compared to Germany for \(\mathcal{R}^{*}\in[1,1.25]\). Fig. 3 shows the simulation of a single infection wave for moderate out-household reproduction number \(\mathcal{R}^{*}=1.1\) and in-household attack rate \(a=0.25\). The graph illustrates the faster and more severe progression of the epidemics in case of larger households. The peak of the wave occurs in Bangladesh \(60\) days earlier and affects almost double the number of individuals compared to Germany.
To validate our model, we compared it to a stochastic, microscopic agent-based model developed at the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw. Complete details of this model are given in [13]. In this model agents have certain states (susceptible, infected, recovered, hospitalized, deceased, etc.) and infection events occur in certain contexts, e.g. in households, on the streets, at workplaces, and several more. Besides the household-context, the street-context is used to capture infection events outside households. Since the ODE-model (2) is a variant of an SIR-model, the agent-based model also just uses the SIR-states and ignores all other states. The agent-based model uses for each infected individual a recovery time that is sampled from an exponential distribution with mean 10 days. Based on the household distribution for Poland, Figure 4 provides a comparison of the computed prevalence vs. the out-household reproduction number for our model (2) and the agent-based model (blue squares). The solid lines show the results of the ODE-model for different in-household attack rates. The relative prevalence plotted in the figure is defined as the total number of recovered individuals for time \(t\to\infty\); here we use \(T=2\,000\) days for practical reasons.
Figure 5 shows the results for the peak of the infection wave. The graph on the left shows the peak of the incidences, i.e. the maximum number of daily new infections and the graph on the right shows the time, when this peak occurs.
For all three presented criteria: prevalence, peak height and peak time, our ODE-household model (2) matches quite well with the agent-based simulation. Best agreement can be seen for in-household attack rate \(a\) in the range between \(0.16\) and \(0.20\).
## 7 Effect of household quarantine
In this section we will extend our basic model (2) to households put under quarantine after an infection is detected. To this end, we introduce the additional compartments \(Q_{k}\) of quarantined households
Figure 4: Simulation of one infection wave, plotted is the relative prevalence \(z\) versus the out–household reproduction number \(\mathcal{R}^{*}\). Initially \(100\) infected single households in a population according to Table 1. The solid lines show the results of the ODE model for different in–household attack rates. The results of the agent–based model are shown by the blue squares (exponentially distributed recovery time).
of size \(k\). The extended household model reads as
\[S^{\prime}_{k} =Y\left[-kS_{k}+\sum_{j=k+1}^{K}jS_{j}\cdot b_{j,j-k}\right] \tag{11a}\] \[I^{\prime}_{k} =-\gamma_{k}I_{k}-q_{k}I_{k}+Y\sum_{j=k}^{K}jS_{j}\cdot b_{j,k}\] (11b) \[Q^{\prime}_{k} =q_{k}I_{k}\] (11c) \[R^{\prime}_{k} =\gamma_{k}I_{k} \tag{11d}\]
Here \(q_{k}>0\) denotes the detection and quarantine rate for a household of size \(k\).
Let \(q>0\) be the probability that an infected individual gets detected. The probability that a household consisting of \(k\) infected gets detected equals \(d_{k}(q)=1-(1-q)^{k}\). Given recovery and detection rates \(\gamma_{k}\) and \(q_{k}\) for a household of size \(k\), the probability that the household gets detected before recovery equals \(q_{k}/(\gamma_{k}+q_{k})\), which we set equal to \(d_{k}(q)\). Therefore
\[\frac{q_{k}}{\gamma_{k}}=\frac{d_{k}(q)}{1-d_{k}(q)}\sim kq\]
for \(q\ll 1\).
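A two-line numerical check of this relation:

```python
# Check q_k/gamma_k = d_k(q)/(1 - d_k(q)) ~ k*q for a small detection probability q.
q = 0.02
for k in (1, 2, 4, 7):
    dk = 1 - (1 - q) ** k
    print(k, round(dk / (1 - dk), 5), k * q)   # the two columns agree to leading order in q
```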
To compute the basic reproduction number, we repeat the computations for the next generation matrix. The vector \(\xi\) of infected compartments remains unaltered and the new quarantine compartments \(Q_{1},\ldots,Q_{K}\) are included in the vector \(\chi\). The gain term \(\mathcal{F}_{k}\) for the infected compartments remains unaltered as well, but the loss term reads as \(\mathcal{V}_{k}=(\gamma_{k}+q_{k})\xi_{k}\). Accordingly, its Jacobian modifies to \(V_{kk}=\gamma_{k}+q_{k}\) and finally the next generation matrix \(G\) reads as
\[G =\frac{1}{N}\left(\frac{j\beta_{j}}{\gamma_{j}+q_{j}}\sum_{m=i}^{K}mH_{ m}\,b_{m,i}\right)_{i,j}\] \[=\mathcal{R}^{*}\begin{pmatrix}\sum_{m=1}^{K}\frac{mH_{m}}{N}\,b_ {m,1}\\ \sum_{m=2}^{K}\frac{mH_{m}}{N}\,b_{m,2}\\ \vdots\\ \frac{KH_{K}}{N}\,b_{KK}\end{pmatrix}\cdot\left(\tfrac{1}{1+q_{1}/\gamma_{1}}, \tfrac{2}{1+q_{2}/\gamma_{2}},\ldots,\tfrac{K}{1+q_{K}/\gamma_{K}}\right)\]
Figure 5: Simulation of one infection wave, initially \(100\) infected single households in a population according to Table 1. Left: Peak number of daily new infections versus the out-household reproduction number \(\mathcal{R}^{*}\). Right: Time when the peak occurs vs. \(\mathcal{R}^{*}\). The solid lines show the results of the ODE model for different in-household attack rates. The results of the agent-based model are shown by the blue squares.
Its non-zero eigenvalue equals
\[\mathcal{R}_{q}:=\mathcal{R}^{*}\sum_{k=1}^{K}\frac{k}{1+q_{k}/\gamma_{k}}\sum_{m= k}^{K}\frac{mH_{m}}{N}b_{m,k}=\mathcal{R}^{*}\sum_{m=1}^{K}\frac{mH_{m}}{N}\sum_{k=1}^ {m}\frac{kb_{m,k}}{1+q_{k}/\gamma_{k}}\;.\]
Due to the denominator \(1+q_{k}/\gamma_{k}\) we cannot interpret the last sum as the expected number of infected cases in a household of size \(m\). In the asymptotic case of small quarantine rates \(q_{k}\ll 1\), we can use the series expansion \((1+q_{k}/\gamma_{k})^{-1}\sim 1-q_{k}/\gamma_{k}\sim 1-kq\) and get
\[\mathcal{R}_{q}\sim\mathcal{R}^{*}\sum_{m=1}^{K}\frac{mH_{m}}{N}\left(E_{m}-q \sum_{k=1}^{m}k^{2}b_{m,k}\right)\;.\]
So, the second moment of the in-household infection distribution comes into play.
If the expectation \(E_{m}\) and the second moment \(\sum_{k=1}^{m}k^{2}b_{m,k}\) of the in-household infections scale linearly with the household size \(m\), then the above reproduction number also depends on the third moment of the household size distribution. This leads to a straightforward extension of the result (15).
In case of a binomial distribution of the in-household infections, it holds that \(E_{m}=a(m-1)+1\) and \(\sum_{k=1}^{m}k^{2}b_{m,k}=a(m-1)\left(3+a(m-2)\right)+1\). For the reproduction number we arrive at the expression
\[\mathcal{R}_{q}\sim\mathcal{R}^{*}\left[1-\frac{q}{\gamma}+a\left(1-\frac{3q} {\gamma}(1-a)\right)\left(\frac{\mu_{2}}{\mu_{1}}-1\right)-\frac{q}{\gamma}a^{ 2}\left(\frac{\mu_{3}}{\mu_{1}}-1\right)\right]\;,\]
where \(\mu_{3}:=\sum_{k=1}^{K}k^{3}h_{k}\) denotes the third moment of the household size distribution.
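The small-\(q\) approximation is easily evaluated from the first three moments; in the sketch below all inputs are illustrative, and the ratio \(q/\gamma\) is the one appearing in the formula above.

```python
# Small-q approximation of R_q from the formula above; all inputs illustrative.
def Rq_approx(Rstar, a, q_over_gamma, mu1, mu2, mu3):
    return Rstar * (1 - q_over_gamma
                    + a * (1 - 3 * q_over_gamma * (1 - a)) * (mu2 / mu1 - 1)
                    - q_over_gamma * a ** 2 * (mu3 / mu1 - 1))

print(Rq_approx(Rstar=1.1, a=0.25, q_over_gamma=0.05, mu1=2.1, mu2=5.6, mu3=18.0))
```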
The effectiveness of quarantine measures can be assessed by relating the reduction in prevalence to the quarantined individuals. For a given detection probability \(q>0\), let \(Z(q)\) denote the prevalence observed under these quarantine measures and let \(Q(q)\) denote the total number of individuals quarantined. We define the effectiveness of the quarantine measures as
\[\eta_{q}:=\frac{|Z(q)-Z(0)|}{Q(q)}\;.\]
The numerator equals the reduction in prevalence, and hence \(\eta_{q}\) can be interpreted as the number of infected cases saved per quarantined case.
In Figure 1 we compare the effectiveness in the setting of household distributions resembling Germany (predominantly small households) and Bangladesh (large households are dominant) for out-household reproduction numbers in the range between \(0.9\) and \(1.2\). The results indicate that quarantine measures are less effective in societies with larger households, since infection chains inside households are not affected by the quarantine. On the other hand, quarantine measures seem most effective in the presence of small households and for low reproduction numbers.
## 8 Conclusion and Outlook
The findings of the compartmental household model (11) are in rather good agreement with microscopic agent-based simulations. Despite its several simplifying assumptions, the prevalence and the peak of the infection wave are reproduced quite well.
However, field data suggests that the in-household attack rate \(a\) is significantly higher for 2-person households than for larger households, see [7, 12]. Analyzing the data given in [7], the attack rate in two-person households is larger by a factor of \(1.25\)-\(1.5\) than in households with three or more members. According to [12] this may be caused by a spousal relationship to the index case, reflecting intimacy, sleeping in the same room, and hence longer or more direct exposure to the index case. An extension of the model (3) to attack rates \(a_{k}\) depending on the household size \(k\) is straightforward. Carrying over the analytical computation of the reproduction number (2) would lead to a variant of (2) including weighted moments of the household size distribution.
When considering quarantine for entire households, the model shows some interesting outcomes. Defining the ratio of reduction in prevalence and number of quarantined persons as the effectivity of a quarantine, the model suggests that quarantine is more effective in populations where small households dominate. Whether this finding is supported by field data is a topic for future literature research.
Vaccinations are not yet included in the model. From the point of view of public health, the question arises whether the limited resources of vaccines should be distributed with preference to larger households. Given the catalytic effect of large households on the disease dynamics, one could suggest vaccinating large households first. However, a detailed analysis of this question will be the subject of follow-up research.
When extending our ODE SIR-model to an SIRS-model including possible reinfection, a recombination of the recovered sub-households is required. Again, this extension is left for future work. A relaxation of the recovered sub-household distribution towards the overall household distribution seems a tractable approach to tackle this issue.
|
2305.19195 | PanoGen: Text-Conditioned Panoramic Environment Generation for
Vision-and-Language Navigation | Vision-and-Language Navigation (VLN) requires the agent to follow language
instructions to navigate through 3D environments. One main challenge in VLN is
the limited availability of photorealistic training environments, which makes
it hard to generalize to new and unseen environments. To address this problem,
we propose PanoGen, a generation method that can potentially create an infinite
number of diverse panoramic environments conditioned on text. Specifically, we
collect room descriptions by captioning the room images in existing
Matterport3D environments, and leverage a state-of-the-art text-to-image
diffusion model to generate the new panoramic environments. We use recursive
outpainting over the generated images to create consistent 360-degree panorama
views. Our new panoramic environments share similar semantic information with
the original environments by conditioning on text descriptions, which ensures
the co-occurrence of objects in the panorama follows human intuition, and
creates enough diversity in room appearance and layout with image outpainting.
Lastly, we explore two ways of utilizing PanoGen in VLN pre-training and
fine-tuning. We generate instructions for paths in our PanoGen environments
with a speaker built on a pre-trained vision-and-language model for VLN
pre-training, and augment the visual observation with our panoramic
environments during agents' fine-tuning to avoid overfitting to seen
environments. Empirically, learning with our PanoGen environments achieves the
new state-of-the-art on the Room-to-Room, Room-for-Room, and CVDN datasets.
Pre-training with our PanoGen speaker data is especially effective for CVDN,
which has under-specified instructions and needs commonsense knowledge. Lastly,
we show that the agent can benefit from training with more generated panoramic
environments, suggesting promising results for scaling up the PanoGen
environments. | Jialu Li, Mohit Bansal | 2023-05-30T16:39:54Z | http://arxiv.org/abs/2305.19195v1 | # PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation
###### Abstract
Vision-and-Language Navigation requires the agent to follow language instructions to navigate through 3D environments. One main challenge in Vision-and-Language Navigation is the limited availability of photorealistic training environments, which makes it hard to generalize to new and unseen environments. To address this problem, we propose PanoGen, a generation method that can potentially create an infinite number of diverse panoramic environments conditioned on text. Specifically, we collect room descriptions by captioning the room images in existing Matterport3D environments, and leverage a state-of-the-art text-to-image diffusion model to generate the new panoramic environments. We use recursive outpainting over the generated images to create consistent 360-degree panorama views. Our new panoramic environments share similar semantic information with the original environments by conditioning on text descriptions, which ensures the co-occurrence of objects in the panorama follows human intuition, and creates enough diversity in room appearance and layout with image outpainting. Lastly, we explore two ways of utilizing PanoGen in VLN pre-training and fine-tuning. We generate instructions for paths in our PanoGen environments with a speaker built on a pre-trained vision-and-language model for VLN pre-training, and augment the visual observation with our panoramic environments during agents' fine-tuning to avoid overfitting to seen environments. Empirically, learning with our PanoGen environments achieves the new state-of-the-art on the Room-to-Room, Room-for-Room, and CVDN datasets. Besides, we find that pre-training with our PanoGen speaker data is especially effective for CVDN, which has under-specified instructions and needs commonsense knowledge to reach the target. Lastly, we show that the agent can benefit from training with more generated panoramic environments, suggesting promising results for scaling up the PanoGen environments to enhance agents' generalization to unseen environments.
[https://pano-gen.github.io](https://pano-gen.github.io)
## 1 Introduction
Vision-and-Language Navigation (VLN) requires an agent to make sequential decisions based on both language instructions and visual environments. In indoor instruction-guided navigation, many datasets have been proposed. These datasets aim to enhance agents' ability to understand detailed navigation instructions [2], dialogue style instructions [48], instructions in different languages [22], and high-level object-finding instructions [39]. Though many efforts have been made to help the agent learn from diverse instruction inputs, all these datasets are built on the same 3D room environments from Matterport3D, which only contains 60 different room environments for agents' training. This is because diverse photorealistic 3D room environments with a large number of sampled human-height viewpoints are very hard to collect. This limited availability of training environments makes it challenging for the agent to learn the navigation policy well and to generalize to unseen new room environments.
Several works in VLN have been proposed to address this challenge. [47] first proposes to perform dropout on environments during training to avoid the agent overfitting to seen environments. [30] and [27] further propose to edit the existing environments by mixing up environments and changing the style and appearance of the environments. However, these approaches are still limited by the 3D environments in Matterport3D, and do not create new environments with different objects and layouts, which is important for agents' generalization to unseen environments. [9] takes one step further and introduces more unannotated 3D room environments from Habitat Matterport3D dataset (HM3D) [42], with machine-generated instructions to augment the training environments and enhance agents' generalization performance. While 900 environments are introduced, their method still relies on existing manually-captured 3D environments in HM3D, which cannot be further scaled up due to the very expensive environment collection process. Hence, in this paper, we aim to explore the possibility of generating unlimited panoramic environments without any human annotation, and investigate how to effectively utilize the generated panoramic environments for improving navigation agents' ability to generalize to unseen environments.
To this end, we propose PanoGen, a generation method that can potentially create infinite diverse panoramic environments conditioned on text. As shown in Figure 1, we first collect descriptions of room environments by using a state-of-the-art vision-language model, BLIP-2 [28], to annotate the view images in the Matterport3D dataset. Then, we use text-to-image diffusion models to generate diverse room images based on the text captions. As the agent navigates with egocentric panoramic observations, learning from disjoint single-view images (middle column in Figure 1) would confuse the agent, and it could not learn the spatial relationships between objects due to inconsistent views. Hence, to keep the observations coherent within the same panorama, we additionally propose a recursive image outpainting approach, which generates missing observations beyond the original image boundaries (right column in Figure 1). Specifically, we choose one generated image in the panorama as the starting point, gradually rotate the camera angle right, left, up, and down, and then outpaint the unseen observation based on the text descriptions (a code sketch of this loop follows Figure 1). Lastly, we explore and compare two training methods to effectively utilize the generated panoramic environments. In the first method, we train a speaker to automatically generate instructions for the generated panoramic environments, based on a pre-trained vision-and-language model, mPLUG [23], which has state-of-the-art performance for zero-shot video captioning. We pre-train the VLN agent with both the original training data and the generated instruction-trajectory pairs from our panoramic environments. In the second method, instead of learning from speaker data, we directly fine-tune the VLN agents on both the original environments and our panoramic environments by randomly replacing some of the original observations with our panoramic environments to avoid overfitting to training environments.
Figure 1: Overview of our PanoGen. We first generate captions for all the room panoramas in the Matterport3D dataset. Each panorama is discretized into 36 views, we show 15 views here for a better view of each discretized image. Then, we generate panoramic environments with recursive outpainting over a single image generated from the text caption.
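To make the recursive outpainting concrete, the following is a minimal sketch of one rotation step (our own illustration, not the authors' released code): the current view is shifted sideways and a text-conditioned inpainting diffusion model fills the newly revealed strip. The checkpoint names, the 256-pixel stride, and the number of steps are illustrative assumptions.

```python
# Hedged sketch: grow a panorama strip by recursive text-conditioned outpainting.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")

def outpaint_right(view: Image.Image, caption: str, stride: int = 256) -> Image.Image:
    w, h = view.size                                       # e.g. 512 x 512
    canvas = Image.new("RGB", (w, h))
    canvas.paste(view.crop((stride, 0, w, h)), (0, 0))     # keep the overlap as context
    mask = Image.new("L", (w, h), 0)
    mask.paste(255, (w - stride, 0, w, h))                 # white = region to inpaint
    return inpaint(prompt=caption, image=canvas, mask_image=mask).images[0]

caption = "a cozy bedroom with a wooden bed and a dresser"  # a BLIP-2 style caption
views = [t2i(caption).images[0]]                            # seed view from text
for _ in range(8):                                          # rotate the "camera" right
    views.append(outpaint_right(views[-1], caption))
```

Rotating left, up, and down works analogously with the shift direction changed; stitching the overlapping views then yields a consistent 360-degree panorama.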
We conduct experiments on the Room-to-Room (R2R) [2], Room-for-Room (R4R) [17], and CVDN [48] datasets. We measure agents' ability to follow fine-grained instructions of various lengths (R2R and R4R) and under-specified dialogue instructions (CVDN). Empirical results demonstrate that training with our PanoGen environments could improve the SotA agents by 2.7% in success rate and 1.9% in SPL on the R2R test leaderboard, and improve the goal progress by 1.59 meters on the CVDN test leaderboard, achieving a relative gain of 28.5% compared with previous SotA agents. The large improvement on the CVDN dataset suggests that our PanoGen introduces diverse commonsense knowledge of the room environments, and helps navigation when given under-specified dialogue instructions. Lastly, we analyze the impact of the number of our PanoGen environments used for VLN training, and demonstrate the potential of scaling up our PanoGen environments for further enhancing agents' generalization ability.
## 2 Related Work
**Vision-and-Language Navigation** Vision-and-Language Navigation is the task that requires an agent to navigate through the visual environment based on language instructions. Many datasets [2; 17; 48; 7; 35; 45; 37; 22; 39] have been proposed for this task. In this paper, we focus on indoor navigation, specifically the Room-to-Room dataset (R2R), the Room-for-Room dataset (R4R), and the Cooperative Vision-and-Dialog Navigation dataset (CVDN). To solve this challenging task, early works build their models based on a sequence-to-sequence LSTM model [47; 31; 25; 26; 51; 29]. [32] first utilizes pre-trained vision-and-language transformers to learn an agent that picks the correct path. [16] enhances the transformer-based agent with a recurrent state to model navigation history, and [8] proposes a hierarchical architecture to encode both spatial and temporal information in navigation history. Besides, some works explore utilizing graph information to build better global visual representations [10; 49; 58], and other works propose better proxy tasks for learning better temporal knowledge [40] and decision making ability [8; 24] during pre-training. In this paper, we build our approach on the state-of-the-art agent DUET [10].
**Environment Scarcity in Vision-and-Language Navigation** One main challenge in Vision-and-Language Navigation is to learn from limited available training environments and generalize to unseen new environments. Though many datasets have been proposed for indoor instruction-guided navigation with diverse instruction inputs [2; 22; 17; 48; 39], their navigation environments are all from Matterport3D [6], which only contains 61 environments for training, and 29 environments for unseen validation and testing. Previous works aim to address this challenge from multiple aspects. One line of work tries to mitigate the environment bias during training by dropping out environment features [47] and learning environment-agnostic visual representation [26; 52]. Another line of research aims to generate or introduce more environments for VLN training. Specifically, [27] and [30] edit the existing environments by mixing up environments or changing room style and object appearances. However, these approaches are still limited by the existing Matterport3D environments. [9] introduces 900 environments from HM3D [42] with generated instructions to address the environment scarcity and learn a more generalizable agent. However, it's expensive to collect 3D environments, and thus their approach is hard to be further scaled up. Different from them, our proposed PanoGen generates new panoramic environments without the need of any human annotation, and could potentially generate unlimited panoramic environments. Lastly, [20] generates future views based on history observations in a given trajectory so as to build a world model for more informed planning, and [21] synthesizes views at different locations in the original environments as data augmentation for VLN training. Our work focuses on introducing a diverse set of new environments (in appearance), while maintaining consistency in the panorama, using text-conditioned recursive image outpainting.
**Text-to-Image Generation** Text-to-image generation has been one of the main research areas in both NLP and CV. With the advances of GANs [13], previous works aim to generate high-resolution images with stacked generators [55; 56], and improve the perceptual quality of an image by adding regularization losses [5]. However, GANs are hard to optimize [14; 33] and face the problem of mode collapse [34]. Recently, [43] proposes to utilize latent diffusion models to generate images with high resolution. By first encoding the image into a low-dimensional latent space that is perceptually equivalent to the pixel space, the latent diffusion model is computationally more efficient while maintaining good generation quality. Diffusion models show better performance on several image generation tasks like image inpainting [57], image outpainting [54], and image synthesis [38]. [46]
first extends diffusion models to 3D indoor scene synthesis by conditioning them on geometry and editing prompts. Different from them, we propose to generate coherent panoramic environments with diverse objects based on detailed text descriptions.
## 3 PanoGen: Generating Panoramic Environments
In this section, we describe our PanoGen method, which generates panoramic environments from text descriptions for Vision-and-Language Navigation. PanoGen addresses two main challenges for panoramic environment generation. First, the panorama observation of an indoor room environment contains multiple objects in a complex layout, which makes it hard for a short description to cover all the details in the panorama. Thus, we first discretize the panorama into 36 views and generate each view separately. However, separately generated single images will be inconsistent with each other. Hence, to generate consistent panoramic environments, we propose a recursive image outpainting approach (Sec. 3.2). Second, the generated panoramic environment should align with human commonsense knowledge. For example, when generating a panorama for a bedroom, the panorama can contain objects like "bed" and "dresser", but not "refrigerator". To address this problem, instead of utilizing the large amount of room image captions available on the web for different portions of the panorama, which might lead to unreasonable co-occurrences of objects, we directly generate text descriptions for the panoramas in the Matterport3D environments (Sec. 3.1).
### Room Description Collection
We first collect room descriptions from the Matterport3D environments (Figure 2 Right). To generate panoramic environments with maximally diverse objects and reasonable layouts, we discretize the panorama into 36 views and caption the discretized views separately. The resulting captions give more detailed descriptions of the objects in each smaller region of the panorama. We then utilize the pre-trained vision-and-language model BLIP-2 [28] to caption the discretized room images. BLIP-2 is trained by bootstrapping off-the-shelf pre-trained image encoders and large language models. Benefiting from the general cross-modal alignment between text and image learned during pre-training, and the zero-shot generation ability of large language models, BLIP-2 can generate creative and informative captions for images in the Matterport3D dataset.
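For concreteness, the captioning step can be sketched as follows. This is a minimal illustration using the HuggingFace `transformers` interface to BLIP-2; the checkpoint name matches the BLIP-2-FlanT5-xxL model mentioned in Sec. 5.2, while the view-loading details are assumptions rather than the exact implementation.

```python
# Minimal sketch: caption each of the 36 discretized views with BLIP-2.
# Assumes HuggingFace `transformers` and `PIL`; view paths are placeholders.
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl")

def caption_views(view_paths):
    """Return one caption per discretized view image."""
    captions = []
    for path in view_paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=30)
        captions.append(processor.decode(output_ids[0], skip_special_tokens=True).strip())
    return captions
```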
### Generating Panorama with Text-Conditioned Image Outpainting
Figure 2: Given an image generated based on the room description, we rotate the camera angle and outpaint the unseen observation recursively to generate a consistent 360-degree panorama view.

While many works [43; 18; 36; 44; 3] have focused on building better models for text-to-image generation, how to transfer these advances to text-to-panorama generation is still less explored. [4] combines multiple diffusion generation processes with shared constraints to generate a panorama based on a text description. However, their panorama generation process can only be conditioned on one single text description of the full panorama, which is not enough for complex indoor navigation environments with diverse objects. Thus, we propose to generate the panorama view by outpainting recursively based on multiple text descriptions.
As discussed in Sec. 3.1, each discretized image in the Matterport3D environment is paired with a caption generated by BLIP-2. With the detailed descriptions, we use the state-of-the-art text-to-image diffusion model Stable Diffusion [43] to generate the images. However, the panorama view is not continuous if we directly 'stitch' the generated images. To address this problem (Figure 2), we first generate one single view with zero camera elevation in the panorama based on its caption. The view with zero camera elevation serves as a better starting point, as it usually contains more important information in the panorama, whereas the positive elevation view usually consists of objects on the ceiling, and the negative elevation view usually faces the ground. Next, we rotate the generated image right, up and down by \(p_{r}\%,p_{u}\%,p_{d}\%\) respectively, and outpaint the unseen observation based on the caption of the nearby view. By conditioning the image generation on both the text caption and the nearby view, the generated images are consistent in style and can be stitched into a coherent panorama. For example, as shown in Figure 2, the caption "a bedroom with a bed and a dresser" mentions "bed", and the bed is already partially generated in the image. When outpainting the unseen observation, our generation model will complete the bed and generate coherent surrounding views, instead of generating a new bed with a different appearance. We repeat this process until we generate all 36 views in the panorama. We generate one new panorama for each panorama in the R2R training environments. In total, our PanoGen has 7,644 panoramas, stitched from 275,184 images.
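The recursive generation loop can be summarised with the following sketch. The `generate`, `outpaint`, and `rotate` callables stand in for the Stable Diffusion text-to-image and inpainting models and the panorama re-projection step; they, and the `neighbors` adjacency map over the 36 views, are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of recursive panorama outpainting over the 36 discretized views.
# `generate(caption)` -> image, `outpaint(partial_image, caption)` -> image,
# `rotate(image, direction)` shifts the view by p_r/p_u/p_d percent; all are
# injected stand-ins for the diffusion models described above.
def generate_panorama(captions, neighbors, generate, outpaint, rotate):
    """captions: dict view_id -> text; neighbors: dict view_id -> list of
    (adjacent_view_id, direction) pairs. Returns dict view_id -> image."""
    start = 0  # a view with zero camera elevation, the most informative row
    views = {start: generate(captions[start])}
    frontier = [start]
    while frontier:
        idx = frontier.pop()
        for nxt, direction in neighbors[idx]:
            if nxt in views:
                continue
            partial = rotate(views[idx], direction)        # expose unseen region
            views[nxt] = outpaint(partial, captions[nxt])  # text-conditioned fill
            frontier.append(nxt)
    return views  # the 36 views can then be stitched into one panorama
```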
## 4 Utilizing Panoramic Environments for VLN Training
In this section, we first introduce the problem setup of VLN in Sec. 4.1, and the general training procedures for VLN in Sec. 4.2. Then, we introduce the two ways that we utilize the generated panoramic environments. Specifically, we first explore generating new instruction-trajectory pairs in the panoramic environments and utilize these data in VLN pre-training (Sec. 4.3). Then, we directly utilize the generated panoramic environments to replace the original observation during VLN fine-tuning to avoid agents' overfitting to training environments (Sec. 4.4).
### Problem Setup
In Vision-and-Language Navigation, the agent needs to navigate through the environment based on natural language instructions. Formally, at each time step \(t\), the agent takes in the full language instruction \(I\), a panoramic observation \(P_{t}\) of the current location, and navigation history observations \(\{P_{i}\}_{i=1}^{t-1}\). The agent needs to pick the next step from a set of navigable locations \(\{g_{t,k}\}_{k=1}^{K}\). We follow the setup in [10], where navigable locations include both adjacent locations, which can be reached in one step, and global viewpoints, which are observed but not visited in navigation history. Navigation finishes when the agent predicts the 'STOP' action or reaches the maximum number of navigation steps.
### VLN Training Procedures
Training a Vision-and-Language Navigation agent contains two stages: pre-training and fine-tuning.
**Pre-training.** In the pre-training stage, the agent is pre-trained with three proxy tasks: Masked Language Modeling (MLM), Instruction and Trajectory Matching (ITM), and Single Action Prediction (SAP). Specifically, in Masked Language Modeling, the agent needs to predict the randomly masked words given both the unmasked instructions and the full trajectory observations. In Instruction and Trajectory Matching, the agent needs to learn to pick the correct instruction-trajectory pair from one positive pair and four negative pairs. Two of the negatives come from other trajectories in the same batch, and the other two shuffle the original trajectories so that the agent can learn the order of the observations in the trajectory. In Single Action Prediction, the agent mimics the navigation task by predicting a single action based on the full instruction and navigation history.
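As an illustration, the negative pairs for ITM can be built as in the following sketch (trajectories are treated as lists of observations; the sampling details are assumptions for illustration, not the exact implementation):

```python
import random

def itm_pairs(instruction, trajectory, batch_trajectories):
    """Build one positive and four negative instruction-trajectory pairs:
    two negatives from other trajectories in the batch, two from shuffling
    the positive trajectory's observation order. Assumes batch size >= 3."""
    pairs = [(instruction, trajectory, 1)]
    others = [t for t in batch_trajectories if t is not trajectory]
    for neg in random.sample(others, 2):      # in-batch trajectory negatives
        pairs.append((instruction, neg, 0))
    for _ in range(2):                        # order-shuffled negatives
        shuffled = list(trajectory)
        random.shuffle(shuffled)
        pairs.append((instruction, shuffled, 0))
    return pairs
```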
**Fine-tuning.** In the fine-tuning stage, we follow [10] and train the agent with supervision from pseudo interactive demonstrations. Specifically, at each time step, the trajectory used for ground-truth supervision is sampled based on the current policy learned by the agent. This sampling process enables the agent to explore the environment and generalize better.
### Generating Instructions for Paths in Panoramic Environments
Given the high-quality and coherent panoramic environments from our PanoGen, we investigate how to utilize these environments to mitigate the data scarcity problem in VLN agents' training and to enhance VLN agents' generalization ability.
As our PanoGen environments are generated conditioned on text captions, they share similar semantics with the original environments, but the room layouts and appearances differ from the original environments. Thus, the instruction for traversing the original environment will not align well with the new panoramic environment. To address this problem, we train a speaker to generate instructions for the new panoramic environments.
Previous works [47; 12; 27; 15] train a small LSTM-based speaker from scratch on VLN datasets to generate instructions for unannotated paths in the Matterport3D environments. However, as these speakers are not pre-trained on larger image-text datasets, they lack general visual grounding knowledge, and it is therefore hard for them to generate instructions with diverse entity mentions that do not appear in the small training data. Marky [50] improves the speaker by utilizing multilingual T5 (mT5) [53], a text-to-text encoder-decoder transformer, which gives the speaker a large multilingual vocabulary and prior text-domain knowledge. However, mT5 has only text-domain knowledge and must learn visual grounding from scratch on the limited training data. Hence, to introduce general cross-modal alignment information to the agent, we propose to build our speaker on mPLUG [23], a vision-language model that can do both multi-modal reasoning and generation.
mPLUG is a transformer pre-trained on image-text pairs. To adapt it to navigation trajectories, which are sequences of panorama images, we first simplify the panorama representation to the single view the agent is facing. The single views are first encoded with CLIP-ViT/B-16. The encoded image patches are then flattened and concatenated as the input for instruction generation. We fine-tune mPLUG on the Room-to-Room (R2R) training data, and use it to automatically generate instructions for all the paths in the R2R training data with our new panoramic observations. In total, our speaker data has 4,675 instruction-trajectory pairs.
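A minimal sketch of how the speaker's visual input can be assembled from a trajectory (the encoder call and shapes are assumptions consistent with the description above):

```python
import torch

def speaker_visual_input(facing_views, clip_encoder):
    """facing_views: the T single views the agent faces along the trajectory.
    clip_encoder(view) -> (P, D) patch embeddings (e.g. from CLIP-ViT/B-16).
    Returns a (T * P, D) sequence used to condition instruction generation."""
    patch_seqs = [clip_encoder(view) for view in facing_views]
    return torch.cat(patch_seqs, dim=0)  # flatten and concatenate over time
```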
Following [10], we use the R2R dataset [2] and the Prevalent dataset [15] for agents' pre-training. Moreover, we further pre-train the agent with our speaker data, which introduces our panoramic environments to the agent and improves the agents' generalization ability.
### Learning from Panoramic Environments during Fine-tuning
We further enhance VLN fine-tuning by incorporating our panoramic environments. Specifically, we randomly replace \(m\%\) of the observations in the trajectory with our panoramic environments during fine-tuning. This observation replacement helps the agent avoid overfitting to the limited training environments. As our panoramic environments are generated conditioned on text descriptions of the room, the semantics underlying the panoramic observations are similar to those of the original environments. This ensures that, when the replacement ratio \(m\%\) is not large, the instruction and the observations still align reasonably well after replacing the original environments with our panoramic observations (discussed in Sec. 6.4).
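Concretely, the replacement step amounts to the sketch below; `panogen_lookup` (mapping a viewpoint to its generated panorama) is an illustrative name, and m = 0.3 is the best ratio found in Table 4.

```python
import random

def augment_trajectory(trajectory, panogen_lookup, m=0.3):
    """trajectory: list of (viewpoint_id, observation) pairs. Each observation
    is independently replaced by its PanoGen panorama with probability m."""
    return [
        (v, panogen_lookup[v]) if random.random() < m else (v, obs)
        for v, obs in trajectory
    ]
```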
## 5 Experimental Setup
### Dataset and Evaluation Metrics
We evaluate our agent on three datasets: the Room-to-Room dataset (R2R) [2], the Cooperative Vision-and-Dialog Navigation dataset (CVDN) [48], and the Room-for-Room dataset (R4R) [17]. The training set contains 61 different room environments, while the unseen validation set and the test set contain 11 and 18 room environments, respectively, that are unseen during training. Details of the datasets can be found in the Appendix.
We evaluate our model on the following metrics: (1) Success Rate (SR), which measures whether the agent stops within 3 meters of the target. (2) Success Rate Weighted by Path Length (SPL) [1], which penalizes long paths that explore the environment to reach the target instead of following the instructions. (3) Goal Progress (GP), which calculates the distance that the agent moves toward the target location. (4) Navigation Error (NE), which is the distance between the stop location and the target. (5) Trajectory Length (TL), which counts the total navigation length of the agent. We consider SPL as the main metric for the R2R and R4R datasets, and GP as the main metric for the CVDN dataset.
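For reference, SR and SPL can be computed as in the sketch below (a standard formulation of the definitions above; the field names are illustrative):

```python
def success_rate_and_spl(episodes, threshold=3.0):
    """episodes: list of dicts with `ne` (navigation error, meters), `tl`
    (trajectory length) and `shortest` (shortest-path length to the goal)."""
    sr = spl = 0.0
    for ep in episodes:
        success = ep["ne"] <= threshold          # stop within 3 meters of target
        sr += success
        if success:                              # penalize long, exploratory paths
            spl += ep["shortest"] / max(ep["tl"], ep["shortest"])
    n = len(episodes)
    return sr / n, spl / n
```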
### Implementation Details
In panoramic environment generation, we caption all the view images in the training environments of the R2R dataset with BLIP-2-FlanT5-xxL. We utilize the stable-diffusion-v2.1 base model to generate the single view based on the caption only, and use the stable-diffusion-v1.5-inpainting model to outpaint the unseen observation for the rotated views. In speaker data generation, we build our speaker model on mPLUG-base, which has 350M parameters and utilizes ViT/B-16 as the visual backbone. For navigation training, we adopt the agent architecture from DUET [10] and follow its training hyperparameters. Different from DUET, we utilize CLIP-ViT/B-16 to extract the visual features. We report reproduced baseline performance with CLIP-ViT/B-16 features for a fair comparison. The best model is selected based on performance on the validation unseen set.
## 6 Results and Analysis
In this section, we first present state-of-the-art performance on the test leaderboards of the Room-to-Room dataset and the CVDN dataset in Sec. 6.1. Then, we show some qualitative examples of our generated panoramas in Sec. 6.2. Besides, we use ablation studies to demonstrate that utilizing our speaker data in the panoramic environments during pre-training, and randomly replacing observations with new panoramas during fine-tuning, are effective for enhancing VLN agents' performance in unseen environments in Sec. 6.3 and Sec. 6.4. Lastly, we investigate how the number of panoramic environments influences performance in Sec. 6.5.
### Test Set Results
We show our method's performance on both the Room-to-Room (R2R) and the Cooperative Vision-and-Dialog Navigation (CVDN) datasets. We adopt the DUET [10] architecture for our navigation agent. Different from DUET, which uses ViT-B/16 [11] pre-trained on ImageNet to extract features, we use CLIP-ViT-B/16 [41], as it shows better performance on the R2R dataset ("DUET" vs. "DUET-CLIP" in Table 1). As shown in Table 1, fine-tuning with panoramic environments from our PanoGen improves the previous SotA agent DUET [10] by 2.7% in success rate and 2.9% in SPL on the Room-to-Room test leaderboard. This demonstrates the effectiveness of utilizing our generated panoramic environments to improve agents' generalization to unseen environments. Moreover, pre-training with our speaker data improves the goal progress by 1.59 meters on the CVDN test leaderboard, a relative
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c} \hline \hline \multicolumn{1}{c|}{**Models**} & \multicolumn{8}{c|}{**Room-to-Room dataset**} & \multicolumn{2}{c}{**CVDN**} \\ \hline & \multicolumn{4}{c|}{**Validation Unseen**} & \multicolumn{4}{c|}{**Test**} & **Val** & **Test** \\ \hline & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) & GP \(\uparrow\) & GP \(\uparrow\) \\ \hline EnvDrop [47] & 10.70 & 5.22 & 52.0 & 48.0 & 11.66 & 5.23 & 51.0 & 47.0 & - & - \\ PREVALENT [15] & 10.19 & 4.71 & 58.0 & 53.0 & 10.51 & 5.30 & 54.0 & 51.0 & 3.15 & 2.44 \\ RecBERT [16] & 12.01 & 3.93 & 63.0 & 57.0 & 12.35 & 4.09 & 63.0 & 57.0 & - & - \\ NDH-Full [19] & - & - & - & - & - & - & - & - & 5.51 & 5.27 \\ REM\({}^{\spadesuit}\) [30] & 12.44 & 3.89 & 63.6 & 57.9 & 13.11 & 3.87 & 65.2 & 59.1 & - & - \\ HAMT [8] & 11.46 & **2.29** & 66.0 & 61.0 & 12.27 & 3.93 & 65.0 & 60.0 & 5.13 & 5.58 \\ EnvEdit\({}^{\spadesuit}\) [27] & 12.13 & 3.22 & 67.9 & 62.9 & - & - & - & - & - & - \\ SE3DS\({}^{\spadesuit}\) [21] & - & 3.29 & 69.0 & 62.0 & - & 3.67 & 66.0 & 60.0 & - & - \\ DUET [10] & 13.94 & 3.31 & 72.0 & 60.0 & 14.73 & 3.65 & 69.0 & 59.0 & - & - \\ DUET-CLIP & 12.92 & 3.19 & 72.8 & 63.4 & - & - & - & - & 5.50 & - \\ PanoGen & 13.40 & 3.03 & **74.2** & **64.3** & 14.38 & **3.31** & **71.7** & **61.9** & **5.93** & **7.17** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test leaderboard performance for Room-to-Room dataset and CVDN dataset. \(\spadesuit\) indicates approaches that augment the training environments. For “EnvEdit”, we report the non-ensemble performance for a fair comparison to all other methods. Best results are in bold, and second best results are underlined.
gain of 28.5% compared with the previous SotA agent HAMT [8]. This large improvement demonstrates that learning from our PanoGen environments is especially helpful for following under-specified instructions in unseen environments, which requires commonsense knowledge of the visual observation to reach the target. On both the Room-to-Room dataset and the CVDN dataset, learning with our PanoGen environments achieves new state-of-the-art performance.
### Qualitative Analysis of Panoramic Environment
We show some panoramic environments generated with our PanoGen in Figure 3. We can see that directly generating discretized views based on captions produces disjoint and inconsistent results (row "Stable Diffusion for Discretized Views"). In comparison, our recursive outpainting approach generates continuous views that can be stitched together to form a high-quality panorama (row "PanoGen").
Besides the high quality and coherency, our generated panoramic environments are able to preserve the wide range of objects that appear in the original environments, while generating them with new appearances and different room layouts. For example, the left generated panorama contains a corridor view and shows multiple rooms that are connected to the corridor (e.g., a bedroom and a bathroom). This layout also follows human commonsense knowledge, where a bedroom and a bathroom can be connected by a corridor.
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c} \hline \hline
**Models** & \multicolumn{4}{c|}{**Room-to-Room**} & \multicolumn{2}{c|}{**CVDN**} & \multicolumn{4}{c}{**Room-for-Room**} \\ \hline & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) & TL & GP \(\uparrow\) & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) \\ \hline DUET [10] & 13.94 & 3.31 & 72 & 60 & - & - & - & - & - & - \\ DUET-CLIP & 12.92 & 3.19 & 72.84 & 63.37 & 24.09 & 5.50 & 21.04 & **6.06** & **46.61** & 41.94 \\ PanoGen+EnvDrop & 13.57 & 3.05 & 73.69 & **63.44** & 25.17 & 5.81 & 22.88 & 6.17 & 46.06 & 40.33 \\ PanoGen+mPLUG & 14.58 & **2.85** & **74.20** & 62.81 & 24.66 & **5.93** & 18.32 & 6.12 & 45.78 & **42.52** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation results for utilizing our speaker data during pre-training on validation unseen set for R2R, CVDN, and R4R datasets.
Figure 3: Qualitative analysis of the panoramic environments generated with our PanoGen. “Matterport3D” is the original environment for VLN tasks. “Stable Diffusion for Discretized Views” is the concatenation of separately generated discretized views given text captions.
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c} \hline \hline
**Models** & \multicolumn{4}{c|}{**Room-to-Room**} & \multicolumn{2}{c|}{**CVDN**} & \multicolumn{4}{c}{**Room-for-Room**} \\ \hline & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) & TL & GP \(\uparrow\) & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) \\ \hline DUET [10] & 13.94 & 3.31 & 72 & 60 & - & - & - & - & - & - \\ DUET-CLIP & 12.92 & 3.19 & 72.84 & 63.37 & 24.09 & 5.50 & 21.04 & 6.06 & 46.61 & 41.94 \\ PanoGen+Replace & 13.76 & **2.99** & **74.41** & **63.88** & **23.29** & **5.63** & 18.62 & **6.02** & **47.78** & **44.25** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results for replacing the original environment with our panoramic observation during fine-tuning on validation unseen set for R2R, CVDN and R4R datasets.
### Effectiveness of Speaker Data in Pre-training
In this section, we show the effectiveness of utilizing the speaker data generated for our PanoGen environments for VLN pre-training. As shown in Table 2, pre-training the VLN agent with our speaker data ("PanoGen+mPLUG") improves the baseline ("DUET-CLIP") by 1.36% in success rate, and 0.2 meters in navigation error on the R2R dataset. Besides, we observe a 0.43-meter absolute improvement in goal progress (a relative gain of 7.8%) on the CVDN validation unseen set. The large improvements on CVDN demonstrate that the agent learns useful commonsense knowledge from the diverse visual environments in our PanoGen, and thus generalizes well to unseen environments when navigating based on under-specified instructions in the CVDN dataset. Pre-training with our speaker data also shows a slight improvement on R4R, improving the SPL by 0.58%.
Lastly, we compare our speaker ("PanoGen+mPLUG") with the widely used speaker from EnvDrop [47] ("PanoGen+EnvDrop"). We find that utilizing either speaker's data improves the baseline performance on the R2R and CVDN datasets, demonstrating the effectiveness of our PanoGen environments for improving agents' generalization to unseen environments. Besides, pre-training with speaker data generated with mPLUG shows a better success rate on the R2R and R4R datasets and higher goal progress on the CVDN dataset, demonstrating the effectiveness of our speaker.
### Effectiveness of Panorama Replacement in Fine-tuning
We demonstrate that using the environments from our PanoGen as observation augmentation during fine-tuning can improve agents' generalization to unseen environments. As shown in Table 3, randomly replacing observations in the trajectory with our panoramic environments ("PanoGen+Replace") improves the navigation performance on all three datasets. Specifically, our approach improves the SR by 1.57% on the R2R dataset, the goal progress by 0.13 meters on the CVDN dataset, and the SPL by 2.31% on the R4R dataset. The consistent improvements on all three datasets demonstrate the usefulness of our PanoGen environments.
We further show that it is important to balance the ratio of replaced observations in the full navigation trajectory. As shown in Table 4, replacing 30% of the viewpoints in the navigation trajectory achieves the best performance. If we replace a large ratio of viewpoints with our PanoGen environments, the trajectory will not align well with the instruction, and thus the improvement on the R2R validation unseen set is smaller.
### Impact of the Number of Panoramic Environments
In this section, we investigate the impact of the number of panoramic environments used during VLN fine-tuning. Specifically, we randomly select 10 scans and 30 scans out of the 61 scans in the R2R training environments. During VLN fine-tuning, if the navigation trajectory belongs to these scans, we replace the original observation with our generated panoramic environments with a fixed probability. As shown in Table 5, we observe that training with more panoramic environments consistently improves the performance (No. 1 - 4). Furthermore, for every panorama in the original R2R training environments, we generate one more panoramic view with our PanoGen. In this case, the number of panoramic environments we generate is twice the number of the original R2R training environments. As shown in Table 5, we observe that the gain
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**No.** & **Ratio** & \multicolumn{4}{c}{**Validation Unseen**} \\ \hline & & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) \\ \hline
1 & 0.0 & 12.92 & 3.19 & 72.84 & 63.37 \\
2 & 0.1 & 13.16 & 3.16 & 72.84 & 63.24 \\
3 & 0.3 & 13.76 & **2.99** & **74.41** & 63.88 \\
4 & 0.5 & 13.03 & 3.19 & 72.84 & 63.84 \\
5 & 0.7 & 12.62 & 3.18 & 72.33 & **63.93** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of replacing different ratio of the viewpoints in the trajectory with the panoramic environments generated with our PanoGen on Room-to-Room validation unseen set.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**No.** & **\# Scans** & \multicolumn{4}{c}{**Validation Unseen**} \\ \hline & & TL & NE \(\downarrow\) & SR \(\uparrow\) & SPL \(\uparrow\) \\ \hline
1 & 0 & 12.92 & 3.19 & 72.84 & 63.37 \\
2 & 10 & 13.94 & 3.00 & 72.80 & 62.48 \\
3 & 30 & 13.86 & 3.05 & 73.69 & 62.88 \\
4 & 61 & 13.76 & **2.99** & **74.41** & 63.88 \\
5 & 122 & 13.40 & 3.03 & 74.20 & **64.27** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of replacing the original environments with different number of scans of our panoramic environments.
in SPL has not yet saturated (No. 5 vs No. 4), and gradually increases as we add more PanoGen environments for VLN fine-tuning. This suggests that it is promising to generate more panoramic environments with our approach to further enhance agents' generalizability to unseen environments.
## 7 Conclusion
In this paper, we propose PanoGen, a generation approach which can potentially create infinitely diverse panoramic environments conditioned on text. Specifically, we propose a recursive image outpainting method that gradually generates the missing observations in the panorama based on text captions, producing coherent panoramas with diverse objects and room layouts. We then investigate two training methods to effectively utilize the generated panoramic environments during VLN pre-training and fine-tuning. Learning from our PanoGen achieves new SotA on both the CVDN and R2R datasets, demonstrating its effectiveness for enhancing agents' generalization to unseen environments. **Limitations & Broader Impacts.** See Appendix for limitations and broader impacts discussion.
## 8 Acknowledgement
This work was supported by ARO W911NF2110220, ONR N00014-23-1-2356, DARPA KAIROS FA8750-19-2-1004, and DARPA MCS N66001-19-2-403. The views contained in this article are those of the authors and not of the funding agency.
|
2308.07357 | Demonstration of CORNET: A System For Learning Spreadsheet Formatting
Rules By Example | Data management and analysis tasks are often carried out using spreadsheet
software. A popular feature in most spreadsheet platforms is the ability to
define data-dependent formatting rules. These rules can express actions such as
"color red all entries in a column that are negative" or "bold all rows not
containing error or failure." Unfortunately, users who want to exercise this
functionality need to manually write these conditional formatting (CF) rules.
We introduce CORNET, a system that automatically learns such conditional
formatting rules from user examples. CORNET takes inspiration from inductive
program synthesis and combines symbolic rule enumeration, based on
semi-supervised clustering and iterative decision tree learning, with a neural
ranker to produce accurate conditional formatting rules. In this demonstration,
we show CORNET in action as a simple add-in to Microsoft Excel. After the user
provides one or two formatted cells as examples, CORNET generates formatting
rule suggestions for the user to apply to the spreadsheet. | Mukul Singh, Jose Cambronero, Sumit Gulwani, Vu Le, Carina Negreanu, Gust Verbruggen | 2023-08-14T14:15:03Z | http://arxiv.org/abs/2308.07357v1 | # Cornet: Learning Spreadsheet Formatting Rules By Example
###### Abstract.
Data management and analysis tasks are often carried out using spreadsheet software. A popular feature in most spreadsheet platforms is the ability to define data-dependent formatting rules. These rules can express actions such as "_color red all entries in a column that are negative_" or "_bold all rows not containing error or failure_". Unfortunately, users who want to exercise this functionality need to manually write these conditional formatting (CF) rules. We introduce Cornet, a system that automatically learns such conditional formatting rules from user examples. Cornet takes inspiration from inductive program synthesis and combines symbolic rule enumeration, based on semi-supervised clustering and iterative decision tree learning, with a neural ranker to produce accurate conditional formatting rules. In this demonstration, we show Cornet in action as a simple add-in to Microsoft's Excel. After the user provides one or two formatted cells as examples, Cornet generates formatting rule suggestions for the user to apply to the spreadsheet.
|
2303.15421 | ACAT: Adversarial Counterfactual Attention for Classification and
Detection in Medical Imaging | In some medical imaging tasks and other settings where only small parts of
the image are informative for the classification task, traditional CNNs can
sometimes struggle to generalise. Manually annotated Regions of Interest (ROI)
are sometimes used to isolate the most informative parts of the image. However,
these are expensive to collect and may vary significantly across annotators. To
overcome these issues, we propose a framework that employs saliency maps to
obtain soft spatial attention masks that modulate the image features at
different scales. We refer to our method as Adversarial Counterfactual
Attention (ACAT). ACAT increases the baseline classification accuracy of
lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related
findings in lung CT scans from 67.71% to 70.84% and exceeds the performance of
competing methods. We investigate the best way to generate the saliency maps
employed in our architecture and propose a way to obtain them from
adversarially generated counterfactual images. They are able to isolate the
area of interest in brain and lung CT scans without using any manual
annotations. In the task of localising the lesion location out of 6 possible
regions, they obtain a score of 65.05% on brain CT scans, improving the score
of 61.29% obtained with the best competing method. | Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey | 2023-03-27T17:43:57Z | http://arxiv.org/abs/2303.15421v2 | # ACAT: Adversarial Counterfactual Attention for Classification and Detection
###### Abstract
In some medical imaging tasks and other settings where only small parts of the image are informative for the classification task, traditional CNNs can sometimes struggle to generalise. Manually annotated Regions of Interest (ROI) are sometimes used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as _Adversarial Counterfactual Attention_ (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from \(71.39\%\) to \(72.55\%\) and of COVID-19 related findings in lung CT scans from \(67.71\%\) to \(70.84\%\) and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. They are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of \(65.05\%\) on brain CT scans, improving the score of \(61.29\%\) obtained with the best competing method.
## 1 Introduction
In computer vision classification problems, it is often assumed that an object that represents a class occupies a large part of an image. However, in other image domains, such as medical imaging or histopathology, only a small fraction of the image contains information that is relevant for the classification task (Kimeswenger et al., 2019). With object-centric images, using wider contextual information (e.g. planes fly in the sky) and global features can aid the classification decision. In medical images, variations in parts of the image away from the local pathology are often normal, and using any apparent signal from such regions is usually spurious and unhelpful in building robust classifiers. Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016; Szegedy et al., 2017; Huang et al., 2017) can struggle to generalise well in such settings, especially when training cannot be performed on a very large amount of data (Pawlowski et al., 2019). This is at least partly because the convolutional structure necessitates some additional 'noisy' statistical response to filters away from the informative 'signal' regions. Because the 'signal' response region is small, and the noise region is potentially large, this can result in low signal-to-noise in convolutional networks, impacting performance.
To help localisation of the most informative parts of the image in medical imaging applications, _Region Of Interest_ (ROI) annotations are often collected (Cheng et al., 2011; Papanastasopoulos et al., 2020). However, these annotations require expert knowledge, are expensive to collect, and opinions on ROI of a particular case may vary significantly across annotators (Grunberg et al., 2017).
Alternatively, attention systems could be applied to locate the critical regions and aid classification. Previous work has explored the application of attention mechanisms over image features, either aiming to capture the spatial relationship between features (Bell et al., 2016; Newell et al., 2016; Santoro et al., 2017), the channel relationship (Hu et al., 2018) or both (Woo et al., 2018; Wang et al., 2017). Other authors employed self-attention to model non-local properties of images (Wang et al., 2018; Zhang et al., 2019). However, in our experiments, attention methods applied on the image features failed to improve the baseline accuracy in brain and lung CT scan classification. Other authors employed saliency maps to promote the isolation of the most informative regions during training of a classification network. They sometimes employed target ground-truth maps to generate these saliency maps (Murabito et al., 2018). Moreover, by fusing salient information with the image branch at a single point of the network (Murabito et al., 2018; Flores et al., 2019; Figueroa-Flores et al., 2020), these approaches may miss important data. Indeed, when the signal is low, key information could be captured by local features at a particular stage of the network, but not by features at a different scale. For this reason, in our architecture, as shown in Figure 1, we employ the saliency maps to obtain soft spatial attention masks that modulate the image features at different stages of the network and also combine the attention masks through an attention fusion layer. This architecture allows the network to capture information at different scales and to better inform its final decision. Moreover, it makes the model more robust to perturbations of the inputs by reducing the variance of the pre-activations of the network (cf. Section 4.6).
Finally, we investigate the best technique to generate the saliency maps that are needed for our architecture and we find that the use of counterfactual images, acquired with a technique similar to adversarial attacks (Huang et al., 2017), is able to highlight useful information about a particular patient's case. In particular, for generating counterfactual examples, we employ an autoencoder and a trained classifier to find the minimal movement in latent space that shifts the input image towards the target class, according to the output of the classifier.
The main contributions of this paper are the following: 1) we propose ACAT, a framework that employs saliency maps as attention mechanisms at different scales, and we show that it makes the network more robust to input perturbations, improves the baseline classification accuracy in two medical imaging tasks (from \(71.39\%\) to \(72.55\%\) on brain CT scans and from \(67.71\%\) to \(70.84\%\) on lung CT scans) and exceeds the performance of competing methods; 2) we show how ACAT can also be used to evaluate saliency generation methods; 3) we investigate how different methods to generate saliency maps are able to isolate small areas of interest in large images and, to better accomplish this task, we introduce a method to generate counterfactual examples, from which we obtain saliency maps that outperform competing methods in localising the lesion location out of 6 possible regions in brain CT scans (achieving a score of \(65.05\%\) vs. \(61.29\%\) obtained with the best competing method).
## 2 Related Work
An overview of the methods used to generate saliency maps and counterfactual examples can be found in (Guidotti, 2022) and (Linardatos et al., 2020) respectively. Here, we briefly summarise some of the approaches most commonly used in medical imaging.
**Saliency maps** Saliency maps are a tool often employed by researchers for post-hoc interpretability of neural networks. They help to interpret CNN predictions by highlighting pixels that are important for model predictions. Simonyan et al. (2013) compute the gradient of the score of the class of interest with respect to the input image. The Guided Backpropagation method (Springenberg et al., 2014) only backpropagates positive gradients, while the Integrated Gradient method (Sundararajan et al., 2017) integrates gradients between the input image and a baseline black image. In SmoothGrad (Smilkov et al., 2017), the authors propose to smooth the gradients through a Gaussian kernel. Grad-CAM (Selvaraju et al., 2017) builds on the Class Activation Mapping (CAM) (Zhou et al., 2016) approach and uses the gradients of the score of a certain class with respect to the feature activations of the last convolutional layer to calculate the importance of the spatial locations.

Figure 1: Architecture of the framework proposed for 3D volumes. The slices of each volume are first processed separately and then combined by applying an attention module over the slices. For each volume we also consider as input the corresponding saliency map. From the saliency branch, we obtain soft spatial attention masks that are used to modulate the image features. The salient attention modules capture information at different scales of the network and are combined through an attention fusion layer to better inform the final classification.
**Counterfactuals for visual explanation** Methods that generate saliency maps using the gradients of the predictions of a neural network have some limitations. Some of these methods have been shown to be independent of the model parameters and the training data (Adebayo et al., 2018; Arun et al., 2021) and not reliable in detecting the key regions in medical imaging (Eitel et al., 2019; Arun et al., 2021). For this reason, alternative methods based on the generation of counterfactuals for visual explanation have been developed. They are usually based on a mapping that is learned between images of multiple classes to highlight the areas more relevant for the class of each image. The map is modeled as a CNN and is trained using a Wasserstein GAN (Baumgartner et al., 2018) or a Conditional GAN (Singla et al., 2021). Closest to our proposed approach for generating counterfactuals is the latent shift method by Cohen et al. (2021). An autoencoder and classifier are trained separately to reconstruct and classify images respectively. Then, the input images are perturbed to create \(\lambda\)-shifted versions of the original image that increase or decrease the probability of a class of interest according to the output of the classifier.
**Saliency maps to improve classification and object detection** Previous work has tried to incorporate saliency maps to improve classification or object detection performance in neural networks. Ren et al. (2013) used saliency maps to weigh features. Murabito et al. (2018) introduced SalClassNet, a framework consisting of two CNNs jointly trained to compute saliency maps from input images and using the learned saliency maps together with the RGB images for classification tasks. In particular, the saliency map generated by the first CNN is concatenated with the input image across the channel dimension and fed to the second network that is trained on a classification task. Flores et al. (2019) proposed to use a network with two branches: one to process the input image and the other to process the corresponding saliency map, which is pre-computed and given as input. The two branches are fused through a modulation layer which performs an element-wise product between saliency and image features. They observe that the gradients which are back-propagated are concentrated on the regions which have high attention. In (Figueroa-Flores et al., 2020) the authors use the same modulation layer, but replace the saliency branch that was trained with pre-computed saliency images with a branch that is used to learn the saliency maps, given the RGB image as input.
**Adversarial examples and adversarial training** Machine learning models have been shown to be vulnerable to adversarial examples (Papernot et al., 2016). These are created by adding perturbations to the inputs to fool a learned classifier. They resemble the original data but are misclassified by the classifier (Szegedy et al., 2013; Goodfellow et al., 2014). Approaches proposed for the generation of adversarial examples include gradient methods (Kurakin et al., 2018; Moosavi-Dezfooli et al., 2016) and generative methods (Zhao et al., 2017). In Qi et al. (2021), the authors propose an adversarial attack method to produce adversarial perturbations on medical images employing a loss deviation term and a loss stabilization term. In general, adversarial examples and counterfactual explanations can be created with similar methods. Adversarial training, in which each minibatch of training data is augmented with adversarial examples, promotes adversarial robustness in classifiers (Madry et al., 2017). Tsipras et al. (2018) observe that gradients for adversarially trained networks are well aligned with perceptually relevant features. However, adversarial training usually also decreases the accuracy of the classifier (Raghunathan et al., 2019; Etmann et al., 2019).
## 3 Methods
We wish to automatically generate and make use of RoI information in the absence of hand-labelled annotations. In order to do so, we employ saliency maps that are given as input and processed by the saliency branch of our architecture (see Figure 1). The saliency features are used to produce attention masks that modulate the image features. The salient attention modules capture information at different scales of the network and are combined through an attention fusion layer to better inform the final classification. In Figure 2, we show the saliency map and the attention masks obtained with a trained network on a brain scan. As we can observe, the saliency map is sparse and covers broad areas of the scan. On the other hand, the attention masks progressively refine the RoI emphasised by the original saliency map, better highlighting the area of interest.
### Saliency based attention
We learn to process saliency maps into multiple levels of attention modules to learn better local features and improve the classification accuracy. We do so through a saliency branch, which has attention modules that learn how to handle the salient information coming into the system and use it to obtain soft spatial attention masks that modulate the image features. In particular, with reference to Figure 1, we consider a network with two branches, one for the original input images and the other for the corresponding saliency maps, which are pre-computed and fixed during training of the network. Given \(S^{i}\in\mathbb{R}^{C\times H\times W}\) features of the
saliency branch at layer \(i\), we first pool the features over the channel dimension to obtain \(S^{i}_{p}\in\mathbb{R}^{1\times H\times W}\). Either average or max-pooling can be applied; however, in preliminary experiments we found max-pooling to obtain slightly better performance. A convolution with \(3\times 3\) filters is applied on \(S^{i}_{p}\), followed by a sigmoid activation, to obtain soft spatial attention masks based on salient features \(S^{i}_{s}\in\mathbb{R}^{1\times H\times W}\). Finally, the features of the image branch at layer \(i\), \(F^{i}\in\mathbb{R}^{C\times H\times W}\), are softly modulated by \(S^{i}_{s}\) in the following way:
\[F^{i}_{o}=F^{i}\odot S^{i}_{s} \tag{1}\]
where \(\odot\) is the Hadamard product, in which the spatial attention values are broadcast along the channel dimension, and \(F^{i}_{o}\) are the modulated features for the \(i\)-th layer of the image branch. We also introduce skip connections between \(F^{i}\) and \(F^{i}_{o}\) to prevent gradient degradation and distill information from the attention features, while also giving the network the ability to bypass spurious signal coming from the attention mask. Therefore, the output of the image branch at layer \(i\) is given by \(G^{i}=F^{i}+F^{i}_{o}\).
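A minimal PyTorch sketch of this salient attention module is given below, assuming max-pooling over channels as described; the layer names are illustrative.

```python
import torch
import torch.nn as nn

class SalientAttention(nn.Module):
    """Soft spatial attention from saliency features, with skip connection:
    G^i = F^i + F^i * sigmoid(conv3x3(maxpool_channels(S^i)))."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, image_feats, saliency_feats):
        pooled, _ = saliency_feats.max(dim=1, keepdim=True)  # channel max-pool -> (B,1,H,W)
        mask = torch.sigmoid(self.conv(pooled))              # S^i_s in [0, 1]
        modulated = image_feats * mask                       # broadcast over channels
        return image_feats + modulated, mask                 # skip connection
```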
The attention mask not only modulates the image features during a forward pass of the network, but can also cancel noisy signal coming from the image features during back-propagation. Indeed, if we compute the gradient of \(G^{i}\) with respect to the image parameters \(\theta\), we obtain:
\[\frac{\partial G^{i}(\theta;\eta)}{\partial\theta}=\frac{\partial\left[F^{i}(\theta)+F^{i}(\theta)\odot S^{i}_{s}(\eta)\right]}{\partial\theta}=\frac{\partial F^{i}(\theta)}{\partial\theta}\left(S^{i}_{s}(\eta)+1\right) \tag{2}\]
where \(\eta\) are the saliency parameters.
#### 3.1.1 Fusion of attention masks
Previous work attempting to exploit saliency maps in classification tasks has fused salient information with the image branch at a single point of the network, either directly concatenating attribution maps with the input images (Murabito et al., 2018) or after a few layers of pre-processing (Flores et al., 2019; Figueroa-Flores et al., 2020). On the other hand, we position our salient attention modules at different stages of the network, in order to capture information at different scales. This is particularly important in low signal-to-noise tasks, where the key information could be captured by local features at a particular stage of the network, but not by features at a different scale. For this reason, we use three attention modules, after early, middle and late layers of the network. Given \(S^{e}_{s}\), \(S^{m}_{s}\) and \(S^{l}_{s}\) the corresponding spatial attention masks, we also reduce their height and width to \(H^{\prime}\) and \(W^{\prime}\) through average pooling, obtaining \(S^{e}_{s,p}\), \(S^{m}_{s,p}\) and \(S^{l}_{s,p}\) respectively. Then, we concatenate them along the channel dimension, obtaining \(S_{s,p}\in\mathbb{R}^{3\times H^{\prime}\times W^{\prime}}\). An attention fusion layer \(L_{f}\) takes \(S_{s,p}\) as input and generates a fused spatial mask \(S_{f}\in\mathbb{R}^{1\times H^{\prime}\times W^{\prime}}\) by weighting the three attention masks depending on their relative importance. This final attention mask is applied before the fully-connected classification layers, so that if critical information was captured in early layers of the network, it can better inform the final decision of the network. In practice, \(L_{f}\) is implemented as a \(1\times 1\) convolution. In Section 4.5 we perform ablation studies to evaluate the contribution of each component of our network and demonstrate that all the components described are required to achieve the best results.
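The fusion layer can be sketched as follows; whether a squashing nonlinearity follows the \(1\times 1\) convolution is not specified in the text, so the sigmoid here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Pools the early/middle/late masks to a common H' x W' resolution and
    fuses them into a single mask S_f with a 1x1 convolution."""
    def __init__(self, out_size):
        super().__init__()
        self.out_size = out_size                  # (H', W')
        self.fuse = nn.Conv2d(3, 1, kernel_size=1)

    def forward(self, masks):                     # three (B, 1, H, W) masks
        pooled = [F.adaptive_avg_pool2d(m, self.out_size) for m in masks]
        stacked = torch.cat(pooled, dim=1)        # (B, 3, H', W')
        return torch.sigmoid(self.fuse(stacked))  # sigmoid is an assumption
```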
### Generation of saliency maps
Figure 2: Image with lesion indicated by the red arrow (a) and pixels in the \(95^{th}\) percentile of the saliency map (b) and spatial attention masks obtained after early (c), middle (d) and late (e) convolutional layers. The attention masks progressively tweak the original saliency map, focusing more precisely on the area of interest.

In order to detect regions of interest in medical images, we generate counterfactual examples for each datum and use the difference with the original image to generate a saliency map highlighting important information. In particular, given a dataset \(\mathcal{D}=(x^{i};i=1,2,\ldots,N_{D})\) of size \(N_{D}\) consisting of input images \(x^{i}\), along with corresponding class labels \(\mathcal{T}=(y^{i};i=1,2,\ldots,N_{D})\), counterfactual explanations describe the change that has to be applied to an input for the decision of a black-box model to flip. Let \(f\) be a neural network that outputs a probability distribution over classes, and let \(\hat{y}^{i}\) be the class assigned maximum probability by \(f\). A counterfactual explanation displays how \(x^{i}\) should be modified in order to be classified by the network as belonging to a different class of interest \(\bar{y}^{i}\) (the counterfactual class). In order to generate saliency maps, we can consider the difference between the original image and the counterfactual image of the opposite class. For example, to compute the saliency map of a brain scan with a stroke lesion, we could generate a counterfactual example that is classified by \(f\) as not having a stroke lesion. In this way, we are able to visualise the pixels with the biggest variation between the two samples, which are the most important for the classification outcome. However, when using saliency maps to improve the classification capability of our network, at test time we do not have access to class labels. For this reason, to compute saliency maps in a class-agnostic way, we consider the counterfactual examples of both classes (positive and negative) and then compute the absolute difference between the original image and each counterfactual image to get two attribution maps. These are then normalised in \([0,1]\) and averaged to obtain the final saliency map that can be used in the classification pipeline.
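In code, the class-agnostic construction reads as the sketch below, where `counterfactual_fn(x, target_class)` is the latent-space optimisation described next (its exact signature is an assumption for illustration):

```python
import torch

def class_agnostic_saliency(x, counterfactual_fn):
    """Average of normalised absolute differences between the input and its
    counterfactuals toward both classes; no label is needed at test time."""
    maps = []
    for target_class in (0, 1):                    # negative and positive class
        cf = counterfactual_fn(x, target_class)
        diff = (x - cf).abs()
        diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)  # to [0, 1]
        maps.append(diff)
    return torch.stack(maps).mean(dim=0)
```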
As discussed, gradient-based counterfactual changes to image pixels can simply produce adversarial attacks. We alleviate this by targeting gradients in the latent space of an autoencoder. Therefore, in addition to the network \(f\), trained to classify images in \(\mathcal{D}\), we exploit an autoencoder trained to reconstruct the same inputs. An image \(x^{j}\in\mathcal{D}\) can be mapped to latent space through the encoder \(E\): \(E(x^{j})=z^{j}\), which can then be mapped back to image space via the decoder \(D\): \(x^{\prime j}=D(z^{j})\). Suppose, without loss of generality, that the counterfactual example we are interested in belongs to a single target class. The neural network can be applied to the decoder output: we denote the output of \(f(D(z^{j}))\) as a normalised probability vector \(d(z^{j})=(d_{1}(z^{j}),\ldots,d_{K}(z^{j}))\in\mathbb{R}^{K}\), where \(K\) is the number of classes. Suppose that \(f(x^{j})\) outputs maximum probability for class \(l\) and we want to shift the prediction of \(f\) towards a desired class \(m\), with \(l,m\in\mathbb{N}:l,m\in[1,K]\). To do so, we can take gradient steps in the latent space of the autoencoder from the initial position \(z^{j}\) to shift the class distribution towards the desired target vector \(t=(t_{1},\ldots,t_{K})\in\mathbb{R}^{K}\), where \(t_{i}=\mathbf{1}_{i=m}\) for \(i=1,\ldots,K\). That is, we minimise the cross-entropy loss between the output of our model, given \(D(z^{j})\) as input, and the target vector:
\[L(d(z^{j}),t)=-\sum_{k=1}^{K}t_{k}\log(d_{k}(z^{j})). \tag{3}\]
Moreover, we aim to keep the counterfactual image as close as possible to the original image in latent space, so that the transformation only captures changes that are relevant for the class shift. Otherwise, simply optimising Eq. (3) could
Figure 3: (a) Ischaemic stroke lesion appears darker than normal brain. Sample saliency maps averaged over slices obtained with our approach (b), the latent shift method (c), the Gradient method (d) and Grad-Cam (e).
lead to substantial changes in the image that compromise its individual characteristics. Therefore, we also include, as part of the objective, the \(L_{1}\) norm between the latent spaces of the original image \(x^{j}\) and the counterfactual image: \(||z-E(x^{j})||_{L_{1}}\). Putting things together, we wish to find the minimum of the function:
\[g(z)=L(d(z),t)+\alpha||z-E(x^{j})||_{L_{1}} \tag{4}\]
where \(\alpha\) is a hyperparameter that was set to \(100\) in our experiments. We can minimise this function by running gradient descent for a fixed number of steps (20 in our experiments). Then, for the minimiser of Eq. (4), denoted by \(z^{\prime}\), the counterfactual example is given by \(D(z^{\prime})\).
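A minimal sketch of this latent-space optimisation is given below, assuming plain SGD with an illustrative learning rate and a classifier `f` that outputs class probabilities for a batch; \(\alpha=100\) and the 20 steps follow the values above.

```
import torch
import torch.nn.functional as F

def counterfactual(x, target_class, f, E, D, alpha=100.0, steps=20, lr=0.1):
    """Minimise g(z) = L(d(z), t) + alpha * ||z - E(x)||_1 by gradient
    descent in the latent space of the autoencoder (Eq. 4). Assumes f
    returns class probabilities d(z); lr is an illustrative choice."""
    z0 = E(x).detach()
    z = z0.clone().requires_grad_(True)
    target = torch.tensor([target_class])
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        log_probs = torch.log(f(D(z)) + 1e-12)      # log d(z)
        loss = F.nll_loss(log_probs, target)        # cross-entropy with one-hot t (Eq. 3)
        loss = loss + alpha * (z - z0).abs().sum()  # stay close in latent space
        loss.backward()
        opt.step()
    return D(z).detach()                            # counterfactual image D(z')
```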
By defining an optimisation procedure over the latent space that progressively optimises the target classification probability of the reconstructed image, we are able to explain the predictions of the classifier and obtain adequate counterfactuals. A bound on the distance between original and counterfactual images in latent space is also important to keep the generated samples within the data manifold.
## 4 Experiments
### Data
We performed our experiments on two datasets: IST-3 (Sandercock et al., 2011) and MosMed (Morozov et al., 2020). Both datasets were divided into training, validation and test sets with a 70-15-15 split and three runs with different random seeds were performed. More details about the data are provided in Appendix A.
### Experimental setup
The baseline model for the classification of stroke lesions in CT scans of the brain employs the same base multi-task learning (MTL) architecture of Anonymous Author (s), while for the classification of lung CT scans, we employed a ResNet-50 architecture (with 4 convolutional blocks). Further details about the architectures are provided in Appendix B. In our framework, the attention branches follow the same architecture as the baseline models (without the classification layers). In the MTL model, the attention layers are added after the first, third and fifth convolutional layers. For the ResNet architecture, attention modules are added after each of the first three convolutional blocks. The attention fusion layer is always placed after the last convolutional layer of each architecture. Moreover, instead of averaging the slices of each scan, in our framework we consider an attention mask over slices. This mask is obtained from the image features with an MLP with one hidden layer. The hidden layer is followed by a leaky ReLU activation and dropout with \(p=0.1\). After the output layer of the MLP, we apply a sigmoid function to get the attention mask. Further training details are provided in Appendix C.
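A sketch of the slice-attention module is shown below; the hidden dimension and the attention-weighted sum used to aggregate slices are illustrative assumptions.

```
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    """MLP with one hidden layer producing an attention weight per scan
    slice, used instead of plain averaging over slices."""

    def __init__(self, feat_dim, hidden_dim=128):  # hidden size is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.LeakyReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, slice_feats):                  # (B, n_slices, feat_dim)
        att = torch.sigmoid(self.mlp(slice_feats))   # attention mask over slices
        # Weighted-sum aggregation is an assumption of this sketch.
        return (att * slice_feats).sum(dim=1)        # (B, feat_dim)
```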
### Classification results
We compare the proposed framework with competing methods incorporating saliency maps into the classification pipeline, methods employing attention from the input image features, a vision transformer and the baseline model trained without saliency maps, on the classification of brain and lung CT scans. In the former case, the possible classes are: no lesion, lesion in the left half of the brain, lesion in the right half of the brain, or lesions on both sides. In the latter case, we perform binary classification between scans with or without COVID-19 related findings. In methods where saliency maps are needed, we always compute them with our approach, for a fair comparison of the different architectures. In particular, we compare our method with saliency-modulated image classification (SMIC) (Flores et al., 2019), SalClassNet (Murabito et al., 2018), hallucination of saliency maps (HSM) (Figueroa-Flores et al., 2020), spatial attention from the image features (SpAtt), self-attention (SeAtt) and the vision transformer (ViT) (Dosovitskiy et al., 2020). Implementation details are provided in Appendix E.
As we can observe in Table 2, our approach improves the average classification accuracy of the baseline from \(71.39\%\) to \(72.55\%\) on IST-3 and from \(67.71\%\) to \(70.84\%\) on MosMed.
\begin{table}
\begin{tabular}{||c c c c c c||}
\hline
 & No Lesion & IS-1 & IS-2 & IS-3 & IS-4 \\
\hline
Baseline & \(81.41\%\) & \(23.66\%\) & \(54.16\%\) & \(\mathbf{72.09\%}\) & \(87.74\%\) \\
SMIC & \(79.24\%\) & \(25.55\%\) & \(54.82\%\) & \(65.71\%\) & \(\mathbf{88.36\%}\) \\
SalClassNet & \(76.71\%\) & \(29.24\%\) & \(54.48\%\) & \(64.95\%\) & \(82.71\%\) \\
HSM & \(80.37\%\) & \(27.28\%\) & \(53.86\%\) & \(71.60\%\) & \(89.10\%\) \\
SpAtt & \(82.56\%\) & \(21.33\%\) & \(51.58\%\) & \(67.86\%\) & \(86.77\%\) \\
SeAtt & \(83.49\%\) & \(27.03\%\) & \(52.05\%\) & \(65.54\%\) & \(84.42\%\) \\
ViT & \(76.79\%\) & \(11.67\%\) & \(41.04\%\) & \(53.12\%\) & \(61.54\%\) \\
ACAT (Ours) & \(\mathbf{84.30\%}\) & \(\mathbf{30.23\%}\) & \(\mathbf{55.02\%}\) & \(68.67\%\) & \(84.93\%\) \\
\hline
\end{tabular}
\end{table}
Table 1: Test accuracy by infarct size. Our framework, ACAT, improves on the performance of competing methods in detecting scans with no infarct lesion and with small and medium lesions (sizes 1-2).
Our framework is also the best performing in both cases. SMIC performs slightly worse than the baseline on IST-3 (with \(70.85\%\) accuracy) and better on MosMed (with \(69.27\%\) accuracy). HSM is close to the baseline results on IST-3 but worse than the baseline on MosMed, while SalClassNet is worse than the baseline on both tasks. The methods incorporating attention from the image features also have similar or worse performance than the baseline, highlighting how the use of attention from the saliency maps is key for the method to work. ViT obtains the worst performance on IST-3, confirming the results from previous work that vision transformers often require a very large amount of training data to learn good visual representations (Neyshabur, 2020) and are often outperformed by CNNs on medical imaging tasks (Matsoukas et al., 2021). While it is easier to detect large stroke lesions, these can also be detected easily by humans. For this reason, we aim to test the capabilities of these models to flag scans with very subtle lesions. To do so, we evaluate their classification accuracy by infarct size (IS). As we can observe in Table 1, our approach obtains the best classification performance on the scans with no infarct lesion, as well as on small and medium lesions (sizes 1-2). This confirms how our saliency-based attention mechanism promotes the learning of local features that better detect subtle areas of interest.
### Evaluation of saliency maps
We evaluate quantitatively how well the saliency maps generated with our approach described in Section 3.2, the latent shift method (Cohen et al., 2021), the gradient method (Simonyan et al., 2013) and Grad-CAM (Selvaraju et al., 2017) detect the areas related to the stroke lesion. The maps were created employing the baseline model and positive scans which were not used during training. In particular, we generated negative counterfactuals with our approach and the latent shift method and computed the difference between the original image and the generated images to obtain the saliency maps. Grad-CAM is applied using the last convolutional layer of the network. The lesion location, which is used for evaluation but is not known to the network, is one of 6 classes: MCA left, MCA right, ACA left, ACA right, PCA left, PCA right. The attribution maps are evaluated as in Zhang et al. (2018), with the formula \(S=\frac{Hits}{Hits+Misses}\). A hit is counted if the pixel with the greatest value in each CT scan lies in the correct region; a miss is counted otherwise. The saliency maps generated with our approach obtain the highest average score of \(65.05\%\) (with 2.03 standard error), improving on the scores of \(58.39\%\) (2.00) and \(61.29\%\) (2.06) obtained with the latent shift and the gradient methods respectively. Grad-CAM has the worst score, with \(11.67\%\) (1.28). Sample saliency maps are shown in Figure 3 with a red colour map. The red arrows indicate the lesion regions, which appear as a 'shaded' area in the scans.
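The score can be computed as in the following sketch, assuming binary arrays marking the annotated lesion regions:

```
import numpy as np

def pointing_score(saliency_maps, lesion_masks):
    """S = Hits / (Hits + Misses): a hit is counted when the pixel with the
    greatest saliency value lies inside the annotated lesion region."""
    hits = 0
    for sal, mask in zip(saliency_maps, lesion_masks):
        peak = np.unravel_index(np.argmax(sal), sal.shape)
        hits += bool(mask[peak])  # mask is a binary array marking the region
    return hits / len(saliency_maps)
```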
ACAT further improves the lesion detection capabilities of saliency maps. Indeed, if we re-compute the saliency maps with our approach, using ACAT as the classifier to generate the counterfactuals, we obtain a score of \(68.55\%\) (1.94) without using the class labels. In fact, the saliency maps are generated by averaging the absolute differences between the original image and the counterfactual examples of both classes (positive and negative).
### Ablation studies
On IST-3, we compare the performance of ACAT when saliency maps obtained with different approaches are employed. Using saliency maps obtained with our approach yields the highest accuracy of \(72.55\%\) (\(0.72\)). The relative ranking of the saliency generation approaches matches the one obtained with the evaluation score presented in Section 4.4: the gradient method obtains \(72.16\%\) (\(0.88\)) accuracy, the latent shift method \(72.04\%\) (\(1.07\)) and Grad-CAM \(69.42\%\) (\(1.19\)).
On MosMed, we ablate the components of our architecture. In the proposed approach, attention masks are obtained from the saliency branch at three different stages of the network (early, middle and late) and finally an attention fusion layer weighs the three masks and is applied before the classification layers. Therefore, we progressively removed the fusion layer, the late attention mask and the middle attention mask to test the contribution of each component. While the classification accuracy of the full ACAT architecture was \(70.84\%(1.53)\), by removing the attention fusion layer it decreased to \(69.79\%(2.78)\). Moreover, by also removing the late attention layer it further decreased to \(68.75\%(1.48)\), reaching \(68.23\%(0.85)\) when the middle attention layer was eliminated as well.
\begin{table}
\begin{tabular}{||c c c||}
\hline
 & IST-3 & MosMed \\
\hline
Baseline & \(71.39\%\) \((0.23)\) & \(67.71\%\) \((3.48)\) \\
SMIC & \(70.85\%\) \((0.63)\) & \(69.27\%\) \((1.13)\) \\
SalClassNet & \(69.43\%\) \((1.81)\) & \(62.50\%\) \((2.66)\) \\
HSM & \(71.38\%\) \((0.94)\) & \(67.71\%\) \((1.86)\) \\
SpAtt & \(70.96\%\) \((0.10)\) & \(66.67\%\) \((2.98)\) \\
SeAtt & \(71.23\%\) \((0.10)\) & \(67.71\%\) \((1.70)\) \\
ViT & \(57.87\%\) \((0.87)\) & \(66.67\%\) \((2.98)\) \\
ACAT (Ours) & \(\mathbf{72.55}\%\) \((0.82)\) & \(\mathbf{70.84}\%\) \((1.53)\) \\
\hline
\end{tabular}
\end{table}
Table 2: Average test accuracy over 3 runs on the classification of brain (IST-3) and lung (MosMed) CT scans. Our framework, ACAT, outperforms competing methods that employ saliency maps to aid classification and other alternative methods.
### ACAT makes the network more robust to input perturbations
We investigate the mechanism through which ACAT improves prediction performance. Consider a neural network with \(M\) layers and activation function \(\phi\): \(X^{m+1}=\phi(Z^{m+1})\), with \(m\in[1,M]\), where \(Z^{m+1}=W^{m}X^{m}+B^{m}\) are the pre-activations and \(W^{m}\) and \(B^{m}\) are the weight and bias matrices respectively. We compare the mean variances of the pre-activations of IST-3 test samples in each layer for the baseline model and for ACAT trained from scratch. As we can observe in Table 3, ACAT significantly reduces the pre-activation variances \(\sigma^{2,m}\) of the baseline model. As a consequence, perturbations of the inputs have a smaller effect on the output of the classifier, increasing its robustness and smoothing the optimisation landscape (Ghorbani et al., 2019; Littwin and Wolf, 2018; Santurkar et al., 2018). In fact, if we add random noise sampled from a standard Gaussian distribution to the inputs, the mitigating effect of ACAT on the pre-activation variances is even more pronounced, as displayed in Table 3.
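These statistics can be collected with forward hooks, as in the sketch below; it assumes the hooked modules are the affine layers preceding the nonlinearities, so that their outputs are the pre-activations \(Z^{m+1}\).

```
import torch

@torch.no_grad()
def preactivation_variances(model, loader, layer_names):
    """Mean variance of the pre-activations per hooked layer. Assumes each
    named module is the affine map preceding the nonlinearity, so that its
    output is the pre-activation Z^{m+1}."""
    stats = {name: [] for name in layer_names}
    modules = dict(model.named_modules())
    hooks = [modules[name].register_forward_hook(
                 lambda mod, inp, out, name=name: stats[name].append(out.var().item()))
             for name in layer_names]
    for x, _ in loader:
        model(x)
    for h in hooks:
        h.remove()
    return {name: sum(v) / len(v) for name, v in stats.items()}
```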
### ACAT is not random regularisation
We employed dropout to test whether the improvements obtained with ACAT are only due to regularisation effects that can be replicated by dropping random parts of the image features. In particular, we employed dropout with different values of \(p\) on the image features at the same layers where the attention masks are applied in ACAT. The accuracy obtained was lower than in the baseline models. In particular, we obtained \(68.71\%\) and \(68.36\%\) average accuracy on IST-3 for \(p=0.2,0.6\) respectively (vs \(71.39\%\) of the baseline) and \(53.13\%\) and \(58.86\%\) accuracy on MosMed for the same values of \(p\) (vs \(67.71\%\) of the baseline). These results suggest that the spatial attention masks obtained from salient features in ACAT are informative and that its improvements cannot be replicated by randomly dropping features.
## 5 Conclusion
In this work, we proposed a method to employ saliency maps to improve classification accuracy in two medical imaging tasks (IST-3 and MosMed) by obtaining soft attention masks from salient features at different scales. These attention masks modulate the image features and can cancel noisy signal coming from them. They are also weighted by an attention fusion layer in order to better inform the classification outcome. We investigated the best approach to generate saliency maps that capture small areas of interest in low signal-to-noise samples and we presented a way to obtain them from adversarially generated counterfactual images. A possible limitation of our approach is that a baseline model is needed to compute the attribution masks that are
Figure 4: Input image with masks depicting regions of interest (a) and saliency maps averaged over slices obtained with our approach (b), the latent shift method (c), the Gradient method (d) and Grad-CAM (e)
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
 & \multicolumn{2}{c|}{Original inputs} & \multicolumn{2}{c|}{Noised inputs} \\
\cline{2-5}
 & Baseline & ACAT & Baseline & ACAT \\
\hline
\(\sigma^{2,1}\) & \(0.017\) & \(0.035\) & \(0.36\) & \(0.39\) \\
\(\sigma^{2,2}\) & \(17.68\) & \(0.03\) & \(33.92\) & \(0.97\) \\
\(\sigma^{2,3}\) & \(7.22\) & \(0.09\) & \(10.14\) & \(2.62\) \\
\(\sigma^{2,4}\) & \(0.97\) & \(0.04\) & \(17.04\) & \(2.46\) \\
\(\sigma^{2,5}\) & \(1.91\) & \(0.15\) & \(336.04\) & \(15.28\) \\
\(\sigma^{2,6}\) & \(3.05\) & \(0.05\) & \(5958.12\) & \(11.64\) \\
\(\sigma^{2,7}\) & \(0.23\) & \(0.17\) & \(831.92\) & \(77.98\) \\
\hline
\end{tabular}
\end{table}
Table 3: Variances of the pre-activations of the 7 layers of the baseline model and of ACAT for original and noised input images. ACAT makes the model more robust by decreasing these variances
later employed during the training of our framework. However, we believe that this approach could still fit in a normal research pipeline, as simple models are often implemented as a starting point and for comparison with newly designed approaches. While our approach has been tested on brain and lung CT scans, we believe that it can generalise to many other tasks and we leave further testing for future work.
## Acknowledgements
This work was supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. For the purpose of open access, the author has applied a creative commons attribution (CC BY) license to any author accepted manuscript version arising.
|
2309.00711 | Learning Shared Safety Constraints from Multi-task Demonstrations | Regardless of the particular task we want them to perform in an environment,
there are often shared safety constraints we want our agents to respect. For
example, regardless of whether it is making a sandwich or clearing the table, a
kitchen robot should not break a plate. Manually specifying such a constraint
can be both time-consuming and error-prone. We show how to learn constraints
from expert demonstrations of safe task completion by extending inverse
reinforcement learning (IRL) techniques to the space of constraints.
Intuitively, we learn constraints that forbid highly rewarding behavior that
the expert could have taken but chose not to. Unfortunately, the constraint
learning problem is rather ill-posed and typically leads to overly conservative
constraints that forbid all behavior that the expert did not take. We counter
this by leveraging diverse demonstrations that naturally occur in multi-task
settings to learn a tighter set of constraints. We validate our method with
simulation experiments on high-dimensional continuous control tasks. | Konwoo Kim, Gokul Swamy, Zuxin Liu, Ding Zhao, Sanjiban Choudhury, Zhiwei Steven Wu | 2023-09-01T19:37:36Z | http://arxiv.org/abs/2309.00711v1 | # Learning Shared Safety Constraints
###### Abstract
Regardless of the particular task we want them to perform in an environment, there are often shared _safety constraints_ we want our agents to respect. For example, regardless of whether it is making a sandwich or clearing the table, a kitchen robot should not break a plate. Manually specifying such a constraint can be both time-consuming and error-prone. We show how to learn constraints from expert demonstrations of safe task completion by extending inverse reinforcement learning (IRL) techniques to the space of constraints. Intuitively, we learn constraints that forbid highly rewarding behavior that the expert could have taken but chose not to. Unfortunately, the constraint learning problem is rather ill-posed and typically leads to overly conservative constraints that forbid all behavior that the expert did not take. We counter this by leveraging diverse demonstrations that naturally occur in multi-task settings to learn a tighter set of constraints. We validate our method with simulation experiments on high-dimensional continuous control tasks.
## 1 Introduction
If a friend was in your kitchen and you told them to "make toast" or "clean the dishes," you would probably be rather surprised if they broke some of your plates during this process. The underlying _safety constraint_ that forbids these kinds of behavior is both _a)_ implicit and _b)_ agnostic to the particular task they were asked to perform. Now, let's bring a household robot into the equation, operating within your kitchen. How can we ensure that it adheres to these implicit safety constraints, regardless of its assigned tasks?
One approach might be to write down specific constraints (e.g. joint torque limits) and pass them to the decision-making system of the robot. Unfortunately, more complex constraints like the ones we consider above are both difficult to formalize mathematically and easy for an end-user to forget to specify (as they would be inherently understood by a human helper). This problem is paralleled in the field of reinforcement learning (RL), where defining reward functions that lead to desirable behaviors for the learning agent is a recurring challenge (Hadfield-Menell et al., 2017). For example, it is rather challenging to handcraft the exact function one should be optimized to be a good driver. The standard solution to this sort of "reward design" problem is to instead demonstrate the desired behavior of the agent and then extract a reward function that would incentivize such behavior. Such _inverse reinforcement learning_ (IRL) techniques have found application in fields as diverse as robotics (Silver et al., 2010; Ratliff et al., 2009; Kolter et al., 2008; Ng et al., 2006; Zucker et al., 2011), computer vision (Kitani et al., 2012), and human-computer interaction (Ziebart et al., 2008, 2012). Given the success of IRL techniques and the similarity between reward and constraint design, we propose
extending IRL techniques to the space of constraints. We term such techniques _inverse constraint learning_, or ICL for short.
More formally, we consider a setting in which we have access to demonstrations of an expert policy for a task, along with knowledge about the task's reward. This allows us to look at the difference between the expert and reward-optimal policies for a task. Our first key insight is that _the actions taken by the reward-optimal but not the expert policy are likely to be forbidden, allowing us to extract a constraint._
Unfortunately, the ICL problem is still rather ill-posed. Indeed, prior work in ICL will often learn overly conservative constraints that forbid all behavior the expert did not take (Scobee and Sastry, 2019; Vazquez-Chanlatte et al., 2018; McPherson et al., 2021). However, for tasks in a shared environment with different rewards, there are often safety constraints that should be satisfied regardless of the task (e.g. a plate shouldn't be broken regardless of whether you're serving food on it or cleaning up after a meal). Our second crucial insight is that _we can leverage multi-task data to provide more comprehensive demonstration coverage over the state space, helping our method avoid degenerate solutions._
More explicitly, the contributions of our work are three-fold.
**1. We formalize the inverse constraint learning problem.** We frame ICL as a zero-sum game between a policy player and a constraint player. The policy player attempts to maximize reward while satisfying a potential constraint, while the constraint player picks constraints that maximally penalize the learner relative to the expert. Intuitively, such a procedure recovers constraints that forbid high-reward behavior the expert did not take.
**2. We develop a multi-task extension of inverse constraint learning.** We derive a zero-sum game between a set of policy players, each attempting to maximize a task-specific reward, and a constraint player that chooses a constraint that all policy players must satisfy. Because the constraint player looks at aggregate learner and expert data, it is less likely to select a degenerate solution.
**3. We demonstrate the efficacy of our approach on various continuous control tasks.** We show that with restricted function classes, we are able to recover ground-truth constraints on certain tasks. Even when using less interpretable function classes like deep networks, we can still ensure a match with expert safety and task performance. In the multi-task setting, we are able to identify constraints that a single-task learner would struggle to learn.
We begin with a discussion of related work.
## 2 Related Work
Our work exists at the confluence of various research thrusts. We discuss each independently.
**Inverse Reinforcement Learning.** IRL (Ziebart et al., 2008, 2012; Ho and Ermon, 2016) can be framed as a two-player zero-sum game between a policy player and a reward player (Swamy et al., 2021). In most formulations of IRL, a potential reward function is chosen in an outer loop, and the policy player optimizes it via RL in an inner loop. Similar to IRL, the constraint in our formulation of ICL is chosen adversarially in an outer loop. However, in contrast to IRL, the inner loop of ICL is _constrained_ reinforcement learning: the policy player tries to find the optimal policy that respects the constraint chosen in the outer loop.
**Constrained Reinforcement Learning.** Our approach involves repeated calls to a constrained reinforcement learning (CRL) oracle (Garcia and Fernandez, 2015; Gu et al., 2022). CRL aims to find a reward-maximizing policy over a constrained set, often formulated as a constrained policy optimization problem (Altman, 1999; Xu et al., 2022). Solving this problem via Frank-Wolfe methods is often unstable (Ray et al., 2019; Liang et al., 2018). Various methods have been proposed to mitigate this instability, including variational techniques (Liu et al., 2022), imposing trust-region regularization (Achiam et al., 2017; Yang et al., 2020; Kim and Oh, 2022), optimistic game-solving algorithms (Moskovitz et al., 2023), and PID controller-based methods (Stooke et al., 2020). In our practical implementations, we use PID-based methods for their relative simplicity.
**Multi-task Inverse Reinforcement Learning.** Prior work in IRL has considered incorporating multi-task data (Xu et al., 2019; Yu et al., 2019; Gleave and Habryka, 2018). We instead consider a
setting in which we know task-specific rewards and are attempting to recover a shared component of the demonstrator's objective. Amin et al. (2017) consider a similar setting but require the agent to be able to actively choose tasks or interactively query the expert, while our approach requires neither.
**Inverse Constraint Learning.** We are far from the first to consider the ICL problem. Scobee and Sastry (2019); McPherson et al. (2021) extend the MaxEnt IRL algorithm of Ziebart et al. (2008) to the ICL setting. We instead build upon the moment-matching framework of Swamy et al. (2021), allowing our theory to handle general reward functions instead of the linear reward functions MaxEnt IRL assumes. We are also able to provide performance and constraint satisfaction guarantees on the learned policy, unlike the aforementioned work. Furthermore, we consider the multi-task setting, addressing a key shortcoming of the prior work.
Perhaps the most similar paper to ours is the excellent work of Chou et al. (2020), who also consider the multi-task ICL setting but propose a solution that requires several special solvers that depend on knowledge of the parametric family that a constraint falls into. In contrast, we provide a general algorithmic template that allows one to apply whatever flexible function approximators (e.g. deep networks) and reinforcement learning algorithms (e.g. PPO) they desire. Chou et al.'s method also requires sampling _uniformly_ over the set of trajectories that achieve a higher reward than the expert, a task which is rather challenging to do on high-dimensional problems. In contrast, our method only requires the ability to solve a standard RL problem. Theoretically, Chou et al. (2020) focus on constraint recovery, which we argue below is a goal that requires strong assumptions and is therefore a red herring on realistic problems. This focus also prevents their theory from handling suboptimal experts. In contrast, we are able to provide rigorous guarantees on learned policy performance and safety, even when the expert is suboptimal. We include results in Appendix C that show that our approach is far more performant.
In concurrent work, Lindner et al. (2023) propose an elegant solution approach to ICL: rather than learning a constraint function, assume that _any_ unseen behavior is unsafe and enforce constraints on the learner to play a convex combination of the demonstrated safe trajectories. The key benefit of this approach is that it doesn't require knowing the reward function the expert was optimizing. However, by forcing the learner to simply replay previous expert behavior, the learner cannot meaningfully generalize, and might therefore be extremely suboptimal on any new task. In contrast, we use the side information of a reasonable set of constraints to provide rigorous policy performance guarantees.2
Footnote 2: We also note that, because we scale the learned constraint differently for each task, Lindner et al. (2023)’s impossibility result (Prop. 2) does not apply to our method, thereby elucidating why a naive application of inverse RL on the aggregate data isn’t sufficient for the problem we consider.
We now turn our attention to formalizing inverse constraint learning.
## 3 Formalizing Inverse Constraint Learning
We build up to our full method in several steps. We first describe the foundational algorithmic structures we build upon (inverse reinforcement learning and constrained reinforcement learning). We then describe the single-task formulation before generalizing it to the multi-task setup.
We consider a finite-horizon Markov Decision Process (MDP) (Puterman, 2014) parameterized by \(\langle\mathcal{S},\mathcal{A},\mathcal{T},r,T\rangle\) where \(\mathcal{S}\), \(\mathcal{A}\) are the state and action spaces, \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) is the transition operator, \(r:\mathcal{S}\times\mathcal{A}\rightarrow[-1,1]\) is the reward function, and \(T\) is the horizon.
### Prior Work: Inverse RL as Game Solving
In the inverse RL setup, we are given access to trajectories generated by an expert policy \(\pi^{E}:\mathcal{S}\rightarrow\Delta(\mathcal{A})\), but do not know the reward function of the MDP. Our goal is to nevertheless learn a policy that performs as well as the expert's, no matter the true reward function.
We solve the IRL problem via equilibrium computation between a policy player and an adversary that tries to pick out differences between expert and learner policies under potential reward functions (Swamy et al., 2021). More formally, we optimize over policies \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\in\Pi\) and reward functions \(f:\mathcal{S}\times\mathcal{A}\rightarrow[-1,1]\in\mathcal{F}_{r}\). For simplicity, we assume that our strategy spaces (\(\Pi\) and \(\mathcal{F}_{r}\)) are convex and compact and that \(r\in\mathcal{F}_{r},\pi_{E}\in\Pi\). We then solve (i.e., compute an approximate Nash equilibrium of) the two-player zero-sum game
\[\min_{\pi\in\Pi}\max_{f\in\mathcal{F}_{r}}J(\pi,f)-J(\pi_{E},f), \tag{1}\]
where \(J(\pi,f)=\mathbb{E}_{\xi\sim\pi}[\sum_{t=0}^{T}f(s_{t},a_{t})]\) denotes the value of policy \(\pi\) under reward function \(f\).
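In practice, both value terms are estimated from rollouts. A minimal sketch of the payoff estimate, assuming trajectories stored as lists of state-action pairs:

```
def irl_game_payoff(f, learner_trajs, expert_trajs):
    """Monte-Carlo estimate of J(pi, f) - J(pi_E, f); each trajectory is
    assumed to be a list of (state, action) pairs over the horizon T."""
    def J_hat(trajs):
        return sum(sum(f(s, a) for s, a in traj) for traj in trajs) / len(trajs)
    return J_hat(learner_trajs) - J_hat(expert_trajs)
```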
### Prior Work: Constrained Reinforcement Learning as Game Solving
In CRL, we are given access to both the reward function and a constraint \(c:\mathcal{S}\times\mathcal{A}\rightarrow[-1,1]\). Our goal is to learn the highest reward policy that, over the horizon, has a low expected value under the constraint. More formally, we seek a solution to the optimization problem:
\[\min_{\pi\in\Pi}-J(\pi,r)\text{ s.t. }J(\pi,c)\leq\delta, \tag{2}\]
where \(\delta\) is some error tolerance. We can also formulate CRL as a game via forming the Lagrangian of the above optimization problem (Altman, 1999):
\[\min_{\pi\in\Pi}\max_{\lambda>0}-J(\pi,r)+\lambda(J(\pi,c)-\delta). \tag{3}\]
Intuitively, the adversary updates the weight of the constraint term in the policy player's reward function based on how in violation the learner is.
```
Input: Reward \(r\), constraint \(c\), learning rates \(\eta_{1:N}\), tolerance \(\delta\)
Output: Trained policy \(\pi\)
Initialize \(\lambda_{1}=0\)
for \(i\) in \(1\ldots N\) do
  \(\pi_{i}\leftarrow\text{RL}(r=r-\lambda_{i}c)\)
  \(\lambda_{i+1}\leftarrow[\lambda_{i}+\eta_{i}(J(\pi_{i},c)-\delta)]^{+}\)
endfor
Return \(\text{Unif}(\pi_{1:N})\).
```
**Algorithm 1** CRL (Constrained Reinforcement Learning)
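A direct translation of Algorithm 1 into Python reads as follows, assuming an RL oracle `rl_solver` that maximises a given reward function and a policy-evaluation oracle `J`:

```
def crl(rl_solver, J, r, c, etas, delta):
    """Sketch of Algorithm 1: the policy player best-responds to the
    Lagrangian reward r - lambda * c, while the dual variable performs
    projected gradient ascent on the constraint violation."""
    lam = 0.0
    policies = []
    for eta in etas:
        # Bind the current dual variable via a default argument.
        pi = rl_solver(lambda s, a, lam=lam: r(s, a) - lam * c(s, a))
        policies.append(pi)
        # Dual ascent step, projected onto lambda >= 0.
        lam = max(0.0, lam + eta * (J(pi, c) - delta))
    return policies  # Algorithm 1 returns the uniform mixture over these iterates
```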
### Single-Task Inverse Constraint Learning
We are finally ready to formalize ICL. In ICL, we are given access to the reward function, trajectories from the solution to a CRL problem, and a class of potential constraints \(\mathcal{F}_{c}\) in which we assume the ground-truth constraint \(c^{*}\) lies. We assume that \(\mathcal{F}_{c}\) is convex and compact.
In the IRL setup, without strong assumptions on the dynamics of the underlying MDP and expert, it is impossible to guarantee recovery of the ground-truth reward. Often, the only reward function that actually makes the expert optimal is zero everywhere (Abbeel and Ng, 2004). Instead, we attempt to find the reward function that maximally distinguishes the expert from an arbitrary other policy in our policy class via game-solving (Ziebart et al., 2008a; Ho and Ermon, 2016; Swamy et al., 2021). Similarly, for ICL, exact constraint recovery can be challenging. For example, if two constraints differ only on states the expert never visits, it is not clear how to break ties. We instead try to find a constraint that best separates the safe (but not necessarily optimal) \(\pi_{E}\) from policies that achieve higher rewards.
More formally, we seek to solve the following constrained optimization problem.
\[\min_{\pi\in\Pi}J(\pi_{E},r)-J(\pi,r) \tag{4}\] \[\text{s.t.}\max_{c\in\mathcal{F}_{c}}J(\pi,c)-J(\pi_{E},c)\leq 0. \tag{5}\]
Note that in contrast to the _moment-matching_ problem we solve in imitation learning (Swamy et al., 2021), we instead want to be _at least_ as safe as the expert. This means that rather than having equality constraints, we have inequality constraints. Continuing, we can form the Lagrangian:
\[\min_{\pi\in\Pi}\max_{\lambda>0}J(\pi_{E},r)-J(\pi,r)+\lambda( \max_{c\in\mathcal{F}_{c}}J(\pi,c)-J(\pi_{E},c)) \tag{6}\] \[=\max_{c\in\mathcal{F}_{c}}\max_{\lambda>0}\min_{\pi\in\Pi}J(\pi_ {E},r-\lambda c)-J(\pi,r-\lambda c). \tag{7}\]
Notice that the form of the ICL game resembles a combination of the IRL and CRL games. We describe the full game-solving procedure in Algorithm 2, where \(R(c)\) is an arbitrary strongly convex regularizer (McMahan, 2011). Effectively, we pick a constraint function in the same way we pick a reward function in IRL but run a CRL inner loop instead of an RL step. Instead of a fixed constraint threshold, we set tolerance \(\delta\) to the expert's constraint violation. Define
\[\ell_{i}(c)=\frac{1}{T}(J(\pi_{i},c)-J(\pi_{E},c))\in[-1,1] \tag{8}\]
as the per-round loss that the constraint player suffers in their online decision problem. The best-in-hindsight comparator constraint is defined as
\[\hat{c}=\operatorname*{argmax}_{c\in\mathcal{F}_{c}}\sum_{i}^{T}\ell_{i}(c). \tag{9}\]
We can then define the cumulative regret the learner suffers as
\[\text{Reg}(T)=\sum_{i}^{T}\ell_{i}(\hat{c})-\sum_{i}^{T}\ell_{i}(c_{i}), \tag{10}\]
and let \(\epsilon_{i}=\ell_{i}(\hat{c})-\ell_{i}(c_{i})\). We prove the following theorem via standard machinery.
**Theorem 3.1**.: _Let \(c_{1:N}\) be the iterates produced by Algorithm 2 and let \(\bar{\epsilon}=\frac{1}{N}\sum_{i}^{N}\epsilon_{i}\) denote their time-averaged regret. Then, there exists a \(c\in c_{1:N}\) such that \(\pi=\texttt{CRL}(r,c,\delta=J(\pi_{E},c))\) satisfies_
\[J(\pi,c^{*})-J(\pi_{E},c^{*})\leq\bar{\epsilon}T\text{ and }J(\pi,r)\geq J(\pi_{E},r). \tag{11}\]
In words, by optimizing under the recovered constraint, we can learn a policy that (weakly) Pareto-dominates the expert policy under \(c^{*}\). We conclude by noting that because FTRL (Follow the Regularized Leader, McMahan (2011)) is a no-regret algorithm for linear losses like (8), we have that \(\lim_{T\to\infty}\frac{\text{Reg}(T)}{T}=0\). This means that with enough iterations, the RHS of the above bound on ground-truth constraint violation will go to 0.
Figure 1: A visual depiction of the optimization problem we’re trying to solve in ICL. We attempt to pick a constraint that minimizes the value difference over the expert policy a safe policy could have. The star corresponds to the output of CRL.
### Multi-task Inverse Constraint Learning
One of the potential failure modes of the single-task approach we outline above is that we could learn an overly conservative constraint, leading to poor task performance (Liu et al., 2023). For example, imagine that we entropy-regularize our policy optimization (Ziebart et al., 2008; Haarnoja et al., 2018), as is common practice. Assuming a full policy class, the learner puts nonzero probability mass on all reachable states in the MDP. The constraint player is therefore incentivized to forbid all states the expert did not visit (Scobee and Sastry, 2019; McPherson et al., 2021). Such a constraint would likely generalize poorly when combined with a new reward function (\(\bar{r}\neq r\)) as it forbids _all untaken_ rather than just _unsafe_ behavior.
At heart, the issue with the single-task formulation lies in the potential for insufficient coverage of the state space within expert demonstrations. Therefore, it is natural to explore a multi-task extension to counteract this limitation. Let each task be defined by a unique reward. We assume the dynamics and safety constraints are consistent across tasks. We observe \(K\) samples of the form \((r_{k},\{\xi\sim\pi_{E}^{k}\})\). This data allows us to define the multi-task variant of our previously described ICL game:
\[\max_{c\in\mathcal{F}_{c}}\min_{\pi^{1:K}\in\Pi}\max_{\lambda^{1:K}>0}\sum_{i }^{K}J(\pi_{E}^{i},r^{i}-\lambda^{i}c)-J(\pi^{i},r^{i}-\lambda^{i}c). \tag{12}\]
We describe how we solve this game in Algorithm 3, where \(R(c)\) is an arbitrary strongly convex regularizer (McMahan, 2011). In short, we alternate between solving \(K\) CRL problems and updating the constraint based on the data from all policies.
We now give two conditions under which generalization to new reward functions is possible.
```
Input: Rewards \(r^{1:K}\), constraint class \(\mathcal{F}_{c}\), trajectories from \(\pi_{E}^{1:K}\)
Output: Learned constraint \(c\)
Set \(\widetilde{\mathcal{F}}_{c}=\{c\in\mathcal{F}_{c}|\forall k\in[K],J(\pi_{E}^{k},c)\leq 0\}\)
Initialize \(c_{1}\in\widetilde{\mathcal{F}}_{c}\)
for \(i\) in \(1\ldots N\) do
  for \(k\) in \(1\ldots K\) do
    \(\pi_{i}^{k},\lambda_{i}^{k}\leftarrow\texttt{CRL}(r^{k},c_{i},\delta=0)\)
  endfor
  # use any no-regret algo. to pick c, e.g. FTRL:
  \(c_{i+1}\leftarrow\operatorname*{argmax}_{c\in\widetilde{\mathcal{F}}_{c}}\frac{1}{TK}\sum_{j}^{i}\sum_{k}^{K}(J(\pi_{j}^{k},c)-J(\pi_{E}^{k},c))-R(c)\).
endfor
Return best of \(c_{1:N}\) on validation data.
```
**Algorithm 3** MT-ICL (Multi-task Inverse Constraint Learning)
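For intuition, the constraint player's update can be written as follows when \(\mathcal{F}_{c}\) is a finite set of constraint vectors and policies are represented by their occupancy measures, so that \(J(\pi,c)\) is proportional to \(\rho_{\pi}\cdot c\); the \(L_{2}\) penalty below stands in for the regulariser \(R(c)\).

```
import numpy as np

def constraint_player_step(constraint_class, learner_occs, expert_occs, reg=1.0):
    """Sketch of the constraint player's step in Algorithm 3 over a finite
    class of constraint vectors. learner_occs/expert_occs are matched pairs
    of occupancy measures accumulated over iterations and tasks."""
    # Restrict to constraints under which every expert policy is safe.
    safe = [c for c in constraint_class
            if all(rho_e @ c <= 0 for rho_e in expert_occs)]

    def score(c):
        gap = sum(rho @ c - rho_e @ c
                  for rho, rho_e in zip(learner_occs, expert_occs))
        return gap - reg * float(c @ c)  # L2 penalty stands in for R(c)

    return max(safe, key=score)
```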
### A (Strong) Geometric Condition for Identifiability
Consider for a moment the linear programming (LP) formulation of reinforcement learning. We search over the space of occupancy measures (\(\rho_{\pi}\in\Delta(\mathcal{S}\times\mathcal{A})\)) that satisfy the set of Bellman flow constraints (Sutton and Barto, 2018) and try to maximize the inner product with reward vector \(r\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\). We can write the CRL optimization problem (assuming \(\delta=0\) for simplicity) as an LP
Figure 2: If we have a sufficient diversity of expert policies, none of which are optimal along the reward vector, we can identify the hyperplane that separates the safe policies from the unsafe policies. The constraint (red, dashed) will be orthogonal to this hyperplane. For this example, because \(\rho_{\pi}\in\mathbb{R}^{2}\), we need two expert policies.
as well. Using \(\rho_{\Pi}\) to denote the occupancy measures of all \(\pi\in\Pi\),
\[\max_{\rho_{\pi}\in\rho_{\Pi}}\langle\rho_{\pi},r\rangle\text{ s.t. }\langle\rho_{\pi},c^{*}\rangle\leq 0.\]
We observe a (for simplicity, optimal) solution to such a problem for \(K\) rewards, raising the question of when this is enough to uniquely identify \(c^{*}\). Recall that to uniquely determine the equation of a hyperplane in \(\mathbb{R}^{d}\), we need \(d\) linearly independent points. Since \(c^{*}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\), we need \(|\mathcal{S}||\mathcal{A}|\) expert policies. Furthermore, we need each of these points to lie on the constraint line and not on the boundary of the full polytope. Put differently, we need each distinct expert policy to _saturate_ the underlying constraint (i.e. \(\exists\pi^{k}\in\Pi\) s.t. \(J(\pi_{E}^{k},r^{k})<J(\pi^{k},r^{k})\)). Under these conditions, we can uniquely determine the hyperplane that separates safe from unsafe policies, to which the constraint vector is orthogonal. More formally,
**Lemma 3.2**.: _Let \(\pi_{E}^{1:|\mathcal{S}||\mathcal{A}|}\) be distinct optimal expert policies such that a) \(\forall i\in[|\mathcal{S}||\mathcal{A}|]\), \(\rho_{\pi_{E}^{i}}\in\text{relint}(\rho_{\Pi})\) and b) no \(\rho_{\pi_{E}^{i}}\) can be generated by a mixture of the other visitation distributions. Then, \(c^{*}\) is the unique (up to scaling) nonzero vector in_
\[\text{Nul}\left(\begin{bmatrix}\rho_{\pi_{E}^{1}}-\rho_{\pi_{E}^{2}}\\ \vdots\\ \rho_{\pi_{E}^{|\mathcal{S}||\mathcal{A}|-1}}-\rho_{\pi_{E}^{|\mathcal{S}|| \mathcal{A}|}}\end{bmatrix}\right). \tag{13}\]
We visualize this process for the \(|\mathcal{S}||\mathcal{A}|=2\) case in Fig. 2. Assuming we are able to recover \(c^{*}\), we can guarantee that our learners will be able to act safely, regardless of the task they are asked to do. However, the assumptions required to do so are quite strong: we are effectively asking for our expert policies to form a basis for the space of occupancy measures, which means we must see expert data for a large set of diverse tasks. Furthermore, we need the experts to be reward-optimal.
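When the conditions of Lemma 3.2 hold, the recovery step reduces to a null-space computation, e.g. via the SVD, as in this sketch:

```
import numpy as np

def recover_constraint(expert_occupancies):
    """Recovers c* (up to scale) as the null vector of the matrix of
    consecutive occupancy-measure differences from Lemma 3.2."""
    rhos = np.asarray(expert_occupancies)  # shape (|S||A|, |S||A|)
    diffs = rhos[:-1] - rhos[1:]           # rows rho_{pi_E^i} - rho_{pi_E^{i+1}}
    _, s, vt = np.linalg.svd(diffs)
    return vt[-1]                          # right-singular vector of the smallest singular value
```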
Identifiability (the goal of prior works like Chou et al. (2020); Amin et al. (2017)) is too strong a goal as it requires us to estimate the value of the constraint _everywhere_ in the state-action space. If we know the learner will only be incentivized to go to a certain subset of states (as is often true in practice), we can guarantee safety without fully identifying \(c^{*}\). Therefore, we now consider how, by making distributional assumptions on how tasks are generated, we can generalize to novel tasks.
### A Statistical Condition for Generalization
Assume that tasks \(\tau\) are drawn i.i.d. from some \(P(\tau)\). Then, even if we do not see a wide enough diversity of expert policies to guarantee identifiability of the ground-truth constraint function, with enough samples, we can ensure we do well in expectation over tasks. For some constraint \(c\), let us define
\[V(c)=\mathbb{E}_{\tau\sim P(\tau)}[J(\pi^{\tau},c)-J(\pi_{E}^{\tau},c)], \tag{14}\]
where \(\lambda^{\tau},\pi^{\tau}=\mathtt{CRL}(r^{\tau},c)\) denote the solutions to the inner optimization problem. We begin by proving the following lemma.
**Lemma 3.3**.: _With_
\[K\geq O\left(\log\left(\frac{|\mathcal{F}_{c}|}{\delta}\right)\frac{(2T)^{2}}{ \epsilon^{2}}\right) \tag{15}\]
_samples, we have that with probability \(\geq 1-\delta\), we will be able to estimate all \(|\mathcal{F}_{c}|\) population estimates of \(V(c)\) within \(\epsilon\) absolute error._
Note that we perform the above analysis for finite classes but one could easily extend it (Sriperumbudur et al., 2009). The takeaway from the above lemma is that if we observe a sufficient number of tasks, we can guarantee that we can estimate the population loss of all constraints, up to some tolerance.
Consider the learner being faced with a new task they have never seen before at test time. Unlike in the single task case, where it is clear how to set the cost limit passed to CRL, it is not clear how to do so for a novel task. Hence, we make the following assumption.
**Assumption 3.4**.: We assume that \(\mathbb{E}_{\tau}[J(\pi_{E}^{\tau},c^{*})]\leq 0\), and that \(\forall c\in\mathcal{F}_{c},\exists\pi\in\Pi\) s.t. \(J(\pi,c)\leq 0\).
This (weak) assumption allows us to a) use a cost limit of 0 for our CRL step and b) search over a subset of \(\mathcal{F}_{c}\) that the expert is safe under. Under this assumption, we are able to prove the following:
**Theorem 3.5**.: _Let \(c_{1:N}\) be the iterates produced by Algorithm 3 with \(K(\epsilon,\delta)\) chosen as in Lemma 3.3 and let \(\bar{\epsilon}=\frac{1}{N}\sum_{i}^{N}\epsilon_{i}\) denote their time-averaged regret. Then, w.p. \(\geq 1-\delta\), there exists a \(c\in c_{1:N}\) such that \(\pi(r)=\texttt{CRL}(r,c,\delta=0)\) satisfies \(\mathbb{E}_{\tau\sim P(\tau)}[J(\pi(r^{\tau}),c^{*})-J(\pi_{E}^{\tau},c^{*})] \leq\bar{\epsilon}T+3\epsilon T\) and \(\mathbb{E}_{\tau\sim P(\tau)}[J(\pi(r^{\tau}),r^{\tau})-J(\pi_{E}^{\tau},r^{ \tau})]\geq-2\epsilon T\)._
In short, if we observe enough tasks, we are able to learn a constraint that, when optimized under, leads to policies that approximately Pareto-dominate that of the expert.
We now turn our attention to the practical implementation of these algorithms.
## 4 Practical Algorithm
We provide practical implementations of constrained reinforcement learning and inverse constraint learning and benchmark their performance on several continuous control tasks. We first describe the environments we test our algorithms on. Then, we provide results showing that our algorithms learn policies that match expert performance and constraint violation. While it is hard to guarantee constraint recovery in theory, we show that we can recover the ground-truth constraint empirically if we search over a restricted enough function class.
### Tasks
We focus on the ant environment from the PyBullet (Coumans and Bai, 2016) and MuJoCo (Todorov et al., 2012) benchmarks. The default reward function incentivizes progress along the positive \(x\) direction. For our single-task experiments, we consider a velocity and position constraint on top of this reward function.
1. **Velocity Constraint:**\(\frac{\|q_{t+1}-q_{t}\|_{2}}{\text{dt}}\leq 0.75\) where \(q_{t}\) is the ant's position
2. **Position Constraint:**\(0.5x_{t}-y_{t}\leq 0\) where \(x_{t},y_{t}\) are the ant's coordinates
For our multi-task experiments, we build upon the D4RL (Fu et al., 2020) AntMaze benchmark. The default reward function incentivizes the agent to navigate a fixed maze to a random goal position: \(\exp(-\|q_{\text{goal}}-q_{t}\|_{2})\). We modify this environment such that the walls of the maze are permeable, but the agent incurs a unit step-wise cost for passing through the maze walls.
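Concretely, the per-step ground-truth costs can be written as indicators of constraint violation, as in the sketch below; whether the environments use indicator or margin-valued costs is an assumption of this sketch.

```
import numpy as np

def velocity_cost(q_next, q, dt):
    """Unit per-step cost when the ant's speed ||q_{t+1} - q_t|| / dt exceeds 0.75."""
    return float(np.linalg.norm(q_next - q) / dt > 0.75)

def position_cost(x, y):
    """Unit per-step cost when the position constraint 0.5 x - y <= 0 is violated."""
    return float(0.5 * x - y > 0)

def maze_cost(in_wall):
    """Unit step-wise cost whenever the agent passes through a maze wall."""
    return float(in_wall)
```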
Our expert policies are generated by running CRL with the ground-truth constraint. We use the Tianshou (Weng et al., 2022) implementation of PPO (Schulman et al., 2017) as our baseline policy optimizer. Classical Lagrangian methods exactly follow the gradient update shown in Algorithm 1, but they are susceptible to oscillating learning dynamics and constraint-violating behavior during training. The PID Lagrangian method (Stooke et al., 2020) extends the naive gradient update of \(\lambda_{i}\) with a proportional and derivative term to dampen oscillations and prevent cost overshooting. To reduce the amount of interaction required to solve the inner optimization problem, we warm-start our policy in each iteration by behavior cloning against the given expert demonstrations. We used a single NVIDIA 3090 GPU for all experiments. Due to space constraints, we defer all other implementation details to Appendix B.
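A sketch of the PID Lagrangian update is given below; the gains are illustrative placeholders rather than the values used in our experiments (see Appendix B).

```
class PIDLagrangian:
    """PID control of the dual variable (Stooke et al., 2020): proportional,
    integral and derivative terms on the violation J(pi, c) - delta, each
    clamped to be non-negative. Gains here are illustrative assumptions."""

    def __init__(self, kp=0.1, ki=0.01, kd=0.1, delta=0.0):
        self.kp, self.ki, self.kd, self.delta = kp, ki, kd, delta
        self.integral = 0.0
        self.prev_violation = 0.0

    def update(self, episode_cost):
        violation = episode_cost - self.delta
        self.integral = max(0.0, self.integral + violation)      # anti-windup clamp
        derivative = max(0.0, violation - self.prev_violation)   # dampens oscillations
        self.prev_violation = violation
        lam = self.kp * violation + self.ki * self.integral + self.kd * derivative
        return max(0.0, lam)  # projected onto lambda >= 0
```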
### ICL Results
We begin with results for the single-task problem, before continuing on to the multi-task setup.
### Single-Task Continuous Control Results
As argued above, we expect a proper ICL implementation to learn policies that perform as well and are as safe as the expert. However, by restricting the class of constraints we consider, we can also investigate whether recovery of the ground-truth constraint is possible. To this end, we consider a reduced-state version of our algorithm where the learned constraint takes a subset of the agent state as input. For the velocity constraint, the learned constraint is a linear function of the velocity, while for the position and maze constraints, the learned constraint is a linear function of the ant's position.
Using this constraint representation allows us to visualize the learned constraint over the course of training, as shown in Figure 3. We find that our ICL implementation is able to recover the constraint,
as the learned constraint for both the velocity and position tasks converges to the ground-truth value. Our results further show that over the course of ICL training, the learned policies match and exceed expert performance as their violations of the ground-truth constraint converge towards the expert's. Figure 4 provides a direct depiction of the evolution of the learned constraint and policy. The convergence of the red and blue lines shows that the learned position constraint approaches the ground truth, and the policy's behavior approaches that of the expert in response to this.
To measure the robustness of our method to sub-optimal experts, we repeat the above experiments using demonstrations from an expert with i.i.d. Gaussian noise added to their actions at each timestep. We are still able to learn a safe and performant policy, which matches what our theory predicts.
### Multi-Task Continuous Control Results
We next consider an environment where, even with an appropriate constraint class, recovering the ground-truth constraint with a single task isn't feasible due to the ill-posedness of the inverse constraint learning problem. Specifically, we use the AntMaze environment from D4RL (Fu et al., 2020), modified to have a more complex maze structure. As seen in Figure 7, the goal of each task is to navigate through the maze from one of the starting positions (top/bottom left) to one of the grid cells in the rightmost column. We provide expert data for all 10 tasks to the learner.
As we can see in Figure 6, multi-task ICL is, within a single iteration, able to learn policies that match expert performance and constraint violation across all tasks, _all without ever interacting with the ground-truth maze_. Over time, we are able to approximately recover the entire maze structure.
We visually compare several alternative strategies for using the multi-task demonstration data in the bottom row of Figure 7. The 0/1 values in the cells correspond to querying the deep constraint network from the last iteration of ICL on points from each of the grid cells and thresholding at some confidence. We see that a single-task network _(d)_ learns spurious walls that would prevent the learner from completing more than half of the tasks. Furthermore, learning 10 separate classifiers and then aggregating their outputs _(e)_ / _(f)_ also fails to produce reasonable outputs. However, when we use data from all 10 tasks to train our multi-task constraint network _(g)_ / _(h)_, we are able to approximately recover the walls of the maze. These results echo our preceding theoretical argument about the importance of multi-task data for learning constraints that generalize to future tasks.
Figure 4: As ICL training progresses, the learned position constraint (red line) converges to the ground-truth constraint (blue line) and the policy learns to escape unsafe regions (red region).
Figure 5: We re-conduct our position velocity constraint experiments using suboptimal experts to generate demonstrations. While, because of the ill-posedness of the problem, we do not exactly recover the ground truth constraint, we are able to use it to learn a policy that is higher performance than the expert while being just as safe. Standard errors are computed across 3 seeds.
Figure 3: Over the course of training, the learned ICL constraint recovers the ground-truth constraints for the velocity and position tasks. The learned policy matches expert performance and constraint violation. Standard errors are computed across 3 seeds.
We release the code we used for all of our experiments at [https://github.com/konwook/mticl](https://github.com/konwook/mticl).
## 5 Discussion
In this work, we derive an algorithm for learning safety constraints from multi-task demonstrations. We show that by replacing the inner loop of inverse reinforcement learning with a constrained policy optimization subroutine, we can learn constraints that guarantee learner safety on a single task. We then give statistical and geometric conditions under which we can guarantee safety on unseen tasks by planning under a learned constraint. We validate our approach on several control tasks.
**Limitations.** In the future, we would be interested in applying our approach to real-world problems (e.g. offroad driving). Algorithmically, the CRL inner loop can be more computationally expensive than an RL loop - we would be interested in speeding up CRL using expert demonstrations, perhaps by adopting the approach of Swamy et al. (2023). We also ignore all finite-sample issues, which could potentially be addressed via data-augmentation approaches like that of Swamy et al. (2022).
## 6 Acknowledgements
We thank Drew Bagnell for edifying conversations on the relationship between ICL and IRL and Nan Jiang for connecting us to various references in the literature. We also thank an anonymous reviewer for pointing out that our method does not actually require the expert to be the safe optimal policy, a fact we did not appreciate beforehand. ZSW is supported in part by the NSF FAI Award #1939606, a Google Faculty Research Award, a J.P. Morgan Faculty Award, a Facebook Research Award, an Okawa Foundation Research Grant, and a Mozilla Research Grant. KK and GS are supported by a GPU award from NVIDIA.
Figure 6: We see that over ICL iterations, we are able to recover the ground-truth walls of the ant-maze, enabling the learner to match expert performance and constraint violations. Results for the second two plots are averaged across all 10 tasks. Standard errors are computed across 3 seeds.
Figure 7: We consider the problem of trying to learn the walls of a custom maze **(a)** based on the AntMaze environment from D4RL (Fu et al., 2020). We consider both a single-task **(b)** and multi-task **(c)** setup. We see that the single-task data is insufficient to learn an accurate constraint **(d)**. Averaging or taking the max over the constraints learned from the data for each of the ten goals **(e)-(f)** also doesn’t work. However, if we use the data from all 10 tasks to learn the constraint **(g)-(h)**, we are able to approximately recover the ground-truth constraint with enough constraint learning iterations. |
2304.05695 | Thermodynamic topology of 4D Dyonic AdS black holes in different
ensembles | We study the thermodynamic topology of four dimensional dyonic
Anti-de-Sitter(AdS) black hole in three different ensembles: canonical, mixed
and grand canonical ensemble. While canonical ensemble refers to the ensemble
with fixed electric and magnetic charges, mixed ensemble is an ensemble where
we fix magnetic charge and electric potential. In the grand canonical ensemble,
potentials corresponding to both electric and magnetic charges are kept fixed.
In each of these ensembles, we first compute the topological charges associated
with critical points. We find that while in both canonical and mixed ensembles,
there exists one conventional critical point with topological charge $-1$, in
the grand canonical ensemble, we find no critical point. Then, we consider the
dyonic AdS black hole as topological defects in thermodynamic space and study
its local and global topology by computing the winding numbers at the defects.
We observe that while the topologies of the black hole in canonical and mixed
ensembles are identical with total topological charge equaling $1$, in the
grand canonical ensemble, depending on the values of potentials, the total
topological charge is either equal to $0$ or $1$. In canonical and mixed
ensembles, either one generation and one annihilation points or no
generation/annihilation points are found. In the grand canonical ensemble,
depending on the values of potentials, we find either one generation point or
no generation/annihilation point. Thus, we infer that the topological class of
$4$D dyonic AdS black hole is ensemble dependent. | Naba Jyoti Gogoi, Prabwal Phukon | 2023-04-12T08:36:15Z | http://arxiv.org/abs/2304.05695v1 | # Thermodynamic topology of 4D Dyonic AdS black holes in different ensembles
###### Abstract
We study the thermodynamic topology of four dimensional dyonic Anti-de-Sitter(AdS) black hole in three different ensembles: canonical, mixed and grand canonical ensemble. While canonical ensemble refers to the ensemble with fixed electric and magnetic charges, mixed ensemble is an ensemble where we fix magnetic charge and electric potential. In the grand canonical ensemble, potentials corresponding to both electric and magnetic charges are kept fixed. In each of these ensembles, we first compute the topological charges associated with critical points. We find that while in both canonical and mixed ensembles, there exists one conventional critical point with topological charge \(-1\), in the grand canonical ensemble, we find no critical point. Then, we consider the dyonic AdS black hole as topological defects in thermodynamic space and study its local and global topology by computing the winding numbers at the defects. We observe that while the topologies of the black hole in canonical and mixed ensembles are identical with total topological charge equaling \(1\), in the grand canonical ensemble, depending on the values of potentials, the total topological charge is either equal to \(0\) or \(1\). In canonical and mixed ensembles, either one generation and one annihilation points or no generation/annihilation points are found. In the grand canonical ensemble, depending on the values of potentials, we find either one generation point or no generation/annihilation point. Thus, we infer that the topological class of 4D dyonic AdS black hole is ensemble dependent.
## I Introduction
Thermodynamic phase behavior of black holes has been studied extensively since the early days of black hole thermodynamics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. In recent years, much attention has been devoted to the study of criticality in AdS black holes in extended thermodynamic space [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], where the cosmological constant \(\Lambda\) is considered as thermodynamic pressure \(P\) [28; 29; 30; 31].
\[P=-\frac{\Lambda}{8\pi G} \tag{1}\]
where, \(G\) is Newton's gravitational constant. Accordingly, a thermodynamic volume \(V\) is defined conjugate to thermodynamic pressure \(P\) and the first law of black hole thermodynamics takes the revised form:
\[dM=TdS+VdP+\sum_{i}Y_{i}dx^{i}, \tag{2}\]
where, \(T\) is the temperature, \(S\) is the entropy and \(Y_{i}dx^{i}\) is the \(i\)-th chemical potential term.
A recent addition to the study of criticality in black holes is the idea of thermodynamic topology. Initiated in [33], this novel approach invokes Duan's topological current \(\phi\)-mapping theory [35] in the thermodynamic space of a black hole to study its criticality. Consequently, the critical points in the thermodynamic space are characterized by distinct topological charges and, based on those charges, are classified into conventional and novel critical points. The key steps involved are summarized below :
The temperature, \(T\), of a black hole is expressed as a function of pressure \(P\), entropy \(S\) and other thermodynamic parameters.
\[T=T(S,P,x^{i}), \tag{3}\]
where \(x^{i}\) denotes other thermodynamic parameters. Pressure is then eliminated using \((\partial_{S}T)_{P,x^{i}}=0\), and a new potential \(\Phi\), known as Duan's potential, is constructed.
\[\Phi=\frac{1}{\sin\theta}T(S,x^{i}). \tag{4}\]
The term '\(1/\sin\theta\)' is introduced for the sake of convenience. A two dimensional vector \(\phi=(\phi^{S},\phi^{\theta})\) is defined following the framework of Duan's \(\phi\)-mapping theory [34, 35] as
\[\phi^{S}=(\partial_{S}\Phi)_{\theta,x^{i}}\quad,\quad\phi^{\theta}=(\partial_ {\theta}\Phi)_{S,x^{i}}. \tag{5}\]
The presence of \(\theta\) in the vector field \(\phi\) ensures that the zero point of the vector field \(\phi\) is always at \(\theta=\pi/2\). The critical points can be calculated using this criterion. The topological current \(j^{\mu}\) is conserved, i.e., \(\partial_{\mu}j^{\mu}=0\), and it is nonvanishing only at the zero points of the vector field, where \(\phi^{a}(x^{i})=0\). This construction ensures the existence of a topological charge which, for a given parameter region \(\Sigma\), is equal to
\[Q=\int_{\Sigma}j^{0}d^{2}x=\sum_{i=1}^{N}\beta_{i}n_{i}=\sum_{i=1}^{N}w_{i}. \tag{6}\]
Here, \(w_{i}\), \(j^{0}\) and \(\beta_{i}\) are the winding number of \(i\)-th zero points of \(\phi\), the density of the topological current and the Hopf index respectively. Critical points with topological charges \(-1\) and \(+1\) are referred as conventional critical point and novel critical point respectively. The total topological charge of a black hole is computed as the sum of individual charges associated with each critical point. Following the work in [33], analysis of thermodynamic topology has been extended to a number of black holes [36, 37, 38, 39, 40, 41, 42].
An alternative way to apply topology in black hole thermodynamics has also been proposed in [43]. In this method, black hole solutions are regarded as defects in thermodynamic parameter spaces. These defects are then studied in terms of their winding numbers. The sign of the winding number of a defect has been linked to the thermodynamic stability of the corresponding black hole solution. The sum of the winding numbers is termed the topological number, based on which different black hole solutions are categorized. The analysis begins with the introduction of a generalized free energy \(\mathcal{F}\) defined as follows:
\[\mathcal{F}=E-\frac{S}{\tau}, \tag{7}\]
where \(E\) and \(S\) are the energy and entropy, respectively, and \(\tau\) is a quantity which has the dimension of time. A vector field \(\phi\) is defined from \(\mathcal{F}\) in the following way :
\[\phi=\Big{(}\frac{\partial\mathcal{F}}{\partial r_{+}},-\cot\Theta\csc\Theta \Big{)} \tag{8}\]
The zero point of the vector \(\phi\) is at \(\Theta=\pi/2\). The unit vector is defined as
\[n^{a}=\frac{\phi^{a}}{||\phi||}\quad(a=1,2)\quad\text{and}\quad\phi^{1}=\phi^ {r_{+}},\quad\phi^{2}=\phi^{\Theta}. \tag{9}\]
Corresponding to a given value of \(\tau\), the zero points of \(n^{1}\) are computed. The winding numbers of each of these zero points are calculated. The topological number of a black hole is obtained by summing over the individual winding numbers of all the black hole branches. Following [43], the study of black holes as topological defects has been extended to a number of black holes in [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54].
Motivated by all the above mentioned works, in this paper, we extend the study of thermodynamic topology to 4d dyonic AdS black hole in different ensembles. Our primary focus is to understand the ensemble dependent nature of thermodynamic topology. For this, we carry out our analysis in three different
ensembles: 1. canonical ensemble where both the electric and the magnetic charges are kept fixed, 2. mixed ensemble where electric potential and magnetic charge are kept fixed and 3. grand canonical ensemble where both electric and magnetic potentials are kept fixed. To begin with, in each of these ensembles, we figure out the critical points and compute their topological charges. Based on the sign of those charges, we classify them as conventional and novel critical points. Then we consider the black hole as a topological defect in each of these ensembles and find out its topological number, generation and annihilation points in the thermodynamic space.
This paper is organized as follows. In section II, we begin with 4d dyonic AdS black hole in canonical ensemble and study its thermodynamic topology. This is followed by similar studies in mixed ensemble in section III and grand canonical ensemble in section IV. We conclude with our findings in section V.
## II \(4\)D dyonic AdS black hole in canonical ensemble
The four dimensional, asymptotically anti-de Sitter, dyonic black hole solution has its origin in maximal gauged supergravity. Such a black hole carries both electric and magnetic charges. The dyonic black hole solution can be obtained by the reduction of five dimensional Kaluza-Klein theory and it has some very interesting properties [55; 56; 57; 58; 59; 60]. A simpler solution of dyonic AdS black hole can be obtained by varying the Reissner-Nordström action with a cosmological constant [61]. The four dimensional, asymptotically anti-de Sitter, dyonic black hole metric is given by :
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \phi^{2}, \tag{10}\]
where,
\[f(r)=\frac{q_{e}^{2}+q_{m}^{2}}{r^{2}}+\frac{r^{2}}{l^{2}}-\frac{2M}{r}+1, \tag{11}\]
Here, \(q_{e}\), \(q_{m}\), \(M\) and \(l\) are electric charge, magnetic charge, mass of the black hole and the AdS radius respectively. Thermodynamic pressure, \(P\), is related to the AdS radius as
\[P=\frac{3}{8\pi l^{2}}. \tag{12}\]
The mass, \(M\) and the entropy, \(S\) of the black hole are given by (in the following expressions, \(r_{+}\) denotes the horizon radius)
\[M=\frac{l^{2}q_{e}^{2}+l^{2}q_{m}^{2}+l^{2}r_{+}^{2}+r_{+}^{4}}{2l^{2}r_{+}}= \frac{3q_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2}}{6r_{+}} \tag{13}\]
\[S=\pi r_{+}^{2}, \tag{14}\]
### Topology of \(4\)d dyonic AdS black hole thermodynamics in canonical ensemble
To study the topology of dyonic AdS black hole thermodynamics, we write the temperature as a function of pressure, horizon radius, electric and magnetic charges.
\[T=\frac{\partial_{r_{+}}M}{\partial_{r_{+}}S}=\frac{8\pi Pr_{+}^{4}+r_{+}^{2} -q_{e}^{2}-q_{m}^{2}}{4\pi r_{+}^{3}}. \tag{15}\]
Use of the condition \(\left(\frac{\partial_{r_{+}}T}{\partial_{r_{+}}S}\right)_{q_{e},q_{m},P}=0\) leads us to an expression for pressure, \(P\).
\[P=\frac{r_{+}^{2}-3\left(q_{e}^{2}+q_{m}^{2}\right)}{8\pi r_{+}^{4}}, \tag{16}\]
Plugging \(P\) in (15), we get rid of the pressure term and the temperature, \(T\) takes the following form :
\[T(q_{e},q_{m},r_{+})=\frac{r_{+}^{2}-2\left(q_{e}^{2}+q_{m}^{2}\right)}{2\pi r_ {+}^{3}}. \tag{17}\]
A thermodynamic function \(\Phi\) is defined as,
\[\begin{split}\Phi&=\frac{1}{\sin\theta}T(q_{e},q_{ m},r_{+})\\ &=\frac{\csc\theta\left\{r_{+}^{2}-2\left(q_{e}^{2}+q_{m}^{2} \right)\right\}}{2\pi r_{+}^{3}},\end{split} \tag{18}\]
The vector components of the vector field \(\phi=\left(\phi^{r_{+}},\phi^{\theta}\right)\) are
\[\phi^{r_{+}}=\left(\frac{\partial\Phi}{\partial r_{+}}\right)_{q_{e},q_{m}, \theta}=-\frac{\csc\theta\left\{r_{+}^{2}-6\left(q_{e}^{2}+q_{m}^{2}\right) \right\}}{2\pi r_{+}^{4}}, \tag{19}\]
and
\[\phi^{\theta}=\left(\frac{\partial\Phi}{\partial\theta}\right)_{q_{e},q_{m}, r_{+}}=-\frac{\cot\theta\csc\theta\left\{r_{+}^{2}-2\left(q_{e}^{2}+q_{m}^{2} \right)\right\}}{2\pi r_{+}^{3}} \tag{20}\]
The normalized vector components are
\[\frac{\phi^{r_{+}}}{||\phi||}=\frac{6\left(q_{e}^{2}+q_{m}^{2} \right)-r_{+}^{2}}{\sqrt{r_{+}^{2}\cot^{2}(\theta)\left(r_{+}^{2}-2\left(q_{e }^{2}+q_{m}^{2}\right)\right){}^{2}+\left(r_{+}^{2}-6\left(q_{e}^{2}+q_{m}^{2} \right)\right){}^{2}}}, \tag{21}\]
and
\[\frac{\phi^{\theta}}{||\phi||}=-\frac{r_{+}\cot\theta\left\{r_{ +}^{2}-2\left(q_{e}^{2}+q_{m}^{2}\right)\right\}}{\sqrt{r_{+}^{2}\cot^{2} \theta\left\{r_{+}^{2}-2\left(q_{e}^{2}+q_{m}^{2}\right)\right\}{}^{2}+\left\{ r_{+}^{2}-6\left(q_{e}^{2}+q_{m}^{2}\right)\right\}{}^{2}}} \tag{22}\]
The normalized vector \(n=\left(\frac{\phi^{r_{+}}}{||\phi||},\frac{\phi^{\theta}}{||\phi||}\right)\) has been plotted in Figure 1. This figure shows the vector plot of \(n\) in a \(r_{+}\) vs \(\theta\) plane for dyonic AdS black hole. For this plot, we have fixed \(q_{e}=q_{m}=1\). The black dot represents the critical point (\(CP_{1}\)). To calculate the critical point we set \(\theta=\pi/2\) in (21) and equate this to zero. The critical point is located at \((r_{+},\theta)=(\sqrt{6(q_{e}^{2}+q_{m}^{2})},\pi/2)\) or at \((r_{+},\theta)=(2\sqrt{3},\pi/2)\) for \(q_{e}=q_{m}=1\).
For the calculation of the topological charge of the critical point, a contour \(C\) parametrized by \(\vartheta\in(0,2\pi)\) is defined [33] as follows:
\[\begin{cases}&r_{+}=a\cos\vartheta+r_{0},\\ &\theta=b\sin\vartheta+\frac{\pi}{2}.\end{cases} \tag{23}\]
We construct two contours \(C_{1}\) and \(C_{2}\) where the first contour encloses the critical point \(CP_{1}\) and the second contour is outside the critical point. For these contours we choose \((a,b,r_{0})=(0.6,0.2,2\sqrt{3})\) and \((0.6,0.4,5)\).
The deflection of the vector field \(n\) along the contour \(C\) is,
\[\Omega(\vartheta)=\int_{0}^{\vartheta}\epsilon_{ab}n^{a}\partial_{\vartheta}n^{b}\,d\vartheta. \tag{24}\]
The topological charge is then \(Q=\frac{1}{2\pi}\Omega(2\pi)\). For the critical point \(CP_{1}\) enclosed by the contour \(C_{1}\), the topological charge has been found to be \(Q_{CP_{1}}=-1\). This is a conventional critical point. Since the contour \(C_{2}\) does not enclose any critical point, it corresponds to zero topological charge. Thus, the total topological charge is \(Q=-1\).
The behaviour of \(\Omega\) is shown in Figure 2. The red curve corresponds to \(C_{1}\) and the blue curve corresponds to \(C_{2}\). The function \(\Omega(\vartheta)\) for \(C_{1}\) decreases non-linearly and reaches \(-2\pi\) at \(\vartheta=2\pi\). On the other hand, \(\Omega(\vartheta)\) reaches zero at \(\vartheta=2\pi\) for \(C_{2}\).
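To make the charge computation reproducible, the following short numerical sketch (ours, assuming numpy; the actual computation in the text is represented by Figures 1-2) evaluates the total deflection of \(\phi\) from (19)-(20) along the contours (23) and reports \(Q=\Omega(2\pi)/2\pi\).

```python
import numpy as np

# Numerical winding-number check for the canonical ensemble with
# q_e = q_m = 1, using the vector field components (19)-(20) and the
# contour parametrization (23).

qe = qm = 1.0
q2 = qe**2 + qm**2

def phi(r, th):
    pr = -(r**2 - 6*q2) / (2*np.pi * r**4 * np.sin(th))                   # eq. (19)
    pth = -(r**2 - 2*q2) * np.cos(th) / (2*np.pi * r**3 * np.sin(th)**2)  # eq. (20)
    return pr, pth

def charge(a, b, r0, npts=20000):
    t = np.linspace(0.0, 2*np.pi, npts)
    pr, pth = phi(a*np.cos(t) + r0, b*np.sin(t) + np.pi/2)
    ang = np.unwrap(np.arctan2(pth, pr))   # accumulated deflection Omega
    return (ang[-1] - ang[0]) / (2*np.pi)

print(charge(0.6, 0.2, 2*np.sqrt(3)))  # C_1 encloses CP_1: expect -1
print(charge(0.6, 0.4, 5.0))           # C_2 encloses no critical point: expect 0
```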
Figure 1: Plot of the normalized vector field \(n\) in \(r_{+}\) vs \(\theta\) plane for dyonic AdS black hole in canonical ensemble. The black dot represents the critical point.
\(4d\) dyonic AdS black hole in canonical ensemble has the following equation of state :
\[T=\frac{8\pi Pr_{+}^{4}+r_{+}^{2}-q_{e}^{2}-q_{m}^{2}}{4\pi r_{+}^{3}}, \tag{25}\]
The corresponding critical points are given by,
\[T_{c}=\frac{1}{3\sqrt{6}\pi\sqrt{q_{e}^{2}+q_{m}^{2}}},\quad P_{c}=\frac{1}{96 \pi q_{e}^{2}+96\pi q_{m}^{2}}\quad\text{and}\quad r_{c}=\sqrt{6(q_{e}^{2}+q_{ m}^{2})} \tag{26}\]
It can be clearly seen that the critical radius in (26) exactly matches the critical point obtained from thermodynamic topology, \((r_{+},\theta)=(\sqrt{6(q_{e}^{2}+q_{m}^{2})},\pi/2)\). As mentioned above this is a conventional critical point with topological charge \(-1\). To see the nature of the critical point we plot the phase structure (isobaric curves) around it in Figure 3. The location of the critical point in the isobaric curve is shown as a black dot.
The red curve is the isobaric curve for \(P=P_{c}\). The green curves above and below the red curve are respectively for \(P>P_{c}\) and \(P<P_{c}\). The blue dashed curve describes the extremal points and is plotted using (17). From Figure 3, it is observed that for \(P<P_{c}\), the small and large black hole phases are separated by the unstable region (the negative slope region of the isobaric curves or the region enclosed by the two extremal points corresponding to each isobaric curve). Different phases of the dyonic AdS black hole in canonical ensemble disappear at the critical point. Hence, the critical point \(CP_{1}\) can be thought of as a phase annihilation point.
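The critical data (26) can also be checked symbolically; the following is a brief sketch of ours (assuming sympy), imposing the inflection-point conditions \(\partial_{r_{+}}T=\partial_{r_{+},r_{+}}T=0\) on the equation of state (25).

```python
import sympy as sp

# Symbolic check of the critical values (26) from the equation of state (25).
r, P, qe, qm = sp.symbols('r P q_e q_m', positive=True)
T = (8*sp.pi*P*r**4 + r**2 - qe**2 - qm**2) / (4*sp.pi*r**3)

rc = sp.solve(sp.diff(T, r, 2), r)[0]           # sqrt(6*(q_e**2 + q_m**2))
Pc = sp.solve(sp.diff(T, r).subs(r, rc), P)[0]  # 1/(96*pi*(q_e**2 + q_m**2))
Tc = sp.simplify(T.subs({r: rc, P: Pc}))        # equals T_c of (26) after simplification
print(rc, Pc, Tc)
```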
### Dyonic AdS black hole solution as topological thermodynamic defects in canonical ensemble
Now, we proceed to study the dyonic AdS black hole solution in canonical ensemble as topological thermodynamic defects. Using the mass and entropy of the black hole from (13) and (14) in (7), the generalized free energy is found to be,
\[\mathcal{F}=\frac{3q_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2}}{6r_{+}}- \frac{\pi r_{+}^{2}}{\tau}. \tag{27}\]
The vector field components of the vector given by (8) are
\[\phi^{r_{+}}=\frac{1}{2}-\frac{q_{e}^{2}+q_{m}^{2}}{2r_{+}^{2}}+4\pi Pr_{+}^{2 }-\frac{2\pi r_{+}}{\tau}, \tag{28}\]
Figure 3: Isobaric curves (red and green) of dyonic AdS black hole in canonical ensemble. Black dot represents the critical point.
\[\phi^{\Theta}=-\cot\Theta\csc\Theta. \tag{29}\]
The corresponding unit vectors are
\[n^{1}=\frac{4\pi r_{+}^{3}\left(2Pr_{+}\tau-1\right)-\tau\left(q_{e}^{2}+q_{m}^{ 2}-r_{+}^{2}\right)}{r_{+}^{2}\tau\sqrt{\frac{\left\{\tau\left(q_{e}^{2}+q_{m} ^{2}-r_{+}^{2}\right)+4\pi r_{+}^{3}(1-2Pr_{+}\tau)\right\}^{2}}{r_{+}^{4}\tau^ {2}}+4\cot^{2}\Theta\csc^{2}\Theta}}, \tag{30}\]
and
\[n^{2}=-\frac{\cot\Theta\csc\Theta}{\sqrt{\frac{\left\{\tau\left(q_{e}^{2}+q_{m} ^{2}-r_{+}^{2}\right)+4\pi r_{+}^{3}(1-2Pr_{+}\tau)\right\}^{2}}{4r_{+}^{4} \tau^{2}}+\cot^{2}\Theta\csc^{2}\Theta}}. \tag{31}\]
These unit vectors are plotted and used to locate the zero points by setting \(\Theta=\pi/2\) in \(n^{1}\) (see (30)) and equating it to zero. For example, setting \(q_{e}/r_{0}=1\), \(q_{m}/r_{0}=1\), \(Pr_{0}^{2}=0.0002\) and \(\tau/r_{0}=30\), we can find one zero point (\(ZP_{1}\)) located at \((r_{+}/r_{0},\Theta)=(80.8742,\pi/2)\). Here, \(r_{0}\) is an arbitrary length scale which is determined by the size of a cavity that surrounds the black hole. The value of pressure is taken below the critical pressure \(P_{c}\). The representation of the unit vectors along with the zero point is shown in Figure 4a. The winding number or the topological charge corresponding to this zero point is computed following the prescription stated in the previous section and found to be \(w=+1\). Similarly, keeping the same charge and pressure configuration, corresponding to \(\tau/r_{0}=50\), we find three zero points \(ZP_{2}\), \(ZP_{3}\) and \(ZP_{4}\) with winding numbers \(+1\), \(-1\) and \(+1\) respectively. These are shown in Figure 4b. For \(\tau/r_{0}=100\), a solitary zero point \(ZP_{5}\) with winding number \(+1\) is observed (Figure 4c).
An analytic expression for \(\tau\) corresponding to zero points can be obtained by setting \(\phi^{r_{+}}=0\).
\[\tau=\frac{4\pi r_{+}^{3}}{8\pi Pr_{+}^{4}+r_{+}^{2}-q_{e}^{2}-q_{m}^{2}}. \tag{32}\]
A plot of \(r_{+}\) vs \(\tau\) obtained above is shown in Figure 5. The points on this curve are the zero points of \(\phi^{r_{+}}\). Here, we have fixed \(q_{e}/r_{0}=1\), \(q_{m}/r_{0}=1\) and \(Pr_{0}^{2}=0.0002\) (below the critical pressure \(P_{c}\)).
In Figure 5, three different black hole branches are clearly visible. The branch \(\tau<\tau_{b}\) corresponds to the large black hole region. The winding number for any zero point on this branch is found to be \(w=+1\). Similarly, winding number \(w=+1\) is also observed for any zero point on the branch \(\tau>\tau_{a}\) which corresponds to the small black hole region. The branch \(\tau_{a}<\tau<\tau_{b}\) represents the intermediate black hole region and winding number for any zero point on this branch is equal to \(w=-1\). The topological number is, hence, \(W=+1-1+1=+1\). We explicitly computed the specific heats at the three branches and found that the branches with winding number \(+1\) have positive specific heat (thermodynamically stable) and the branch with winding number \(-1\) has negative specific heat (thermodynamically unstable).
Finally, we find the generation/annihilation points by using the condition \(\partial_{r_{+}}\mathcal{F}=\partial_{r_{+},r_{+}}\mathcal{F}=0\). For \(q_{e}/r_{0}=1\), \(q_{m}/r_{0}=1\), \(Pr_{0}^{2}=0.0002\), we get the generation and annihilation points at \(\tau/r_{0}=\tau_{a}/r_{0}=44.1585\) and \(\tau/r_{0}=\tau_{b}/r_{0}=89.0811\) respectively which are shown as black dots in Figure 5.
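Since the generation/annihilation points are the extrema of the curve (32), they are easy to reproduce numerically; here is a minimal sketch of ours (assuming numpy and scipy) for the quoted parameter values.

```python
import numpy as np
from scipy.optimize import brentq

# Extrema of tau(r_+) in (32) for q_e/r0 = q_m/r0 = 1 and P r0^2 = 0.0002.
# Setting d tau / d r_+ = 0 reduces to 8*pi*P*r^4 - r^2 + 3*(q_e^2 + q_m^2) = 0.
qe = qm = 1.0
P = 2e-4
q2 = qe**2 + qm**2

def tau(r):
    return 4*np.pi*r**3 / (8*np.pi*P*r**4 + r**2 - q2)

def g(r):  # proportional to -d(tau)/d(r); its roots are the extrema
    return 8*np.pi*P*r**4 - r**2 + 3*q2

r_a = brentq(g, 2.0, 5.0)    # small-radius extremum
r_b = brentq(g, 10.0, 20.0)  # large-radius extremum
print(tau(r_a), tau(r_b))    # approx 44.1585 and 89.0811
```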
For a value of pressure, \(Pr_{0}^{2}=0.01\), which is above the critical pressure \(P_{c}\), and \(q_{e}/r_{0}=q_{m}/r_{0}=1\), the plot of \(r_{+}\) vs \(\tau\) is shown in Figure 6b. In this case, the plot exhibits only one branch, corresponding to the stable black hole region with positive specific heat. The winding number of any zero point on this branch is computed to be \(w=+1\). The topological number is, hence, \(W=+1\). Notably, we do not find any generation/annihilation point in this case. The zero point for \(\tau/r_{0}=30\), \(q_{m}/r_{0}=q_{e}/r_{0}=1,Pr_{0}^{2}=0.01\) is shown in Figure 6(a). Thus, the topological number of \(4d\) dyonic AdS black hole in canonical ensemble is not altered by a variation in pressure.
We repeated our analysis by changing the values of \(q_{e}\) and \(q_{m}\). We found that the topological number was always equal to \(W=+1\) for all the charge configurations. As an example, the \(r_{+}\) vs \(\tau\) curve for \(q_{e}/r_{0}=0.1\), \(q_{m}/r_{0}=0.1\) and \(Pr_{0}^{2}=0.04\) is shown in Figure 7. The topological number of \(4d\) dyonic AdS black hole in canonical ensemble is not influenced by a variation in charge configuration.
## III Dyonic AdS black hole in mixed ensemble
In mixed ensemble, the electric potential \(\phi_{e}\) and the magnetic charge \(q_{m}\) are kept constant. The electric potential \(\phi_{e}\) is defined as
\[\phi_{e}=\frac{q_{e}}{r_{+}}, \tag{33}\]
The mass and temperature are then modified as
\[M=\frac{3r_{+}^{2}\phi_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2}}{6r_{+}}, \tag{34}\]
and
\[T=\frac{8\pi Pr_{+}^{4}+r_{+}^{2}-r_{+}^{2}\phi_{e}^{2}-q_{m}^{2}}{4\pi r_{+}^ {3}}. \tag{35}\]
Figure 5: The zero points of \(\phi^{r+}\) in \(\tau/r_{0}\) vs \(r_{+}/r_{0}\) plane for dyonic AdS black hole in canonical ensemble for pressure less than the critical pressure \(P_{c}\).
Figure 6: Plot of unit vector \(n=(n^{1},n^{2})\) and zero point of \(\phi^{r+}\) for pressure \(Pr_{0}^{2}=0.01\) (above the critical pressure \(P_{c}\)).
### Topology of dyonic AdS black hole in mixed ensemble
In this section, we study the thermodynamic topology of \(4d\) dyonic AdS black hole in mixed ensemble. We begin by eliminating pressure from (35) which is then simplified to
\[T(\phi_{e},q_{m},r_{+})=-\frac{r_{+}^{2}\left(\phi_{e}^{2}-1\right)+2q_{m}^{2}}{ 2\pi r_{+}^{3}}, \tag{36}\]
and the thermodynamic function \(\Phi\) becomes
\[\Phi=\frac{1}{\sin\theta}T(\phi_{e},q_{m},r_{+}) =-\frac{\csc\theta\left\{r_{+}^{2}\left(\phi_{e}^{2}-1\right)+2q_{ m}^{2}\right\}}{2\pi r_{+}^{3}} \tag{37}\]
The vector components of \(\phi=(\phi^{r_{+}},\phi^{\theta})\) are given by,
\[\phi^{r_{+}}=\frac{\csc\theta\left\{r_{+}^{2}\left(\phi_{e}^{2}-1 \right)+6q_{m}^{2}\right\}}{2\pi r_{+}^{4}}, \tag{38}\]
and
\[\phi^{\theta}=\frac{\cot\theta\csc\theta\left\{r_{+}^{2}\left( \phi_{e}^{2}-1\right)+2q_{m}^{2}\right\}}{2\pi r_{+}^{3}} \tag{39}\]
The vector \(\phi\) is normalized and the components are
\[\frac{\phi^{r_{+}}}{||\phi||}=\frac{r_{+}^{2}\left(\phi_{e}^{2}-1 \right)+6q_{m}^{2}}{\sqrt{r_{+}^{2}\cot^{2}\theta\left\{r_{+}^{2}\left(\phi_{ e}^{2}-1\right)+2q_{m}^{2}\right\}^{2}+\left\{r_{+}^{2}\left(\phi_{e}^{2}-1 \right)+6q_{m}^{2}\right\}^{2}}}, \tag{40}\]
and
\[\frac{\phi^{\theta}}{||\phi||}=\frac{r_{+}\cot\theta\left(r_{+}^{ 2}\left(\phi_{e}^{2}-1\right)+2q_{m}^{2}\right)}{\sqrt{r_{+}^{2}\cot^{2}\theta \left\{r_{+}^{2}\left(\phi_{e}^{2}-1\right)+2q_{m}^{2}\right\}^{2}+\left\{r_{+ }^{2}\left(\phi_{e}^{2}-1\right)+6q_{m}^{2}\right\}^{2}}}. \tag{41}\]
Now, we plot the normalized vector \(n=\left(\frac{\phi^{r_{+}}}{||\phi||},\frac{\phi^{\theta}}{||\phi||}\right)\) in \(r_{+}\) vs \(\theta\) plane by fixing \(q_{m}=1\) and \(\phi_{e}=1/2\) (see Figure 8). Here, we find a single critical point \(CP_{2}\) at \((r_{+},\theta)=(\sqrt{6}q_{m}/\sqrt{1-\phi_{e}^{2}},\pi/2)\) represented by the black dot.
We draw two contours \(C_{3}\) and \(C_{4}\) for \((a,b,r_{0})=(1.4,0.4,2\sqrt{2})\) and \((1.2,0.5,5.8)\). The contour \(C_{3}\) encloses the critical point \(CP_{2}\) whereas \(C_{4}\) does not enclose any critical point. The topological charge corresponding to the contour \(C_{3}\) is \(-1\) which implies that it is a conventional critical point. The contour \(C_{4}\) does not enclose any critical point and hence the topological charge is \(0\). The total topological charge is \(-1\).
The deflection along the contours \(C_{3}\) and \(C_{4}\) is shown in Figure 9. The equation of state for the black hole in mixed ensemble is given by,
\[T=\frac{8\pi Pr_{+}^{4}+r_{+}^{2}-r_{+}^{2}\phi_{e}^{2}-q_{m}^{2}}{4\pi r_{+}^ {3}}. \tag{42}\]
The critical values are
\[T_{c}=\frac{\left(1-\phi_{e}^{2}\right)^{3/2}}{3\sqrt{6}\pi q_{m}},\quad P_{c }=\frac{\left(\phi_{e}^{2}-1\right){}^{2}}{96\pi q_{m}^{2}}\quad\mbox{and} \quad r_{c}=\frac{\sqrt{6}q_{m}}{\sqrt{1-\phi_{e}^{2}}}. \tag{43}\]
Figure 8: Plot of the normalized vector field \(n\) in \(r_{+}\) vs \(\theta\) plane for dyonic AdS black hole in mixed ensemble. The black dot represents the critical point.
In this ensemble also, we see that the critical radius given in (43) is exactly the same as the conventional critical point which is \((r_{+},\theta)=(\sqrt{6}q_{m}/\sqrt{1-\phi_{e}^{2}},\pi/2)\). We plot the \(T\) vs \(r_{+}\) isobaric curve around the critical point in Figure 10.
This figure shows that the critical point (black dot) is on the isobaric curve for \(P=P_{c}\) (red curve). The blue curve gives the extremal points of the isobaric curves and is plotted using (36). Similar to the canonical ensemble case, the number of phases clearly decreases with the increase of pressure \(P\), and the distinct phases disappear at the critical point \(CP_{2}\). This implies that the critical point \(CP_{2}\) is a phase annihilation point.
### Dyonic AdS black hole solution as topological thermodynamic defects in mixed ensemble
We now study the dyonic AdS black hole in mixed ensemble as a topological defect. As usual, we start with the generalized free energy potential
\[\mathcal{F}=E-\frac{S}{\tau}-q_{e}\phi_{e}. \tag{44}\]
In this ensemble
\[E=\frac{3r_{+}^{2}\phi_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2}}{6r_{+}},\quad S=\pi r_{+}^{2}\quad\text{and}\quad q_{e}=r_{+}\phi_{e} \tag{45}\]
Hence, (44) gives
\[\mathcal{F}=\frac{3r_{+}^{2}\phi_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2 }}{6r_{+}}-r_{+}\phi_{e}^{2}-\frac{\pi r_{+}^{2}}{\tau}. \tag{46}\]
From (8), the vector components can easily be worked out resulting
\[\phi^{r_{+}}=\frac{6r_{+}\phi_{e}^{2}+32\pi Pr_{+}^{3}+6r_{+}}{6r_{+}}-\frac{3 r_{+}^{2}\phi_{e}^{2}+3q_{m}^{2}+8\pi Pr_{+}^{4}+3r_{+}^{2}}{6r_{+}^{2}}- \phi_{e}^{2}-\frac{2\pi r_{+}}{\tau}, \tag{47}\]
and
\[\phi^{\Theta}=-\cot\Theta\csc\Theta. \tag{48}\]
Figure 10: Isobaric curves (red and green) of dyonic AdS black hole in mixed ensemble. Red curve is the isobaric curve for \(P=P_{c}\). The isobaric curves above and below the red curve is for \(P>P_{c}\) and \(P<P_{c}\) respectively. Black dot represents the critical point.
We plot the normalized vector and figure out the zero points. The corresponding winding numbers are also calculated. For different values of \(\tau/r_{0}\) the zero points are shown in Figure 11. Here, we have set \(\phi_{e}=1/2\), \(q_{m}/r_{0}=1\), \(Pr_{0}^{2}=0.001\) (pressure value below the critical pressure \(P_{c}\)). While for \(\tau/r_{0}=35\), we find one zero point \(ZP_{6}\) with winding number \(+1\), for \(\tau/r_{0}=45\), we locate three zero points \(ZP_{7}\), \(ZP_{8}\) and \(ZP_{9}\) with winding numbers \(+1,-1\) and \(+1\) respectively. For \(\tau/r_{0}=55\), we again encounter one zero point \(ZP_{10}\) with winding number \(+1\).
The zero points of the component \(\phi^{r_{+}}=0\) are given by the equation
\[\tau=\frac{4\pi r_{+}^{3}}{8\pi Pr_{+}^{4}+r_{+}^{2}-r_{+}^{2}\phi_{e}^{2}-q_{m }^{2}}. \tag{49}\]
For \(\phi_{e}=1/2\), \(q_{m}/r_{0}=1\), \(Pr_{0}^{2}=0.001\) the resulting \(r_{+}\) vs \(\tau\) graph is plotted in Figure 12. Similar to the canonical ensemble case, below critical pressure, we have three branches of the \(\tau\) curve in the regions \(\tau<\tau_{b}\), \(\tau_{a}<\tau<\tau_{b}\) and \(\tau>\tau_{a}\). The first and third branches are the large black hole and small black hole regions, and the zero points on these two branches have \(w=+1\) and positive specific heat. The other branch represents the intermediate black hole region, and the zero points in this region have \(w=-1\) and negative specific heat. Thus, the topological number is \(W=+1-1+1=+1\). The generation and annihilation points for \(\phi_{e}=1/2\), \(q_{m}/r_{0}=1\), \(Pr_{0}^{2}=0.001\) are found at \(\tau/r_{0}=\tau_{a}/r_{0}=41.5688\) and \(\tau/r_{0}=\tau_{b}/r_{0}=46.9484\) respectively.
Above critical pressure \(P_{c}\), the same plot is shown in Figure 12(b). Here, we chose \(Pr_{0}^{2}=0.02\), \(q_{m}=1\) and \(\phi_{e}=1/2\). In this case, the unstable region disappears and the \(\tau(r_{+})\) curve corresponds to stable black hole region. The winding number for each point on the curve is \(w=+1\) and the topological number is hence \(W=+1\). The unit vector and the zero point for \(\tau/r_{0}=35\), \(Pr_{0}^{2}=0.02\), \(q_{m}/r_{0}=1\) and \(\phi_{e}=1/2\) is shown in Figure 12(a).
We repeated the exercise altering the values of \(\phi_{e}\) and \(q_{m}/r_{0}\) and observed that the topological number for all the combinations was identical and equal to \(W=+1\). The zero points of \(\phi^{r_{+}}\) are shown in Figure 14 for \(\phi_{e}=0.1\), \(q_{m}/r_{0}=0.1\) and \(Pr_{0}^{2}=0.04\).
## IV Dyonic AdS black hole in grand canonical ensemble
In the grand canonical ensemble, both the electric potential \(\phi_{e}\) and the magnetic potential \(\phi_{m}\) are kept fixed.
\[\phi_{e}=\frac{q_{e}}{r_{+}},\quad\text{and}\quad\phi_{m}=\frac{q_{m}}{r_{+}}. \tag{50}\]
The relevant thermodynamic parameters of dyonic AdS black hole in grand canonical ensemble are given by,
\[M=\frac{1}{6}r_{+}\big{\{}3\left(\phi_{e}^{2}+\phi_{m}^{2}+1\right)+8\pi Pr_{ +}^{2}\big{\}}, \tag{51}\]
\[S=\pi r_{+}^{2}, \tag{52}\]
and
\[T=\frac{8\pi Pr_{+}^{2}+1-\phi_{e}^{2}-\phi_{m}^{2}}{4\pi r_{+}}. \tag{53}\]
### Topology of dyonic AdS black hole in grand canonical ensemble
Now, we proceed to study the topology of dyonic AdS black hole thermodynamics in grand canonical ensemble. First, we eliminate pressure from (53) using \(\left(\frac{\partial_{r_{+}}T}{\partial_{r_{+}}S}\right)_{\phi_{e},\phi_{m},P}=0\).
\[T=\frac{1-{\phi_{e}}^{2}-{\phi_{m}}^{2}}{2\pi r_{+}}. \tag{54}\]
The thermodynamics function \(\Phi=T/\sin\theta\) is, therefore,
\[\Phi=\frac{\csc\theta\left(-{\phi_{e}}^{2}-{\phi_{m}}^{2}+1\right)}{2\pi r_{+}}. \tag{55}\]
The components of vector field \(\phi=(\phi^{r_{+}},\phi^{\theta})\) are
\[\phi^{r_{+}}=\frac{\csc\theta\left({\phi_{e}}^{2}+{\phi_{m}}^{2}-1\right)}{2 \pi r_{+}^{2}}, \tag{56}\]
and
\[\phi^{\theta}=\frac{\cot\theta\csc\theta\left({\phi_{e}}^{2}+{\phi_{m}}^{2}-1 \right)}{2\pi r_{+}}. \tag{57}\]
The normalized vector components are
\[\frac{\phi^{r_{+}}}{||\phi||}=\frac{1}{\sqrt{r_{+}^{2}\cot^{2}( \theta)+1}}, \tag{58}\]
and
\[\frac{\phi^{\theta}}{||\phi||}=\frac{r_{+}\cot(\theta)}{\sqrt{r_{ +}^{2}\cot^{2}(\theta)+1}}. \tag{59}\]
On following the procedure discussed earlier, we find that this system does not have any critical points (see Figure 15(a)). Also, since the contour does not enclose any critical point, the \(\Omega(\vartheta)\) function reaches \(0\) at \(\vartheta=2\pi\) (see Figure 15(b)).
Figure 15: Plot of vector \(n\) in \(r_{+}\) vs \(\theta\) plane and the plot of deflection angle \(\Omega(\vartheta)\) along \(C_{5}\).
### Dyonic AdS black hole solution as topological thermodynamic defects in grand canonical ensemble
Now, we identify the dyonic AdS black hole as a topological defect in the thermodynamic space. In this ensemble, we use the following generalized free energy:
\[\mathcal{F}=E-\frac{S}{\tau}-q_{e}\phi_{e}-q_{m}\phi_{m}. \tag{60}\]
Following (8), we calculate the vector components:
\[\phi^{r_{+}}=\frac{1}{6}\left(3\phi_{e}^{2}+3\phi_{m}^{2}+8\pi Pr_{+}^{2}+3 \right)-\phi_{e}^{2}-\phi_{m}^{2}+\frac{8}{3}\pi Pr_{+}^{2}-\frac{2\pi r_{+}}{ \tau}, \tag{61}\]
and
\[\phi^{\Theta}=-\cot\Theta\csc\Theta. \tag{62}\]
For \(\tau/r_{0}=110\), \(\phi_{e}=0.1\), \(\phi_{m}=0.1\) and \(Pr_{0}^{2}=0.0001\), we find two zero points \(ZP_{11}\) and \(ZP_{12}\) with winding numbers \(-1\) and \(+1\) respectively. These are shown in Figure 16. Hence, the topological number of dyonic AdS black hole in grand canonical ensemble is \(0\).
The expression for \(\tau\) representing zero points is given by
\[\tau=\frac{4\pi r_{+}}{8\pi Pr_{+}^{2}+1-\phi_{e}^{2}-\phi_{m}^{2}} \tag{63}\]
For \(\phi_{e}=0.1\), \(\phi_{m}=0.1\) and \(Pr_{0}^{2}=0.0001\), \(r_{+}/r_{0}\) vs \(\tau/r_{0}\) plot is shown in Figure 17.
In this case, unlike what we saw in canonical and mixed ensembles, the plot exhibits two black hole branches in the regions \(\tau<\tau_{a}\) and \(\tau>\tau_{a}\). The former branch represents the unstable black hole region whereas the latter branch represents the stable black hole region. Any zero point in the unstable and stable region has winding numbers \(w=-1\) and \(w=+1\) respectively. The topological number is hence \(W=0\),
which is different from what we had in canonical and mixed ensembles. The generation point is located at \(\tau/r_{0}=\tau_{a}/r_{0}=126.604\).
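The generation point here even has a closed form; a quick check of ours (assuming numpy) under the stated parameter values:

```python
import numpy as np

# tau(r_+) in (63) attains its maximum where d tau / d r_+ = 0.  Writing
# b = 1 - phi_e^2 - phi_m^2 (with b > 0), this gives r_* = sqrt(b/(8*pi*P))
# and tau_a = 2*pi / sqrt(8*pi*P*b).
phie = phim = 0.1
P = 1e-4
b = 1 - phie**2 - phim**2

r_star = np.sqrt(b / (8*np.pi*P))
print(4*np.pi*r_star / (8*np.pi*P*r_star**2 + b))  # tau at r_*
print(2*np.pi / np.sqrt(8*np.pi*P*b))              # approx 126.604
```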
Interestingly, for \(\phi_{e}^{2}+\phi_{m}^{2}>1\), we get only one black hole branch with winding number \(+1\) (see Figure 18). The topological number, therefore, is \(+1\). In this case, we do not see any generation or annihilation point.
Thus, in the grand canonical ensemble, for the \(4\)d dyonic AdS black hole, we have two topological numbers. When \(\phi_{e}^{2}+\phi_{m}^{2}<1\), the topological number is \(0\), in contrast to what we had in canonical and mixed ensembles. When \(\phi_{e}^{2}+\phi_{m}^{2}>1\), the topological number is \(1\), the same as in canonical and mixed ensembles.
Conclusion
In this work, we studied the thermodynamic topology of \(4\)d dyonic AdS black hole in canonical, mixed and grand canonical ensembles. Canonical, mixed and grand canonical ensembles were formed by fixing electric and magnetic charges; magnetic charge and electric potential; and potentials corresponding to both electric and magnetic charges, respectively. In all the three ensembles, we evaluated the topological charges of the critical points in their thermodynamic spaces. We observed the presence of a solitary conventional critical point with topological charge \(-1\) in both canonical and mixed ensembles. Contrastingly, in the grand canonical ensemble, no critical point was found. Next, we recognized the dyonic AdS black hole as topological defects in thermodynamic space and analyzed its local and global topology by calculating the winding numbers at the defects. We found that, in both canonical and mixed ensembles, the total topological charge was equal to \(1\), which was not altered by changes in thermodynamic parameters. In both these ensembles, either one generation point and one annihilation point (below critical pressure) or no generation/annihilation points (above critical pressure) were seen. In the grand canonical ensemble, depending on the values of potentials, the total topological charge was found to be either equal to \(0\) (when \(\phi_{e}^{2}+\phi_{m}^{2}<1\)) or \(1\) (when \(\phi_{e}^{2}+\phi_{m}^{2}>1\)). In this ensemble, we found either one generation point (when \(\phi_{e}^{2}+\phi_{m}^{2}<1\)) or no generation/annihilation point (when \(\phi_{e}^{2}+\phi_{m}^{2}>1\)).
From our analysis, we conclude that \(4\)d dyonic AdS black hole in canonical and mixed ensembles can be placed in the same thermodynamic topological class. However, the thermodynamic topology of \(4\)d dyonic AdS black hole in grand canonical ensemble is different from those in the other two ensembles. Or in other words, the topological class of \(4\)d dyonic AdS black hole is ensemble dependent. It will be interesting to extend the study of ensemble dependent thermodynamic topology to other black holes with rich phase structures. We plan to do so in our future works. |
2310.16010 | More properties of optimal polynomial approximants in Hardy spaces | This work studies optimal polynomial approximants (OPAs) in the classical
Hardy spaces on the unit disk, $H^p$ ($1 < p < \infty$). For fixed $f\in H^p$
and $n\in\mathbb{N}$, the OPA of degree $n$ associated to $f$ is the polynomial
which minimizes the quantity $\|qf-1\|_p$ over all complex polynomials $q$ of
degree less than or equal to $n$. We begin with some examples which illustrate,
when $p\neq2$, how the Banach space geometry makes these problems interesting.
We then weave through various results concerning limits and roots of these
polynomials, including results which show that OPAs can be witnessed as
solutions of certain fixed point problems. Finally, using duality arguments, we
provide several bounds concerning the error incurred in the OPA approximation. | Raymond Cheng, Christopher Felder | 2023-10-24T17:04:38Z | http://arxiv.org/abs/2310.16010v1 | # More properties of optimal polynomial approximants in Hardy spaces
###### Abstract.
This work studies optimal polynomial approximants (OPAs) in the classical Hardy spaces on the unit disk, \(H^{p}\) (\(1<p<\infty\)). For fixed \(f\in H^{p}\) and \(n\in\mathbb{N}\), the OPA of degree \(n\) associated to \(f\) is the polynomial which minimizes the quantity \(\|qf-1\|_{p}\) over all complex polynomials \(q\) of degree less than or equal to \(n\). We begin with some examples which illustrate, when \(p\neq 2\), how the Banach space geometry makes these problems interesting. We then weave through various results concerning limits and roots of these polynomials, including results which show that OPAs can be witnessed as solutions of certain fixed point problems. Finally, using duality arguments, we provide several bounds concerning the error incurred in the OPA approximation.
Key words and phrases: Optimal polynomial approximant, Pythagorean inequality, duality, fixed point. 2020 Mathematics Subject Classification: Primary 30E10; Secondary 46E30
###### Contents
* 1 Introduction
* 2 Preliminaries & Geometric Oddities
* 3 Limits and Continuity
* 3.1 Continuity
* 3.2 Roots of OPAs
* 3.3 Fixed point approach
* 4 Error Bounds and Duality Arguments
## 1. Introduction
This paper concerns a minimization problem in classical Hardy spaces on the unit disk \(\mathbb{D}\),
\[H^{p}:=\left\{f\in\operatorname{Hol}(\mathbb{D}):\sup_{0\leq r<1}\int_{0}^{2\pi }\left|f(re^{i\theta})\right|^{p}\,d\theta<\infty\right\},\]
where \(\operatorname{Hol}(\mathbb{D})\) denotes the collection of holomorphic functions on \(\mathbb{D}\). As is standard, for \(1\leq p<\infty\), we denote the norm of \(f\in H^{p}\) as
\[\|f\|_{p}:=\left(\sup_{0\leq r<1}\int_{0}^{2\pi}\left|f(re^{i\theta})\right|^{ p}\,d\theta\right)^{1/p}.\]
When \(p=\infty\), we have the set of bounded analytic functions
\[H^{\infty}:=\left\{f\in\operatorname{Hol}(\mathbb{D}):\sup_{z\in\mathbb{D}}|f (z)|<\infty\right\},\]
with corresponding norm
\[\|f\|_{\infty}:=\sup_{z\in\mathbb{D}}|f(z)|.\]
We will frequently view these spaces as subspaces of the Lebesgue spaces \(L^{p}:=L^{p}(\mathbb{T},dm)\), where \(dm\) is normalized Lebesgue measure on the unit circle \(\mathbb{T}\).
Our main objects of study in this paper are _optimal polynomial approximants_ (OPAs) in Hardy spaces; these are solutions to the minimization problem
\[\inf_{q\in\mathscr{P}_{n}}\|qf-1\|_{p},\]
where \(f\in H^{p}\) and \(\mathscr{P}_{n}\) is the set of complex polynomials of degree less than or equal to \(n\). We point out that the infimum above is actually a minimum. For in a uniformly convex Banach space, any closed subspace enjoys a unique nearest point property. In our context, this means the problem of finding a degree \(n\) OPA can be restated as finding the solution to
\[\inf_{h\in f\mathscr{P}_{n}}\|h-1\|_{p},\]
which is given by the _metric projection_ of \(1\) on the subspace \(f\mathscr{P}_{n}\). A priori, the minimizing argument may not be unique. However, when \(1<p<\infty\), it is well-known that there is, in fact, a unique minimizing polynomial (due to the uniform convexity of the space). When \(p\neq 2\), the projection is _non-linear_, which starkly contrasts the Hilbert space setting.
For \(p\neq 2\), this problem was originally studied by Centner [10], and considered again in an additional paper by Centner and the authors [11]. We will give some background now, but point the reader to [10, 11] for more thorough exposition, and to [5, 6, 7, 8, 9, 10, 15, 17, 18] for relevant work on OPAs in various Hilbert spaces.
When \(p=2\), this problem was first studied by engineers in work related to digital filter design. The problem reemerged later as a potential way to study cyclic vectors for the forward shift (see [4] for historical discussion). This renewed interest is evidenced by many papers over the last decade (again, see [4], as well as [1, 3] for recent results in the weighted and non-commutative settings, respectively). Other than the work in [14], these results
concern only Hilbert spaces, where the geometry makes computation of OPAs an explicit (but non-trivial!) linear algebra exercise. For example, in \(H^{2}\), the coefficients (say, \(a_{0},\ldots,a_{n}\)) of the OPA of degree \(n\) associated to a function \(f\in H^{2}\) can be found via the linear system
\[\big{(}\langle S^{j}f,S^{k}f\rangle_{H^{2}}\big{)}_{0\leq j,k\leq n}\,(a_{0}, \ldots,a_{n})^{T}=\Big{(}\overline{f(0)},0,\ldots,0\Big{)}^{T}\,, \tag{1.0.1}\]
where \(S\) is the forward shift operator, given by \(f(z)\mapsto zf(z)\) (see, e.g., [15, Theorem 2.1]).
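When \(f\) is a polynomial, each entry \(\langle S^{j}f,S^{k}f\rangle_{H^{2}}\) is a finite sum of Taylor coefficients, so (1.0.1) can be assembled and solved directly; the following is a small sketch of ours (assuming numpy).

```python
import numpy as np

# Solve the linear system (1.0.1) for a polynomial f given by its Taylor
# coefficients.  The Gram matrix entry <S^j f, S^k f> is the inner product
# of the coefficient sequences of z^j f and z^k f.

def opa_h2(f_coeffs, n):
    f = np.asarray(f_coeffs, dtype=complex)
    rows = np.zeros((n + 1, len(f) + n), dtype=complex)
    for j in range(n + 1):
        rows[j, j:j + len(f)] = f          # coefficients of S^j f
    G = rows.conj() @ rows.T               # G[k, j] = <S^j f, S^k f>
    rhs = np.zeros(n + 1, dtype=complex)
    rhs[0] = np.conj(f[0])                 # conj(f(0)), per (1.0.1)
    return np.linalg.solve(G, rhs)

print(opa_h2([1, -1], 1))   # f(z) = 1 - z: returns (2/3, 1/3)
```

For \(f(z)=1-z\) and \(n=1\) the system is \(\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}(a_{0},a_{1})^{T}=(1,0)^{T}\), giving \(q_{1,2}[f](z)=\tfrac{2}{3}+\tfrac{1}{3}z\).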
In the Banach space setting (e.g. \(H^{p}\), \(p\neq 2\)), there is not a direct analogue of this exercise, and the non-linearity of the metric projection makes explicit calculation of OPAs a highly non-trivial task. In the next section, we will state precisely the definition of optimal polynomial approximant. Before moving there, let us give an outline of the paper:
* Section 2 will formally introduce the OPA problem, give some background information concerning the geometry of Banach spaces, and provide some examples illustrating how this geometry differs from that of Hilbert space.
* The results of Section 3 are broken into three parts:
* Convergence of OPAs under variance of the parameters of the OPA problem (e.g., \(n,p\), and \(f\)).
* The location of roots of OPAs.
* Constant and linear OPAs as solutions to a fixed point problem.
* Using duality, Section 4 establishes various bounds for the error \(\|qf-1\|_{p}\).
## 2. Preliminaries & Geometric Oddities
We begin here by providing some background material concerning the geometry of Banach spaces, followed by several examples in \(H^{p}\) which illustrate some oddities that arise when \(p\neq 2\).
Let \(\mathbf{x}\) and \(\mathbf{y}\) be vectors belonging to a normed linear space \(\mathscr{X}\). We say that \(\mathbf{x}\) is _orthogonal_ to \(\mathbf{y}\) in the Birkhoff-James sense [2, 16] if
\[\|\mathbf{x}+\beta\mathbf{y}\|_{\mathscr{X}}\geqslant\|\mathbf{x}\|_{\mathscr{ X}} \tag{2.0.1}\]
for all scalars \(\beta\). In this situation we write \(\mathbf{x}\perp_{\mathscr{X}}\mathbf{y}\). In the case \(\mathscr{X}=L^{p}\), let us write \(\perp_{p}\) instead of \(\perp_{L^{p}}\), and similarly for \(\mathscr{X}=H^{p}\).
For \(1<p<\infty\), there is also a function-theoretic test for \(p\)-orthogonality, which we note now.
**Theorem 2.0.2** (James [16]).: _Suppose \(1<p<\infty\). Then for \(f\) and \(g\) belonging to \(L^{p}\), we have_
\[f\ \perp_{p}\ g\iff\int_{\mathbb{T}}|f|^{p-2}\overline{f}g\,dm=0, \tag{2.0.3}\]
_where any occurrence of "\(|0|^{p-2}0\)" in the integrand is interpreted as zero._
In light of (2.0.3) we adopt, for a measurable function \(f\) and any \(s>0\), the notation
\[f^{\langle s\rangle}:=|f|^{s-1}\overline{f}. \tag{2.0.4}\]
If \(f\in L^{p}\), then \(f^{\langle p-1\rangle}\in L^{q}\), where \(q\) is the classical Hölder conjugate to \(p\), satisfying \(\frac{1}{p}+\frac{1}{q}=1\). For \(g\in L^{p}\) and \(f\in L^{q}\), we use the standard notation for the dual pairing
\[\langle f,g\rangle=\int_{\mathbb{T}}f\overline{g}\,dm,\]
and from (2.0.3), we have
\[f\ \perp_{p}\ g\iff\langle g,f^{\langle p-1\rangle}\rangle=0. \tag{2.0.5}\]
Consequently, the relation \(\perp_{p}\) is linear in its second argument when \(1<p<\infty\), and it then makes sense to speak of a vector being orthogonal to a subspace. We use this now to formally define OPAs.
**Definition 2.0.6** (Opa).: Let \(1<p<\infty\) and let \(f\in H^{p}\). Given a non-negative integer \(n\), the _\(n\)-th optimal polynomial approximant_ to \(1/f\) in \(H^{p}\) is the polynomial solving the minimization problem
\[\min_{q\in\mathscr{P}_{n}}\|qf-1\|_{p},\]
where \(\mathscr{P}_{n}\) is the set of complex polynomials of degree less than or equal to \(n\). This polynomial exists, is unique, and will be denoted by
\[q_{n,p}[f].\]
Given previous discussion on the metric projection, it is immediate that
\[1-q_{n,p}[f]f\ \perp_{p}\ \bigvee\{f,zf,z^{2}f,\ldots,z^{n}f\}.\]
We note that we will use the notations \(z^{k}f\) and \(S^{k}f\) interchangeably when there is no risk of confusion. We will also use the notation \([f]_{p}\) to denote the closure of \(\bigvee\{f,zf,z^{2}f,z^{3}f,\ldots\}\) in \(H^{p}\), i.e.,
\[[f]_{p}:=\overline{\bigvee\{f,zf,z^{2}f,z^{3}f,\ldots\}}\,^{H^{p}}.\]
In order to avoid trivialities, we will also often ask that \(f(0)\neq 0\); in the case that \(f(0)=0\), this is equivalent to \(1\ \perp_{p}f\), and so the metric projection of \(1\) onto \(f\mathscr{P}_{n}\) is identically zero.
In connection with Birkhoff-James orthogonality, there is a version of the Pythagorean Theorem for \(L^{p}\). This theorem takes the form of a family of inequalities relating the lengths of orthogonal vectors with that of their sum [13, Corollary 3.4].
**Theorem 2.0.7**.: _Suppose that \(x\ \perp_{p}\ y\) in \(L^{p}\). If \(p\in(1,2]\), then_
\[\|x+y\|_{p}^{p} \leqslant\|x\|_{p}^{p}+\frac{1}{2^{p-1}-1}\|y\|_{p}^{p}\] \[\|x+y\|_{p}^{2} \geqslant\|x\|_{p}^{2}+(p-1)\|y\|_{p}^{2}.\]
_If \(p\in[2,\infty)\), then_
\[\|x+y\|_{p}^{p} \geqslant\|x\|_{p}^{p}+\frac{1}{2^{p-1}-1}\|y\|_{p}^{p}\] \[\|x+y\|_{p}^{2} \leqslant\|x\|_{p}^{2}+(p-1)\|y\|_{p}^{2}.\]
These Pythagorean inequalities enable us to obtain bounds and estimates when \(p\neq 2\), in lieu of exact calculations possible in the Hilbert space case.
The following examples illustrate some of the ways the geometry of \(H^{p}\) (\(p\neq 2\)) can run counter to experience in Hilbert space. Although these examples may not be immediately surprising to the Banach space enthusiast, we relay them for the general functional analyst, especially those working in linear approximation problems, as interesting observations related to natural geometric questions.
**Example 2.0.8**.: In a Hilbert space, an orthogonal projection is always a contraction. However, when \(p\neq 2\), the norm of the metric projection of a vector can exceed the length of the vector itself.
Consider the linear OPA for \(f(z)=1+0.5z\) in \(H^{4}\). Numerically, we find that \(Q(z):=q_{1,4}[f]\approx 0.9771018-0.4339644z\), and thus
\[\|Qf\|_{4}^{4}=1.10294>1.\]
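Numbers like these can be reproduced by a direct search; here is a minimal sketch of ours (assuming numpy and scipy; since \(f\) has real coefficients, the optimal coefficients may be taken real by the usual conjugation symmetry).

```python
import numpy as np
from scipy.optimize import minimize

# Minimize ||q f - 1||_4^4 over linear q = a + b z for f(z) = 1 + 0.5 z,
# evaluating the norm on a uniform grid on the unit circle.
th = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = np.exp(1j*th)
f = 1 + 0.5*z

def err4(c):
    a, b = c
    return np.mean(np.abs((a + b*z)*f - 1)**4)

a, b = minimize(err4, x0=[1.0, 0.0]).x
print(a, b)                              # approx 0.97710, -0.43396
print(np.mean(np.abs((a + b*z)*f)**4))   # approx 1.10294 (> 1)
```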
**Example 2.0.9**.: In \(H^{2}\), it is simple to verify that if \(F(0)=0\), then, for \(c\in\mathbb{C}\), the quantity
\[\|c+F(z)\|_{2}\]
is minimized when \(c=0\). However, this is not the case when \(p\neq 2\).
For example, let \(p=4\) and \(F(z)=z+2z^{2}\). Then
\[\|c+F\|_{4}^{4}=33+8c+20c^{2}+c^{4}.\]
Numerically, this is minimized when \(c\approx-0.199209\). In particular, the value of the minimizing argument can be nonzero when \(p\neq 2\).
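Indeed, differentiating gives \(4c^{3}+40c+8\), so the minimizer is the real root of \(c^{3}+10c+2=0\); a one-line check of ours (assuming numpy):

```python
import numpy as np

# The minimizer of 33 + 8c + 20c^2 + c^4 over real c solves c^3 + 10c + 2 = 0.
roots = np.roots([1, 0, 10, 2])
print(roots[np.abs(roots.imag) < 1e-9].real)   # approx -0.199209
```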
**Example 2.0.10**.: Notice for \(f\in H^{2}\) and any \(n>0\), we have
\[1-q_{n,2}[f]f\ \perp_{2}\ zf\quad\text{and}\quad 1\ \perp_{2}\ zf.\]
Using linearity, we have
\[q_{n,2}[f]f=1-(1-q_{n,2}[f]f)\ \perp_{2}\ zf.\]
It is natural to ask if this is true when \(p\neq 2\), i.e., is it true in general that
\[q_{n,p}[f]f\ \perp_{p}\ zf?\]
Let us take \(p=4\), \(n=1\), and \(f(z)=1+2z+z^{8}\). Numerically, one can find that
\[\int_{\mathbb{T}}(q_{1,p}[f]f)^{\langle p-1\rangle}zf\,dm\approx 0.00355837,\]
which is nonzero, and so orthogonality fails. This illustrates that for \(p\neq 2\), the relation \(\perp_{p}\) fails to be linear in its first argument, and so \(q_{n,p}[f]f\) is not necessarily orthogonal to \(zf\).
**Example 2.0.11**.: For \(f,g\in H^{2}\), an exercise shows that if \(\|f\|_{2}\leqslant\|g\|_{2}\), then \(\|1+zf\|_{2}\leqslant\|1+zg\|_{2}\). Might a similar statement hold for \(p\neq 2\)? The following example shows that the answer is _no_.
Let \(p=4\) and choose
\[f(z) =(0.9)(1+z+z^{2})\] \[g(z) =-1-z-z^{2}.\]
It is immediate that \(\|f\|_{4}<\|g\|_{4}\). However, numerically, we find
\[\|1+zf(z)\|_{4}^{4} \approx 31.9339\] \[\|1+zg(z)\|_{4}^{4} \approx 20.0000.\]
With these examples in hand, it may now be reasonable to suspect that OPAs have a dependence on \(p\) which is highly non-linear. In general, this is true. Let us demonstrate this with what we describe as the OPA "error": the quantity \(\|q_{n,p}[f]f-1\|_{p}\). We use this as motivation in Section 3, where we study the \(p\)-dependence of OPAs.
**Example 2.0.12**.: For \(c>0\) and \(m\) a positive integer, consider \(f(z)=1+cz^{m}\). Let us show that when \(c=2\), we have
\[\|q_{0,2}[f]f-1\|_{2}\neq\|q_{0,4}[f]f-1\|_{4}.\]
For \(p=2\),
\[\|af-1\|_{2}^{2} =\int_{0}^{2\pi}\left([a-1]+ace^{im\theta}\right)\left([a-1]+ace^{-im\theta}\right)\,\frac{d\theta}{2\pi}\] \[=(a-1)^{2}+a^{2}c^{2}.\]
Notice that the integration simply extracts the constant term. Minimizing this expression (by differentiating with respect to \(a\)), we find
\[a=\frac{1}{1+c^{2}}.\]
This yields
\[\|af-1\|_{2}^{2} =\left(\frac{1}{1+c^{2}}-1\right)^{2}+\frac{c^{2}}{(1+c^{2})^{2}}\] \[=\frac{c^{2}}{1+c^{2}}.\]
This is equal to \(4/5\) when \(c=2\), so we obtain
\[\|af-1\|_{2}=\sqrt{4/5}\approx 0.894427.\]
For \(p=4\), we have
\[\|af-1\|_{4}^{4}=\int_{0}^{2\pi}\left(a+ace^{im\theta}-1\right)^{2}\left(a+ ace^{-im\theta}-1\right)^{2}\,\frac{d\theta}{2\pi},\]
and one may extract the constant term as
\[(a-1)^{4}+4a^{2}c^{2}(a-1)^{2}+a^{4}c^{4}.\]
Next, setting \(c=2\), one may numerically find that \(a\approx 0.121991\) minimizes the above expression.
Finally, this yields
\[\|q_{0,4}[f]f-1\|_{4}\approx\sqrt[4]{0.781388}\approx 0.940192.\]
In addition to the error, one may also notice that the OPAs themselves vary with \(p\). For example, letting \(f(z)=1+2z+z^{8}\), one may numerically find that
\[q_{0,4}[f] \approx 0.0970262\] \[q_{0,6}[f] \approx 0.0674066.\]
However, this is not always the case(!). The following example is a generalization of [10, Example 6.1], which showed that the constant OPAs for \(f(z)=1-z\) do not vary with \(p\).
**Example 2.0.13**.: Let \(1<p<\infty\) and let \(f\in H^{p}\). Let \(\lambda\in\mathbb{C}\) and let \(h=1+f\). Suppose that \(|f(e^{it})|=1\) a.e. on \(\mathbb{T}\) and \(\overline{f(e^{-it})}=f(e^{it})\) (i.e., the Fourier coefficients of \(f\) are real).
Putting \(a=q_{0,p}[h]\), we observe:
\[\|a(1+f(e^{it}))-1\|_{p} =\|af(e^{it})+a-1\|_{p}\] \[=\|a+\overline{f(e^{it})}(a-1)\|_{p}\quad\text{(multiply by $\overline{f}$; $|f|=1$ a.e.)}\] \[=\|a+f(e^{-it})(a-1)\|_{p}\quad\text{(real coefficients)}\] \[=\|a+f(e^{it})(a-1)\|_{p}\quad\text{($t\mapsto-t$)}\] \[=\|-a+1-1-f(e^{it})(a-1)\|_{p}\quad\text{(multiply by $-1$ and add $0$)}\] \[=\|(1-a)(1+f(e^{it}))-1\|_{p}.\]
This tells us that \(a=\frac{1}{2}\), which is independent of \(p\). Note that if \(f\) is any Blaschke product with real zeros, the hypotheses above are satisfied.
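For a concrete numerical check of ours (assuming numpy), take \(f(z)=z\), which satisfies the hypotheses; scanning real constants shows the minimizer sitting at \(1/2\) for several values of \(p\).

```python
import numpy as np

# Constant OPA of h = 1 + z in H^p: the minimizer over real a of
# ||a h - 1||_p^p should be a = 1/2 for every 1 < p < infinity.
th = np.linspace(0, 2*np.pi, 4096, endpoint=False)
h = 1 + np.exp(1j*th)
a_grid = np.linspace(0, 1, 2001)

for p in (1.5, 3.0, 4.0):
    errs = [np.mean(np.abs(a*h - 1)**p) for a in a_grid]
    print(p, a_grid[np.argmin(errs)])   # 0.5 in each case
```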
## 3. Limits and Continuity
In this section, we provide results which relate to varying the parameters in the OPA problem (i.e., the degree \(n\), the value of \(p\), and the function \(f\)). We first deal with this directly. Then, as a corollary, the first subsection below discusses the possible set of roots for OPAs. In the final subsection, we show that OPAs (in certain cases) are solutions to a fixed-point problem. All of these results enable us to make estimates concerning OPAs, knowing that exact computation is difficult when \(p\neq 2\).
We begin by recording, without proof, a known result about metric projections.
**Proposition 3.0.1**.: _Let \(1<p<\infty\). Let \(f\in H^{p}\) with \(f(0)\neq 0\) and let \(h\) be the metric projection of 1 onto \([f]_{p}\). Then, in norm,_
\[q_{n,p}[f]f\longrightarrow h\ \ as\ \ n\longrightarrow\infty.\]
This result can be seen as a consequence of the fact that as \(n\longrightarrow\infty\), the metric projections from \(H^{p}\) onto \(f\mathscr{P}_{n}\) converge, in the strong operator topology, to the metric projection from \(H^{p}\) onto \([f]_{p}\) (see, e.g., [12, Proposition 4.8.3]).
In the following proposition, for \(1<p<\infty\) and \(f\in H^{p}\), we write \(Q_{\infty}f\) for the metric projection of 1 onto \([f]_{p}\), understanding that \(Q_{\infty}\) need not be a bona fide \(H^{p}\) function. The next result tells us something about the error incurred by approximating \(q_{n,p}[f]\) using the Taylor polynomials of \(Q_{\infty}\), when the (rather strict) assumption of norm convergence holds.
**Proposition 3.0.2**.: _Let \(1<p<\infty\), and \(f\in H^{p}\). Suppose that the representation_
\[Q_{\infty}(z)f(z)=\sum_{k=0}^{\infty}\alpha_{k}z^{k}f(z)\]
_converges in norm. Then there exist a positive constant \(C\) and an index \(N\) such that_
\[\|q_{n,p}[f]f-Q_{(n)}f\|_{p}^{r}\leqslant C\|Q_{\infty}f-Q_{(n)}f\|_{p}\]
_for all \(n\geqslant N\), where \(Q_{(n)}(z)=\sum_{k=0}^{n}\alpha_{k}z^{k}\), and \(r\) and \(K\) are the applicable Pythagorean parameters._
Proof.: From the orthogonality relation
\[1-q_{n,p}[f]f\ \perp_{p}\ q_{n,p}[f]f-Q_{(n)}f,\]
the Pythagorean inequality gives
\[\|1-q_{n,p}[f]f\|_{p}^{r}+K\|q_{n,p}[f]f-Q_{(n)}f\|_{p}^{r}\leqslant\|1-Q_{(n) }f\|_{p}^{r}.\]
Rearrange and estimate to get
\[K\|q_{n,p}[f]f-Q_{(n)}f\|_{p}^{r} \leqslant\|1-Q_{(n)}f\|_{p}^{r}-\|1-q_{n,p}[f]f\|_{p}^{r}\] \[\leqslant\|1-Q_{(n)}f\|_{p}^{r}-\|1-Q_{\infty}f\|_{p}^{r}\] \[\leqslant r\|1-Q_{(n)}f\|_{p}^{r-1}\left(\|1-Q_{(n)}f\|_{p}-\|1-Q_ {\infty}f\|_{p}\right)\] \[\leqslant r\|1-Q_{(n)}f\|_{p}^{r-1}\|Q_{\infty}f-Q_{(n)}f\|_{p}\] \[\leqslant 2r\|1-Q_{\infty}f\|_{p}^{r-1}\|Q_{\infty}f-Q_{(n)}f\|_{p},\]
for \(n\) sufficiently large. In the third step we applied the elementary inequality
\[a^{r}-b^{r}\leqslant ra^{r-1}(a-b),\]
for \(0<b<a\) and \(r>1\).

This verifies the claim, with \(C=2r\|1-Q_{\infty}f\|_{p}^{r-1}/K\).
The previous proposition can be applied when \(f\) is any polynomial; we record that result now.
**Proposition 3.0.3**.: _Suppose that \(1<p<\infty\). Let \(z_{1}\), \(z_{2}\),..., \(z_{N}\) be a sequence of nonzero points of \(\mathbb{D}\), and define_
\[f(z):=\left(1-\frac{z}{z_{1}}\right)\left(1-\frac{z}{z_{2}}\right)\cdots\left(1 -\frac{z}{z_{N}}\right).\]
_Set \(r=2\) if \(1<p\leqslant 2\), and set \(r=p\) if \(2<p<\infty\). Then the metric projection \(h\) of the unit constant function 1 onto the subspace \([f]_{p}\) of \(H^{p}\) has a norm convergent representation_
\[h(z)=\sum_{k=0}^{\infty}b_{k}z^{k}f(z),\]
_and there exists a positive constant \(C\) such that_
\[\left\|q_{n,p}[f]f-\sum_{k=0}^{n}b_{k}z^{k}f\right\|_{p}^{r}\leqslant C\left\| \sum_{k=n+1}^{\infty}b_{k}z^{k}f\right\|_{p} \tag{3.0.4}\]
_for all positive integers \(n\)._
We omit the proof here, but note that the metric projection \(h\) must vanish at the zeros of \(f\); in turn, boundedness of the difference-quotient operator, given by
\[\left(B_{w}f\right)(z)=\frac{f(z)-f(w)}{z-w},\quad z,w\in\mathbb{D},\]
(applied where \(f(w)=0\)) then ensures the norm convergent representation.
### Continuity
As discussed earlier, OPAs generally vary with \(p\). We discuss this variance here, first showing that when \(f\) is a bounded function, \(q_{n,p}[f]\) varies continuously with respect to \(p\).
**Lemma 3.1.1**.: _Let \(f\in H^{\infty}\) with \(f(0)\neq 0\) and let \(d\in\mathbb{N}\). If \((p_{k})_{k}\subseteq(1,\infty)\) with \(p_{k}\longrightarrow p\in(1,\infty)\), then \(q_{d,p_{k}}[f]\) converges to \(q_{d,p}[f]\) uniformly as \(k\longrightarrow\infty\)._
Proof.: Let us write
\[f(z) =\sum_{j=0}^{\infty}f_{j}z^{j},\] \[q_{d,p}[f](z) =a_{0}+a_{1}z+a_{2}z^{2}+\cdots+a_{d}z^{d},\] \[q_{d,p_{k}}[f](z) =a_{0}^{(k)}+a_{1}^{(k)}z+a_{2}^{(k)}z^{2}+\cdots+a_{d}^{(k)}z^{ d}.\]
Since \(f\in H^{\infty}\), Cauchy-Schwarz yields
\[\left|\int_{\mathbb{T}}fz^{-j}\,dm\right|\leqslant\|f\|_{\infty}\]
for all \(j\geq 0\). Hence, all of the coefficients \(f_{j}\) are bounded by \(\|f\|_{\infty}\).
Letting \(p_{k}^{\prime}\) be the dual exponent of \(p_{k}\), we observe
\[|a_{0}^{(k)}f_{0}-1|=\left|\int_{\mathbb{T}}(q_{d,p_{k}}[f]f-1)\,dm\right|\leqslant \|q_{d,p_{k}}[f]f-1\|_{p_{k}}\|1\|_{p_{k}^{\prime}}\leqslant 1,\]
and so the sequence \(\{a_{0}^{(k)}\}\) is bounded.
Further, since
\[\left|a_{0}^{(k)}f_{j}+a_{1}^{(k)}f_{j-1}+\cdots+a_{j}^{(k)}f_{0}\right| \leqslant\left|\int_{\mathbb{T}}(q_{d,p_{k}}[f]f-1)z^{-j}\,dm\right|\leqslant 1,\]
it follows
\[|a_{j}^{(k)}|\leqslant\frac{\left|a_{0}^{(k)}f_{j}+a_{1}^{(k)}f_{j-1}+\cdots+ a_{j-1}^{(k)}f_{1}\right|+1}{|f_{0}|},\]
for all \(k\in\mathbb{N}\) and \(1\leqslant j\leqslant d\). That is, \(\{a_{j}^{(k)}\}_{k=1}^{\infty}\) is also a bounded sequence for \(1\leqslant j\leqslant d\). By passing to a subsequence and relabeling, we can assume that \(\{q_{d,p_{k}}[f]\}_{k=1}^{\infty}\) is a uniformly convergent sequence of polynomials, which converges to some polynomial, say, \(A(z)=a_{0}+a_{1}z+\cdots+a_{d}z^{d}\in\mathscr{P}_{d}\).
Now, for \(0\leqslant j\leqslant d\), recall the orthogonality equations
\[\int_{\mathbb{T}}[q_{d,p_{k}}[f]f-1]^{\langle p_{k}-1\rangle}z^{j}f\,dm=0.\]
Taking \(k\longrightarrow\infty\) and invoking uniform convergence, we find that
\[\int_{\mathbb{T}}[Af-1]^{\langle p-1\rangle}z^{j}f\,dm=0,\]
for \(0\leqslant j\leqslant d\) (the taking of \(\langle p_{k}-1\rangle\) powers also being well behaved). By uniqueness of the optimal polynomial, it must be that
\[A(z)=q_{d,p}[f](z).\]
Since every subsequence of the originally given sequence \(\{q_{d,p_{k}}[f]\}_{k=1}^{\infty}\) has a further subsequence that converges to the same limit \(q_{d,p}[f]\), it must be that
\[q_{d,p_{k}}[f]\longrightarrow q_{d,p}[f]\]
uniformly.
We now present another continuity result: continuity in \(f\). In particular, if \(f_{k}\longrightarrow f\) in \(H^{p}\), then \(q_{n,p}[f_{k}]\longrightarrow q_{n,p}[f]\). Before establishing this result, we need a couple of lemmas.
**Lemma 3.1.2**.: _Let \(1<p<\infty\) and \(1/p+1/q=1\). If \(\varphi_{k}\longrightarrow\varphi\) in \(L^{p}\), then \(\varphi_{k}^{\langle p-1\rangle}\longrightarrow\varphi^{\langle p-1\rangle}\) in \(L^{q}\)._
Proof.: First, we check that
\[\int_{\mathbb{T}}\left|\varphi^{\langle p-1\rangle}\right|^{q}\,dm=\int_{\mathbb{T}}\left|\varphi\right|^{(p-1)q}\,dm=\int_{\mathbb{T}}\left|\varphi\right|^{p}\,dm,\]
and so \(\varphi^{\langle p-1\rangle}\in L^{q}\); similarly \(\varphi_{k}^{\langle p-1\rangle}\in L^{q}\).
Next, we apply the generalized dominated convergence theorem, using the sequential bound
\[\left|\varphi_{k}^{\langle p-1\rangle}-\varphi^{\langle p-1\rangle}\right|^{q}\leqslant 2^{q-1}\left(|\varphi_{k}|^{p}+|\varphi|^{p}\right)\text{ a.e.-}dm,\]
with the Carleson-Hunt theorem supplying pointwise convergence almost everywhere. The conclusion is
\[\int_{\mathbb{T}}\left|\varphi_{k}^{\langle p-1\rangle}-\varphi^{\langle p-1\rangle}\right|^{q}\,dm\longrightarrow 0,\]
as claimed.
Below, we use the standard notation \(\hat{f}(n)\) to denote the \(n\)-th Fourier coefficient of a function \(f\in L^{p}\).
**Lemma 3.1.3**.: _Let \(1<p<\infty\). If \(\varphi\in H^{p}\), then_
\[\|\varphi\|_{p}^{r}\geqslant|\hat{\varphi}(0)|^{r}+K|\hat{\varphi}(1)|^{r}+K^ {2}|\hat{\varphi}(2)|^{r}+\cdots,\]
_where \(r\) and \(K\) are the lower Pythagorean parameters._
Proof.: This follows immediately from the orthogonality relations
\[z^{k}\ \ \perp_{p}\ z^{m}H^{p}\ \ \ \ \forall m>k\geqslant 0,\]
and repeated application of the lower Pythagorean inequality.
We are now prepared to prove the aforementioned result.
**Theorem 3.1.4**.: _Suppose that \(1<p<\infty\) and \(n\in\mathbb{N}\). Let \(f_{k}\in H^{p}\) and let \(Q_{k}:=q_{n,p}[f_{k}]\) for each \(k\in\mathbb{N}\). If \(f_{k}\longrightarrow f\) in \(H^{p}\), and \(f(0)\neq 0\), then \(Q_{k}\longrightarrow Q:=q_{n,p}[f]\)._
Proof.: Let us first handle the case \(n=1\), and write \(Q_{k}(z)=a_{k}+b_{k}z\) for \(q_{1,p}[f_{k}]\).
Since \(f_{k}(0)\longrightarrow f(0)\), and \(f(0)\neq 0\), there is no harm in assuming that there exists \(c>0\) such that \(|f_{k}(0)|\geqslant c\) for all \(k\).
From the relation
\[1\ \perp_{p}\ zH^{p}\]
we see that
\[1\geqslant\|1-Q_{k}f_{k}\|_{p}^{r}\geqslant|1-a_{k}f_{k}(0)|^{r}+K\|Q_{k}f_{k }-a_{k}f_{k}(0)\|_{p}^{r},\]
where \(r\) and \(K\) are the lower Pythagorean parameters. It follows that
\[1\geqslant|1-a_{k}f_{k}(0)|,\]
implying that
\[|a_{k}|\leqslant\frac{2}{c}\]
for all \(k\). Thus \(\{a_{k}\}\) is a bounded complex sequence, from which we can extract a convergent subsequence, which for now we relabel as the original sequence.
Next, subharmonicity and the triangle inequality yield
\[c|b_{k}|\leqslant|b_{k}f_{k}(0)|\leqslant\|b_{k}zf_{k}\|_{p}\leqslant\|1-(a_{k}+b _{k}z)f_{k}\|_{p}+\|1-a_{k}f_{k}\|_{p}.\]
The last expression on the right side is uniformly bounded as \(k\) varies through \(\mathbb{N}\), and hence \(\{b_{k}\}\) is a bounded sequence. Once again we may draw a convergent subsequence, and relabel it so that
\[Q_{k}=a_{k}+b_{k}z\]
converges uniformly to some \(R(z)=a+bz\).
It needs to be shown that \(R=Q:=q_{1,p}[f]\). For this we rely on the elementary result that if \(v_{k}\longrightarrow v\) in a Banach space and \(\lambda_{k}\longrightarrow\lambda\) in its dual space, then \(\lambda_{k}(v_{k})\longrightarrow\lambda(v)\).
We apply this, identifying
\[v_{k} =f_{k}\] \[v =f\] \[\lambda_{k}(\cdot) =\int_{\mathbb{T}}\left(1-[a_{k}+b_{k}z]f_{k}\right)^{\langle p- 1\rangle}(\cdot)\,dm\] \[\lambda(\cdot) =\int_{\mathbb{T}}\left(1-[a+bz]f\right)^{\langle p-1\rangle}( \cdot)\,dm.\]
Then Lemma 3.1.2 ensures that \(\lambda_{k}\longrightarrow\lambda\), as needed.
The conclusion is that \(\lambda(v)=\lim_{k\to\infty}\lambda_{k}(v_{k})=\lim_{k\to\infty}0=0\), or
\[1-(a+bz)f\ \perp_{p}\ f.\]
Repeat this argument with the choices
\[v_{k}=zf_{k}\quad\text{and}\quad v=zf\]
to see that
\[1-(a+bz)f\ \perp_{p}\ zf\]
as well. This forces \(R(z)=Q(z)=a+bz=q_{1,p}[f](z)\), as claimed.
So far, we only know that there is a subsequence that satisfies the claim. However, we see that every subsequence of the original sequence \(\{f_{k}\}\) has a further subsequence for which the linear OPAs tend _to the same limit_\(a+bz\), the linear OPA from \(f\) being unique. This proves that in fact the full sequence \(\{a_{k}+b_{k}z\}\) converges to \(a+bz\).
This verifies the claim when \(n=1\).
More generally, for arbitrary \(n\in\mathbb{N}\), let
\[Q_{k}(z)=q_{n,p}[f_{k}](z)=a_{0}^{(k)}+a_{1}^{(k)}z+\cdots+a_{n}^{(k)}z^{n}.\]
From Lemma 3.1.3, we get
\[1\geqslant\left\|1-Q_{k}f_{k}\right\|_{p}^{r}\geqslant\left|1-a_{0}^{(k)}f_{0}\right|^{r}+\sum_{m=1}^{\infty}K^{m}\left|a_{0}^{(k)}f_{m}+a_{1}^{(k)}f_{m-1}+\cdots+a_{m}^{(k)}f_{0}\right|^{r},\]
and hence
\[\left|a_{0}^{(k)}f_{m}+a_{1}^{(k)}f_{m-1}+\cdots+a_{m}^{(k)}f_{0}\right|\leqslant\frac{1}{K^{m/r}}\]
for all \(m\).
We know that \(a_{0}^{(k)}\) is bounded in \(k\). It is also easy to see that \(|f_{j}|\leqslant\|f\|_{p}\) for all \(j\). If \(a_{0}^{(k)},a_{1}^{(k)},\ldots,a_{j}^{(k)}\) are also bounded in \(k\), then the relation
\[\left|a_{j+1}^{(k)}\right|\leqslant\frac{1}{K^{(j+1)/r}|f_{0}|}+\left|\frac{a_{0}^{(k)}f_{j+1}}{f_{0}}+\frac{a_{1}^{(k)}f_{j}}{f_{0}}+\cdots+\frac{a_{j}^{(k)}f_{1}}{f_{0}}\right|\]
ensures that \(a_{j+1}^{(k)}\) is bounded as well. This proves that all of the coefficients of \(Q_{k}\) are uniformly bounded in \(k\).
Arguing as before, we may find a subsequence from \(\{Q_{k}\}\) that converges uniformly, and the limit must be \(q_{n,p}[f]\). In fact, this must be the limit of the original sequence.
### Roots of OPAs
As a corollary to the last continuity theorem, we begin this subsection with two results concerning the set of possible OPA roots. Let us first establish some notation.
**Definition 3.2.1**.: For \(1<p<\infty\) and \(n\geq 0\), we denote the set of possible roots of OPAs of degree \(n\) in \(H^{p}\) as
\[\Omega_{n,p}:=\left\{w\in\mathbb{C}:\exists f\in H^{p},f(0)\neq 0\text{ with }q_{n,p}[f](w)=0\right\},\]
and let
\[\Omega_{p}:=\bigcup_{n\geq 0}\Omega_{n,p}.\]
Note that \(\Omega_{0,p}=\emptyset\) for all \(p\in(1,\infty)\). We have an immediate proposition concerning these sets.
**Proposition 3.2.2**.: _For \(1<p<\infty\) and each \(n\geq 1\), we have \(\Omega_{n,p}\subseteq\Omega_{1,p}\), and therefore \(\Omega_{p}=\Omega_{1,p}.\)_
Proof.: Suppose \(w\in\Omega_{n,p}\) with \(q_{n,p}[f](w)=0\). Put \(q_{n,p}[f]=(z-w)\tilde{q}\). Then, by optimality, we have
\[\|q_{1,p}[\tilde{q}f]\tilde{q}f-1\|_{p} \leq\|(z-w)\tilde{q}f-1\|_{p}\] \[=\|q_{n,p}[f]f-1\|_{p}\] \[\leq\|q_{1,p}[\tilde{q}f]\tilde{q}f-1\|_{p},\]
and we deduce that \(q_{1,p}[\tilde{q}f]=q_{n,p}[f]/\tilde{q}=z-w\), which implies that \(w\in\Omega_{1,p}\).
Presently, we see that the set of OPA roots must contain the set \(\mathbb{C}\setminus\overline{\mathbb{D}}\).
**Proposition 3.2.3**.: _Let \(1<p<\infty\). If \(w\in\mathbb{C}\setminus\overline{\mathbb{D}}\), then there exists \(f\in H^{p}\) such that \(q_{1,p}[f]\) has the root \(w\), and so_
\[\mathbb{C}\setminus\overline{\mathbb{D}}\subseteq\Omega_{p}.\]
Proof.: Let \(w\in\mathbb{C}\setminus\overline{\mathbb{D}}\) and let
\[f(z):=\frac{1}{z-w},\]
which belongs to \(H^{p}\) for all \(p\in(1,\infty)\). Further,
\[\|1-Q(z)f(z)\|_{p}=0\]
when
\[Q(z)=z-w.\]
Therefore, it must be that \(q_{n,p}[f](z)=z-w\) for all \(n\geqslant 1\). Hence, \(w\in\Omega_{p}\).
We now show that this set is connected and symmetric under rotation.
**Proposition 3.2.4**.: _For \(1<p<\infty\), the set \(\Omega_{p}\) is rotationally symmetric and connected._
Proof.: We begin by establishing rotational symmetry. Let \(f\in H^{p}\), and suppose \(q_{1,p}[f]=a(z-w)\) (by Proposition 3.2.2, it suffices to take the linear OPA). Then for any \(\gamma\) with \(|\gamma|=1\),
\[\|1-a(z-w)f\|_{p}^{p} =\int_{\mathbb{T}}|1-a(z-w)f(z)|^{p}\,dm(z)\] \[=\int_{\mathbb{T}}|1-a(\gamma\zeta-w)f(\gamma\zeta)|^{p}\,dm(\zeta)\] \[=\int_{\mathbb{T}}|1-(a\gamma)(\zeta-\overline{\gamma}w)f(\gamma\zeta)|^{p}\,dm(\zeta),\]
where the substitution \(z=\gamma\zeta\) uses the rotation invariance of \(m\).
It must be that \((a\gamma)(z-\overline{\gamma}w)\) is the linear OPA for \(f(\gamma z)\), for otherwise, by reversing these steps from
\[\|1-q_{1,p}[f(\gamma z)]f(\gamma z)\|_{p}^{p}\]
we obtain a contradiction.
This shows that if \(w\) is an OPA root, then so is \(\overline{\gamma}w\) for all \(\gamma\), \(|\gamma|=1\). That is, the set \(\Omega_{p}\) is rotationally symmetric.
Next, suppose that \(f\) and \(g\) belong to \(H^{p}\), with real coefficients, and with \(f(0)>0\) and \(g(0)>0\). Let their linear OPA roots be \(r\) and \(R\) respectively, where \(0<r<R\). By the
continuity of the map \(F\longmapsto q_{1,p}[F]\), we see that the set of linear OPA roots of the collection of functions \(tf+(1-t)g\), \(0\leqslant t\leqslant 1\), must be an interval containing \([r,R]\); this is because the collection of functions is connected, and continuous maps preserve connectivity. Note that \(tf(0)+(1-t)g(0)>0\) for all \(t\), as required for the linear OPA to be nontrivial. Consequently, \(\Omega_{p}\) is path connected, and hence connected.
### Fixed point approach
Again, we mention that computing OPAs when \(p\neq 2\) is a challenging task. Here, we explore the idea of OPAs being fixed points of an iterative process. We begin with the degree zero case and then move to the degree one case.
**Theorem 3.3.1**.: _Let \(2<p<\infty\), and let \(f\in H^{p}\) be a nonconstant function. Then the degree zero OPA \(q_{0,p}[f]\) is the unique solution to the fixed point equation_
\[\zeta=\Phi(\zeta),\]
_where \(\Phi:\mathbb{C}\longrightarrow\mathbb{C}\) is given by_
\[\Phi(\zeta):=\left(\int_{\mathbb{T}}|1-\zeta f|^{p-2}\overline{f}\,dm\right) \left(\int_{\mathbb{T}}|1-\zeta f|^{p-2}|f|^{2}\,dm\right)^{-1}.\]
_Moreover, for any \(\lambda_{1}\in\mathbb{C}\), the sequence \(\{\lambda_{k}\}\) given by \(\lambda_{k+1}=\Phi(\lambda_{k})\) converges to \(q_{0,p}[f]\)._
Proof.: Since \(p>2\), and
\[1=\frac{2}{p}+\frac{p-2}{p},\]
the parameters \(p/2\) and \(p/(p-2)\) are Holder conjugates of each other. Hence Holder's inequality gives
\[\int_{\mathbb{T}}|1-\zeta f|^{p-2}|f|^{2}\,dm\leqslant\left(\int_{\mathbb{T}} |1-\zeta f|^{(p-2)p/(p-2)}\,dm\right)^{(p-2)/p}\left(\int_{\mathbb{T}}|f|^{2(p /2)}\,dm\right)^{2/p}<\infty.\]
Furthermore, since \(f\) is nonconstant, the integral in the denominator of \(\Phi\) is nonzero for any value of \(\zeta\).
When \(\zeta\neq 0\), we have
\[|\zeta|\int_{\mathbb{T}}|1-\zeta f|^{p-2}|f|\,dm \leqslant\int_{\mathbb{T}}|1-\zeta f|^{p-2}\left(|1-\zeta f|+1 \right)\,dm\] \[=\int_{\mathbb{T}}|1-\zeta f|^{p-1}\,dm+\int_{\mathbb{T}}|1- \zeta f|^{p-2}\,dm\] \[<\infty;\]
and when \(\zeta=0\)
\[\int_{\mathbb{T}}|1-\zeta f|^{p-2}|f|\,dm=\int_{\mathbb{T}}|1-0\cdot f|^{p-2} |f|\,dm=\int_{\mathbb{T}}|f|\,dm<\infty.\]
This verifies that \(\Phi\) is well defined for all \(\zeta\in\mathbb{C}\).
In fact, \(\Phi\) is continuous and bounded. Continuity of the numerator and denominator of \(\Phi\) at any point \(\zeta_{0}\) can be established by a Dominated Convergence argument, with respective dominating functions
\[2^{p-2}(1+C|f|^{p-2})|f|\quad\text{and}\quad 2^{p-2}(1+C|f|^{p-2})|f|^{2},\]
where \(C>|\zeta_{0}|^{p-2}\). Continuity at infinity is established by
\[\lim_{\zeta\to\infty}\frac{\int_{\mathbb{T}}|1-\zeta f|^{p-2} \overline{f}\,dm}{\int_{\mathbb{T}}|1-\zeta f|^{p-2}|f|^{2}\,dm} =\lim_{\zeta\to\infty}\frac{\int_{\mathbb{T}}|1/\zeta-f|^{p-2} \overline{f}\,dm}{\int_{\mathbb{T}}|1/\zeta-f|^{p-2}|f|^{2}\,dm}\] \[=\lim_{\zeta\to\infty}\frac{\int_{\mathbb{T}}|f|^{p-2}\overline{ f}\,dm}{\int_{\mathbb{T}}|f|^{p}\,dm}.\]
Consequently, \(\Phi\) is a bounded function. For any choice of \(\lambda_{1}\in\mathbb{C}\), define \(\lambda_{k+1}=\Phi(\lambda_{k})\) for all \(k=1,2,3,\ldots\). The resulting sequence \(\{\lambda_{k}\}\) is bounded, and must contain a convergent subsequence \(\{\lambda_{n_{k}}\}\), with \(\lambda_{n_{k}}\longrightarrow\lambda\in\mathbb{C}\). Continuity ensures that
\[\lambda=\Phi(\lambda),\]
which is to say that
\[\lambda\ \int_{\mathbb{T}}|1-\lambda f|^{p-2}f\overline{f}\,dm =\int_{\mathbb{T}}|1-\lambda f|^{p-2}\overline{f}\,dm\] \[0 =\int_{\mathbb{T}}|1-\lambda f|^{p-2}(1-\lambda f)\overline{f}\,dm,\]
or \(1-\lambda f\ \perp_{p}\ f\). This shows that \(\lambda=q_{0,p}[f]\).
But any subsequence of \(\{\lambda_{k}\}\) must have a further subsequence that converges to the same limit. Thus the sequence \(\{\lambda_{k}\}\) itself must converge to \(\lambda=q_{0,p}[f]\).
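For illustration, the iteration lends itself to direct numerical experiment. The following is a minimal Python sketch, assuming quadrature by uniform sampling of the unit circle; the test function \(f\), the grid size, and the iteration count are illustrative choices and not part of the theorem.

```python
import numpy as np

p = 4.0                                     # any exponent with 2 < p < infinity
M = 4096                                    # quadrature nodes on the circle
z = np.exp(2j * np.pi * np.arange(M) / M)   # uniform samples of the unit circle

f = 1.0 + 0.5 * z + 0.25 * z**2             # an illustrative nonconstant function

def Phi(zeta):
    """One application of the map Phi from Theorem 3.3.1."""
    w = np.abs(1.0 - zeta * f) ** (p - 2)   # the weight |1 - zeta f|^{p-2}
    num = np.mean(w * np.conj(f))           # approximates the integral against conj(f)
    den = np.mean(w * np.abs(f) ** 2)       # approximates the integral against |f|^2
    return num / den

lam = 0.0                                   # arbitrary starting value lambda_1
for _ in range(200):                        # lambda_{k+1} = Phi(lambda_k)
    lam = Phi(lam)

print("q_{0,p}[f] is approximately", lam)
```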
We now discuss the linear case, first recording some notation.
Let a linear polynomial \(Q_{1}(z)=a_{1}+b_{1}z\) be given and, for \(k\geqslant 1\), let
\[\left[\begin{array}{c}a_{k+1}\\ b_{k+1}\end{array}\right] =\left[\begin{array}{cc}C_{k}&\overline{D_{k}}\\ D_{k}&C_{k}\end{array}\right]^{-1}\left[\begin{array}{c}A_{k}\\ B_{k}\end{array}\right] \tag{3.3.2}\] \[=\frac{1}{C_{k}^{2}-|D_{k}|^{2}}\left[\begin{array}{cc}C_{k}&-\overline{D_{k}}\\ -D_{k}&C_{k}\end{array}\right]\left[\begin{array}{c}A_{k}\\ B_{k}\end{array}\right]\] \[=\frac{1}{C_{k}^{2}-|D_{k}|^{2}}\left[\begin{array}{c}A_{k}C_{k}-B_{k}\overline{D_{k}}\\ B_{k}C_{k}-A_{k}D_{k}\end{array}\right],\]
where
\[A_{k} =\int_{\mathbb{T}}|1-Q_{k}f|^{p-2}\overline{f}\,dm \tag{3.3.3}\] \[B_{k} =\int_{\mathbb{T}}|1-Q_{k}f|^{p-2}\overline{z}\overline{f}\,dm\] \[C_{k} =\int_{\mathbb{T}}|1-Q_{k}f|^{p-2}|f|^{2}\,dm\] \[D_{k} =\int_{\mathbb{T}}|1-Q_{k}f|^{p-2}\overline{z}|f|^{2}\,dm,\]
and \(Q_{k}(z)=a_{k}+b_{k}z\). This determines a sequence of linear polynomials.
**Theorem 3.3.4**.: _Let \(2<p<\infty\), and suppose that \(f\in H^{p}\) is a nonconstant polynomial with \(f(0)\neq 0\). If \(Q_{k}(z)=a_{k}+b_{k}z\), \(k\geqslant 0\), is the sequence of linear polynomials arising from (3.3.2), then \(Q_{k}\) converges to \(q_{1,p}[f]\)._
Proof.: If \(Q_{1}\) is identically zero, then by inspection we see that \(Q_{2}\) is not the zero polynomial. Thus, by relabeling if necessary, let us assume \(Q_{1}\) is not identically zero.
By the hypotheses on \(f\), the expression \(w-Qf\) is a nonconstant polynomial for any complex number \(w\) and nonzero linear polynomial \(Q\); hence \(|w-Qf|^{p-2}\) will be integrable on the unit circle.
Consider the expression, with integrals being taken over the circle,
\[\Phi(Q):=\left|\frac{\int|Qf|^{p-2}\overline{f}\,dm\int|Qf|^{p-2}|f|^{2}\,dm- \int|Qf|^{p-2}\overline{z}\overline{f}\,dm\int|Qf|^{p-2}\overline{z}|f|^{2}\, dm}{\left|\int|Qf|^{p-2}|f|^{2}\,dm\right|^{2}+\left|\int|Qf|^{p-2}\overline{z}|f|^{2} \,dm\right|^{2}}\right|,\]
as \(Q\) varies over the set
\[\mathscr{Q}:=\{a+bz\in\mathscr{P}_{1}:\max\{|a|,|b|\}=1\}.\]
Under the assumptions on \(f\), the denominator is bounded away from zero. Thus \(\Phi(Q)\) is a continuous function on a compact set, and achieves its maximum. In fact, the value of \(\Phi(Q)\) is invariant under rescaling of \(Q\) by any nonzero scalar.
From this we can further deduce that the values of
\[\Psi(Q,w):=\] \[\left|\frac{\int|w-Qf|^{p-2}\overline{f}\,dm\int|w-Qf|^{p-2}|f|^{2 }\,dm-\int|w-Qf|^{p-2}\overline{z}\overline{f}\,dm\int|w-Qf|^{p-2}\overline{z }|f|^{2}\,dm}{\left|\int|w-Qf|^{p-2}|f|^{2}\,dm\right|^{2}+\left|\int|w-Qf|^{p -2}\overline{z}|f|^{2}\,dm\right|^{2}}\right|,\]
are uniformly bounded for \(Q\in\mathscr{Q}\) and \(|w|\leqslant 1\).
Next, notice that for any nonzero linear polynomial \(Q(z)=a+bz\) we have
\[\int_{\mathbb{T}}|1-Qf|^{p-2}\overline{f}\,dm=\int_{\mathbb{T}}|1-(a+bz)f|^{p- 2}\overline{f}\,dm=|c|^{p-2}\int_{\mathbb{T}}|1/c-(a/c+[b/c]z)f|^{p-2} \overline{f}\,dm,\]
where \(c:=\max\{|a|,|b|\}\). This is to say that the value of \(A_{k}\) in (3.3.3) scales in a simple way with \(c\), with the result that \(f\) is multiplied by a member of \(\mathscr{Q}\), and the \(1\) inside the integrand is replaced by \(1/c\). Similar remarks apply to the formulas for \(B_{k}\), \(C_{k}\), and \(D_{k}\).
Consequently, when \(A_{k}\), \(B_{k}\), \(C_{k}\) and \(D_{k}\) are assembled together to yield \(a_{k+1}\) and \(b_{k+1}\), the scaling factors \(|c|^{p-2}\) attached to each integral cancel.
Let us write \(c_{k}:=\max\{|a_{k}|,|b_{k}|\}\). The above observations establish that \(c_{k+1}\) is uniformly bounded as \(k\) varies over those indices for which \(c_{k}\geqslant 1\).
For the other values of \(k\), for which \(c_{k}<1\), the corresponding expressions for \(|1-Q_{k}f|^{p-2}\) are again uniformly bounded in the obvious way, implying that the resulting \(c_{k+1}\) are also uniformly bounded.
This shows that \(\{Q_{k}\}\) is a bounded sequence of linear polynomials, which must therefore have a convergent subsequence. The limit is a linear polynomial \(Q_{\infty}\), which satisfies the orthogonality conditions for \(q_{1,p}[f]\), and hence must be the OPA. Uniqueness of the OPA ensures that, in fact, every subsequence of \(\{Q_{k}\}\) has a further subsequence that converges to \(q_{1,p}[f]\). In conclusion, we have
\[\lim_{k\to\infty}Q_{k}=q_{1,p}[f].\]
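A corresponding sketch of the degree one scheme (3.3.2)-(3.3.3), under the same quadrature assumptions, solves the \(2\times 2\) system directly at each step; the polynomial \(f\) is an illustrative choice satisfying the hypotheses of Theorem 3.3.4.

```python
import numpy as np

p = 4.0
M = 4096
z = np.exp(2j * np.pi * np.arange(M) / M)
f = 1.0 + 0.7 * z                        # nonconstant polynomial with f(0) = 1

a, b = 1.0 + 0j, 0.0 + 0j                # Q_1(z) = a + b z, not identically zero
for _ in range(200):
    Q = a + b * z
    w = np.abs(1.0 - Q * f) ** (p - 2)   # the weight |1 - Q_k f|^{p-2}
    A = np.mean(w * np.conj(f))          # A_k, B_k, C_k, D_k of (3.3.3)
    B = np.mean(w * np.conj(z) * np.conj(f))
    C = np.mean(w * np.abs(f) ** 2)
    D = np.mean(w * np.conj(z) * np.abs(f) ** 2)
    # (3.3.2): solve [[C, conj(D)], [D, C]] [a, b]^T = [A, B]^T
    a, b = np.linalg.solve(np.array([[C, np.conj(D)], [D, C]]),
                           np.array([A, B]))

print("q_{1,p}[f](z) is approximately", a, "+ (", b, ") z")
```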
We end this section by noting that Theorems 3.3.1 and 3.3.4 are only established for \(2<p<\infty\), and, in the degree one case, only for polynomials. It is currently unclear if these results extend to \(1<p<2\), or if analogous results hold for higher degree OPAs.
## 4. Error Bounds and Duality Arguments
The present section is concerned with estimating (both above and below) the quantity \(\|q_{n,p}[f]f-1\|_{p}\), i.e., the "error" in the optimal polynomial approximation algorithm. We begin by employing some duality methods, first recalling a fundamental result from classical functional analysis, tailored to our setting.
**Lemma 4.0.1**.: _Let \(1<p<\infty\) and \(f\in H^{p}\). For any \(n\in\mathbb{N}\), we have_
\[\|q_{n,p}[f]f-1\|_{p}=\left[\inf_{\psi\in L^{q}}\left\{\|\psi\|_{L^{q}}:\psi_{ 0}=1,\langle z^{k}f,\psi\rangle=0\ \forall\ 0\leqslant k\leqslant n\right\} \right]^{-1}.\]
Proof.: By an elementary duality theorem of functional analysis, with respect to the pairing
\[\langle f,g\rangle=\int_{0}^{2\pi}f(e^{i\theta})\overline{g(e^{i\theta})}\, \frac{d\theta}{2\pi},\]
we have
\[\|q_{n,p}[f]f-1\|_{p} =\inf\left\{\|Qf-1\|_{p}:\ Q\in\mathscr{P}_{n}\right\}\] \[=\operatorname{dist}_{H^{p}}(1,\mathscr{P}_{n}f)\] \[=\operatorname{dist}_{L^{p}}(1,\mathscr{P}_{n}f) \tag{4.0.2}\] \[=\|1\|_{[(\mathscr{P}_{n}f)^{\perp}]^{*}}\] (4.0.3) \[=\sup\left\{\frac{|\langle\psi,1\rangle|}{\|\psi\|_{L^{q}}}:\ \psi\in( \mathscr{P}_{n}f)^{\perp}\setminus\{0\}\right\}\] \[=\sup\left\{\frac{|\psi_{0}|}{\|\psi\|_{L^{q}}}:\ \psi\in( \mathscr{P}_{n}f)^{\perp}\setminus\{0\}\right\}\] \[=\Big{[}\inf\left\{\|\psi/\psi_{0}\|_{L^{q}}:\ \psi\in( \mathscr{P}_{n}f)^{\perp}\setminus\{0\}\right\}\Big{]}^{-1}\] \[=\Big{[}\inf\left\{\|\psi\|_{L^{q}}:\ \psi\in L^{q},\ \psi_{0}=1, \langle z^{k}f,\psi\rangle=0\ \forall 0\leqslant k\leqslant n\right\}\Big{]}^{-1}.\]
**Remark 4.0.4**.: The reason to move to \(L^{p}\) in line (4.0.2) is that the dual of \(L^{p}\) is \(L^{q}\). If we stick with the norm in \(H^{p}\), then (caution!) the relevant dual space is the quotient space \(L^{q}/H^{q}\), rather than \(H^{q}\). These spaces are isometrically isomorphic only when \(p=2\).
In line (4.0.3), we mean "the norm of the unit constant function, viewed as a bounded linear functional on the annihilator of the subspace spanned by \(f\mathscr{P}_{n}\)."
We first apply this duality to provide a lower bound for the OPA error in the case that we are approximating a polynomial with zeros in the disk.
**Proposition 4.0.5**.: _Suppose \(f\) is a polynomial_
\[f(z)=(z-w_{1})(z-w_{2})\cdots(z-w_{d}),\]
_with the roots being distinct, nonzero and contained inside \(\mathbb{D}\). Then, for \(1<p<\infty\) and \(\lambda:=q_{0,p}[f]\), we have_
\[\|1-\lambda f\|_{p}\geqslant 1-|w_{1}w_{2}\cdots w_{d}|.\]
Proof.: The space of functions in \(L^{q}\) which annihilate \(f\) contains functions of the form
\[c_{1}\Lambda_{w_{1}}+c_{2}\Lambda_{w_{2}}+\cdots+c_{d}\Lambda_{w_{d}},\]
where \(\Lambda_{w_{j}}\) denotes the point evaluation functional (or Szego kernel) at the point \(w_{j}\in\mathbb{D}\). Thus, by Lemma 4.0.1, we have
\[\|1-\lambda f\|_{p}=\left[\inf\|\psi\|_{q}\right]^{-1},\]
where the infimum is over \(\psi\in L^{q}\) satisfying \(\langle f,\psi\rangle=0\) and \(\psi_{0}=1\).
Let
\[B(z)=a\prod_{k=1}^{d}\frac{w_{k}-z}{1-\overline{w}_{k}z},\]
a constant multiple of the Blaschke product with the same zeros as \(f\). Its numerator has leading term \(\pm az^{d}\), while the denominator has leading term \(\pm\overline{w_{1}w_{2}\cdots w_{d}}z^{d}\) (with matching signs). Thus long division followed by partial fractions expansion results in an expression of the form
\[B(z)=\frac{a}{\overline{w_{1}w_{2}\cdots w_{d}}}+c_{1}\Lambda_{w_{1}}+c_{2} \Lambda_{w_{2}}+\cdots+c_{d}\Lambda_{w_{d}}.\]
Evaluating this equation at \(z=0\) tells us that
\[aw_{1}w_{2}\cdots w_{d}=\frac{a}{\overline{w_{1}w_{2}\cdots w_{d}}}+c_{1}+c_{2 }+\cdots+c_{d}.\]
This suggests making the specific choice of
\[\psi(z)=c_{1}\Lambda_{w_{1}}+c_{2}\Lambda_{w_{2}}+\cdots+c_{d}\Lambda_{w_{d}}\]
with the coefficients determined above. The requirement of \(\psi(0)=1\) therefore gives
\[aw_{1}w_{2}\cdots w_{d}=\frac{a}{\overline{w_{1}w_{2}\cdots w_{d}}}+1,\]
which will furnish the value of \(a\), namely,
\[a=\left[w_{1}w_{2}\cdots w_{d}-\frac{1}{\overline{w_{1}w_{2}\cdots w_{d}}} \right]^{-1}.\]
Finally, an application of the triangle inequality yields
\[\|1-\lambda f\|_{p} \geqslant\|\psi\|_{q}^{-1}\] \[\geqslant\left\|B-\frac{a}{\overline{w_{1}w_{2}\cdots w_{d}}} \right\|_{q}^{-1}\] \[\geqslant\left[\|B\|_{q}+\left\|\frac{a}{\overline{w_{1}w_{2} \cdots w_{d}}}\right\|_{q}\right]^{-1}\] \[=\left[|a|+\left|\frac{a}{w_{1}w_{2}\cdots w_{d}}\right|\right]^{ -1}\] \[=\frac{1-|f(0)|^{2}}{1+|f(0)|}\] \[=1-|f(0)|\] \[=1-|w_{1}w_{2}\cdots w_{d}|.\]
**Remark 4.0.6**.: The above result holds for any function \(f\) vanishing at the points \(w_{1},\ldots,w_{d}\). Further, by Theorem 3.1.4, the result also extends to any infinite Blaschke product.
Let us now use duality to further investigate OPA errors for more general functions.
**Proposition 4.0.7**.: _Let \(1<p<\infty\), \(1/p+1/q=1\), \(n\in\mathbb{N}\), and \(f\in H^{p}\) with \(f(0)=1\). Then_
\[\|q_{n-1,p}[f]f-1\|_{p}\geqslant\frac{1}{\|1+\psi_{1}z+\psi_{2}z^{2}+\cdots+ \psi_{n}z^{n}\|_{q}},\]
_where the coefficients \(\psi_{k}\), \(1\leqslant k\leqslant n\), satisfy the matrix equation_
\[\left[\begin{array}{cccc}f_{1}&f_{2}&f_{3}&\cdots&f_{n}\\ f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{1}\end{array}\right]\left[\begin{array}{c}\psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \vdots\\ \psi_{n}\end{array}\right]=\left[\begin{array}{c}-1\\ 0\\ 0\\ \vdots\\ 0\end{array}\right].\]
Proof.: It suffices to check that the function
\[\psi(z)=1+\psi_{1}z+\psi_{2}z^{2}+\cdots+\psi_{n}z^{n}\]
satisfies the hypotheses of Lemma 4.0.1. That is, for \(0\leqslant k\leqslant n-1\),
\[\langle z^{k}f,\psi\rangle=0.\]
This is ensured precisely by the linear system in the statement of the proposition.
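To make the bound concrete, here is a minimal Python sketch that assembles the linear system of the proposition and evaluates the resulting lower bound; the function \(f\), the degree \(n\), and the quadrature (uniform sampling of the circle) are illustrative assumptions, and the matrix can be singular for unlucky choices of \(f\).

```python
import numpy as np

n = 4
fc = np.array([1.0, 0.5, 0.25, 0.0, 0.0])    # Taylor coefficients f_0, ..., f_n (f_0 = 1)

# Row 0 is (f_1, ..., f_n); row j >= 1 is the shifted window (0, ..., f_0, f_1, ...).
Mt = np.zeros((n, n))
Mt[0, :] = fc[1:n + 1]
for j in range(1, n):
    Mt[j, j - 1:] = fc[:n - j + 1]

rhs = np.zeros(n)
rhs[0] = -1.0
psi = np.linalg.solve(Mt, rhs)               # coefficients psi_1, ..., psi_n

p = 4.0
q = p / (p - 1.0)
M = 4096
z = np.exp(2j * np.pi * np.arange(M) / M)
Psi = 1.0 + sum(psi[k] * z ** (k + 1) for k in range(n))
qnorm = np.mean(np.abs(Psi) ** q) ** (1.0 / q)
print("lower bound for ||q_{n-1,p}[f]f - 1||_p:", 1.0 / qnorm)
```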
With further calculation, the approach in the previous proposition can be used to show the following:
**Proposition 4.0.8**.: _Let \(1<p<\infty\), \(1/p+1/q=1\), \(n\in\mathbb{N}\), and \(f\in H^{p}\) with \(f(0)=1\). Let_
\[\frac{1}{f(z)}=1+g_{1}z+g_{2}z^{2}+\cdots\]
_be the power series of \(1/f\) about the origin._
_Then_
\[\|q_{n-1,p}[f]f-1\|_{p}\geqslant\frac{|g_{n}|}{\|1+g_{1}z+g_{2}z^{2}+\cdots+g_ {n}z^{n}\|_{q}}.\]
Proof.: Let us begin with the matrix equation from Proposition 4.0.7:
\[\left[\begin{array}{cccc}f_{1}&f_{2}&f_{3}&\cdots&f_{n}\\ f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ \vdots&\vdots&\cdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{1}\end{array}\right]\left[\begin{array}{c}\psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \vdots\\ \psi_{n}\end{array}\right]=\left[\begin{array}{c}-1\\ 0\\ 0\\ \vdots\\ 0\end{array}\right].\]
There is no harm in multiplying both sides of the equation on the left by the elementary permutation matrix
\[\left[\begin{array}{ccccc}0&0&0&\cdots&1\\ 1&0&0&\cdots&0\\ 0&1&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&0\end{array}\right]\]
(the next to last entry of the bottom row is \(1\)), which has the effect of changing the equation to
\[\left[\begin{array}{ccccc}f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ 0&0&f_{0}&\cdots&f_{n-3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{1}\\ f_{1}&f_{2}&f_{3}&\cdots&f_{n}\end{array}\right]\left[\begin{array}{c}\psi_{ 1}\\ \psi_{2}\\ \psi_{3}\\ \psi_{4}\\ \vdots\\ \psi_{n}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ 0\\ \vdots\\ -1\end{array}\right].\]
By successively subtracting multiples of the other rows, the bottom row can be placed in the form
\[\left[\begin{array}{ccccc}0&0&0&\cdots&C\end{array}\right]\]
for some constant \(C\), which could be zero. In fact, recalling that \(f_{0}=1\), we see that \(C\) must be given by
\[C=\det\left[\begin{array}{ccccc}f_{1}&f_{2}&f_{3}&\cdots&f_{n}\\ f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{1}\end{array}\right].\]
Furthermore, the sequence of row operations to diagonalize the matrix leaves the right side unchanged as
\[\left[\begin{array}{ccccc}0&0&0&\cdots&-1\end{array}\right]^{T}.\]
Assuming that \(C\neq 0\), and again recalling that \(f_{0}=1\), our matrix equation can be written as
\[\left[\begin{array}{ccccc}f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ 0&0&f_{0}&\cdots&f_{n-3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{0}\end{array}\right]\left[\begin{array}{c}\psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \vdots\\ \psi_{n}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ \vdots\\ -1/C\end{array}\right].\]
The inverse of the upper triangular (Toeplitz) matrix on the left is simply
\[\left[\begin{array}{ccccc}g_{0}&g_{1}&g_{2}&\cdots&g_{n-1}\\ 0&g_{0}&g_{1}&\cdots&g_{n-2}\\ 0&0&g_{0}&\cdots&g_{n-3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&g_{0}\end{array}\right],\]
where \(g(0)=1\) and \(g(z)=g_{0}+g_{1}z+g_{2}z^{2}+\cdots\) is the Taylor expansion of \(1/f(z)\), valid for some disk centered at the origin. The conclusion is that
\[\psi_{k}=-g_{n-k}/C\ \ \forall\,1\leqslant k\leqslant n.\]
Our next challenge is to find an analytical meaning for the constant \(C\). But notice that the row operations needed to clear entries from the bottom row of
\[\left[\begin{array}{ccccc}f_{0}&f_{1}&f_{2}&\cdots&f_{n-1}\\ 0&f_{0}&f_{1}&\cdots&f_{n-2}\\ 0&0&f_{0}&\cdots&f_{n-3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&f_{1}\\ f_{1}&f_{2}&f_{3}&\cdots&f_{n}\end{array}\right] \tag{4.0.9}\]
would (suitably modified) similarly clear the second through the last entries from the top row. Performing all of these (suitably modified) row operations on the identity matrix would have to result in
\[\left[\begin{array}{ccccc}1&g_{1}&g_{2}&\cdots&g_{n-1}\\ 0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\end{array}\right]\]
Then following carefully what operations are correspondingly performed on the last column in (4.0.9), we conclude that
\[C=f_{n}+g_{1}f_{n-1}+g_{2}f_{n-2}+\cdots+g_{n-1}f_{1}=-g_{n}.\]
Finally, note that
\[\left\|1+\frac{g_{n-1}}{g_{n}}z+\frac{g_{n-2}}{g_{n}}z^{2}+\cdots +\frac{1}{g_{n}}z^{n}\right\|_{q} =\frac{1}{|g_{n}|}\left\|g_{n}+g_{n-1}z+g_{n-2}z^{2}+\cdots+z^{n} \right\|_{q}\] \[=\frac{1}{|g_{n}|}\left\|z^{-n}(g_{n}+g_{n-1}z+g_{n-2}z^{2}+ \cdots+z^{n})\right\|_{q}\] \[=\frac{1}{|g_{n}|}\left\|1+g_{1}z^{-1}+g_{2}z^{-2}+\cdots+g_{n}z^ {-n}\right\|_{q}\] \[=\frac{1}{|g_{n}|}\left\|1+g_{1}z+g_{2}z^{2}+\cdots+g_{n}z^{n} \right\|_{q},\]
where, in the last step, the change of variable \(\theta\longmapsto-\theta\), for \(z=e^{i\theta}\), leaves the integral defining the norm unchanged.
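The same bound can also be evaluated directly from the coefficients of \(1/f\), which are produced by the standard power-series inversion recursion \(g_{0}=1\), \(g_{m}=-\sum_{j=1}^{m}f_{j}g_{m-j}\) (valid since \(f_{0}=1\)). The sketch below is illustrative, with the same quadrature caveats as before.

```python
import numpy as np

n = 4
fc = np.array([1.0, 0.5, 0.25, 0.0, 0.0])   # f_0, ..., f_n with f_0 = 1

g = np.zeros(n + 1)
g[0] = 1.0
for m in range(1, n + 1):                   # invert the power series of f
    g[m] = -sum(fc[j] * g[m - j] for j in range(1, m + 1))

p = 4.0
q = p / (p - 1.0)
M = 4096
z = np.exp(2j * np.pi * np.arange(M) / M)
G = sum(g[k] * z ** k for k in range(n + 1))
qnorm = np.mean(np.abs(G) ** q) ** (1.0 / q)
print("lower bound of Proposition 4.0.8:", abs(g[n]) / qnorm)
```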
We will provide an improvement to the above proposition, but we must first consider the problem of finding \(G\in zH^{p}\) such that
\[\left\|\overline{G}+F\right\|_{p}\]
is minimized. Duality tells us that (as we continue to mark extremal functions with (*))
\[\left\|\overline{G^{*}}+F\right\|_{p} =\sup\left\{\frac{|\langle F,\psi\rangle|}{\|\psi\|_{q}}:\ \psi\in H^{q}\right\}\] \[=\frac{|\langle F,\psi^{*}\rangle|}{\|\psi^{*}\|_{q}}\] \[=|\langle F,\psi^{*}\rangle|/\inf\left\{\|\psi^{*}+K\|_{q}:\ K\in \overline{zH^{q}}\right\}.\]
Once again, we are up against the dual of \(H^{p}\) being isomorphic to \(H^{q}\), but not isometrically.
Nonetheless, we must consider the metric projection of \(F\) onto the subspace \(\overline{zH^{p}}\). Notice that \((\overline{G^{*}}+F)^{\langle p-1\rangle}\) has vanishing negative Fourier coefficients. Therefore, there exists \(h\in H^{q}\) such that
\[(\overline{G^{*}}+F)^{\langle p-1\rangle}=h,\]
and thus, taking a \(\langle q-1\rangle\) power, we have
\[|h|^{q-2}h=\overline{G^{*}}+F.\]
In turn, we see that finding \(G^{*}\) amounts to solving the above highly unpleasant functional equation.
Let us record this in the following result, where we write \(P_{+}\) for the Riesz projection, given by
\[\sum_{k=-\infty}^{\infty}c_{k}z^{k}\longmapsto\sum_{k=0}^{\infty}c_{k}z^{k},\]
which is bounded from \(L^{p}\to H^{p}\).
**Proposition 4.0.10**.: _Let \(1<p<\infty\) and \(1/p+1/q=1\). Suppose \(h\in H^{q}\), and define \(F:=P_{+}\overline{h^{\langle q-1\rangle}}\). Then_
\[\inf\left\{\|F+\overline{G}\|_{p}:\ G\in zH^{p}\right\}\]
_is attained by taking \(F+\overline{G}=\overline{h^{\langle q-1\rangle}}\). In this case, the value of the infimum is given by_
\[\inf\left\{\|F+\overline{G}\|_{p}:\ G\in zH^{p}\right\}=\|h\|_{q}^{q-1}.\]
This warrants the following observation:
**Proposition 4.0.11**.: _For \(1<p<\infty\), the set of images \(P_{+}\overline{(H^{q})^{\langle q-1\rangle}}\) is dense in \(H^{p}\)._
Proof.: Suppose \(g\in H^{q}\) has the property that
\[\left\langle P_{+}\,\overline{h^{\langle q-1\rangle}},g\right\rangle=0\]
for all \(h\in H^{q}\). Then
\[0 =\left\langle P_{+}\,\overline{h^{\langle q-1\rangle}},g\right\rangle\] \[=\int_{\mathbb{T}}P_{+}\,\overline{h^{\langle q-1\rangle}} \overline{g}\,dm\] \[=\int_{\mathbb{T}}\overline{h^{\langle q-1\rangle}}\overline{g} \,dm.\]
We are able to drop the projection in the last line since integration against \(\overline{g}\) will annihilate the negative frequencies of \(\overline{h^{\langle q-1\rangle}}\). In particular, this must hold for \(h=g\), hence \(0=\|g\|_{q}^{q}\). This forces \(g\) to be identically zero.
In turn, we can make the following improvement to Proposition 4.0.8.
**Proposition 4.0.12**.: _Let \(1<p<\infty\), \(1/p+1/q=1\), \(n\in\mathbb{N}\), and \(f\in H^{p}\) with \(f(0)=1\). Let_
\[\frac{1}{f(z)}=1+g_{1}z+g_{2}z^{2}+\cdots\]
_be the power series of \(1/f\) about the origin._
_If \(g_{n}\neq 0\), then_
\[\|q_{n-1,p}[f]f-1\|_{p}\geqslant\frac{1}{\|h^{\langle p-1\rangle}\|_{q}},\]
_where \(h\in H^{p}\) satisfies_
\[P_{+}\,\overline{h^{\langle p-1\rangle}}=1+\frac{g_{n-1}}{g_{n}}z+\frac{g_{n- 2}}{g_{n}}z^{2}+\cdots+\frac{g_{0}}{g_{n}}z^{n}.\]
Let us now consider the case where \(n\longrightarrow\infty\). Then, by writing \(f=JG\) for \(J\) inner and \(G\) outer, we see, for all \(k\geq 0\), that \(\psi\in L^{q}\) satisfies
\[0 =\langle z^{k}f,\psi\rangle\] \[=\langle z^{k}JG,\psi\rangle\] \[=\langle z^{k}G,\overline{J}\psi\rangle.\]
As the linear span of \(\left\{z^{k}G:k\geq 0\right\}\) is dense in \(H^{p}\) (because \(G\) is outer), we have, for any \(n\geq 0\),
\[0 =\langle z^{n},\overline{J}\psi\rangle\] \[=\int_{0}^{2\pi}J(e^{i\theta})\overline{\psi(e^{i\theta})}e^{in \theta}\,\frac{d\theta}{2\pi}.\]
From this, it follows that \(K(z):=J(z)\overline{\psi(z)}/z\) is an element of \(H^{q}\). We further divine that \(\psi\) must be determined by
\[\psi(z)=J(z)\overline{zK(z)},\ z\in\mathbb{T},\]
for some \(K\in H^{q}\). The condition \(\psi(0)=1\) takes the form
\[1=J_{1}\overline{K_{0}}+J_{2}\overline{K_{1}}+J_{3}\overline{K_{2}}+\cdots.\]
We now must minimize \(\|J\overline{zK}\|_{q}\) subject to \(K\in H^{q}\) satisfying the above constraint. It is tempting to try \(K=\overline{J}\), but this will not work, since
\[0=J_{1}\overline{J_{0}}+J_{2}\overline{J_{1}}+J_{3}\overline{J_{2}}+\cdots.\]
Instead, take \(K(z)=c[J(z)-J(0)]/z\), where \(c^{-1}=|J_{1}|^{2}+|J_{2}|^{2}+|J_{3}|^{2}+\cdots\). Then
\[J_{1}\overline{K_{0}}+J_{2}\overline{K_{1}}+J_{3}\overline{K_{2}}+\cdots=c \left(|J_{1}|^{2}+|J_{2}|^{2}+|J_{3}|^{2}+\cdots\right)=1,\]
as needed.
Using this choice of \(K\) to compute \(\psi\), we obtain
\[\psi(z) =J(z)\overline{zK(z)}\] \[=cJ(z)\overline{z}\ \frac{\overline{J(z)}-\overline{J(0)}}{ \overline{z}}\] \[=c(1-J(z)\overline{J(0)}).\]
Since \(\|q_{n,p}[f]f-1\|_{p}\geqslant 1/\|\psi\|_{q}\), this furnishes the following bound.
**Proposition 4.0.13**.: _Let \(1<p<\infty\), \(1/p+1/q=1\), \(n\in\mathbb{N}\), and \(f\in H^{p}\) with \(f(0)=1\). Then_
\[\|q_{n,p}[f]f-1\|_{p}\geqslant\frac{|J_{1}|^{2}+|J_{2}|^{2}+|J_{3}|^{2}+\cdots} {\|1-\overline{J(0)}J(z)\|_{q}},\]
_where \(J\) is the inner part of \(f\)._
Incidentally, \(c^{-1}=(|J_{1}|^{2}+|J_{2}|^{2}+|J_{3}|^{2}+\cdots)=\|J\|_{2}^{2}-|J(0)|^{2}=1 -|J(0)|^{2}\), so the lower bound above could be written equivalently as
\[\frac{1-|J(0)|^{2}}{\left\|(1-|J(0)|^{2})-\overline{J(0)}(J_{1}z+J_{2}z^{2}+ \cdots)\right\|_{q}},\]
which is obviously no greater than \(1\), as needed.
We now step away from duality. Our final results concern OPA errors, but are proven with \(H^{2}\) methods. The following proposition should be compared with Proposition 4.0.5; although the result below provides a better bound, it holds only for \(p>2\).
**Proposition 4.0.14**.: _Let \(2<p<\infty\), and suppose \(f\in H^{p}\) has a factorization \(f=JG\), where \(J\) is inner and \(G\) is outer. If \(f(0)\neq 0\), then for any \(n\in\mathbb{N}\),_
\[\|q_{n,p}[f]f-1\|_{p}\geqslant\sqrt{1-|J(0)|^{2}}.\]
Proof.: Let \(\mathscr{P}\) be the collection of all polynomials. Then
\[\|q_{n,p}[f]f-1\|_{p} \geqslant\inf_{Q\in\mathscr{P}}\|Qf-1\|_{p}\] \[\geqslant\inf_{Q\in\mathscr{P}}\|Qf-1\|_{2}\] \[=\inf_{Q\in\mathscr{P}}\|QJG-1\|_{2}\] \[=\inf_{Q\in\mathscr{P}}\|QG-\overline{J}\|_{2}\] \[=\|\overline{J(0)}-\overline{J}\|_{2},\]
with the last equality following from the fact that, since \(G\) is outer, the set \(\{QG:Q\in\mathscr{P}\}\) is dense in \(H^{2}\). Now use
\[1=\|J\|_{2}^{2}=|J(0)|^{2}+\|J-J(0)\|_{2}^{2}.\]
Note further that if \(J=B\) is a Blaschke product, then this implies that
\[\|q_{n,p}[f]f-1\|_{p}\geqslant\|J(0)-J\|_{2}=\sqrt{1-|B(0)|^{2}}=\sqrt{1-|w_{ 1}w_{2}w_{3}\cdots|^{2}},\]
where \(w_{1},w_{2},w_{3},\ldots\) are the zeros of \(B\).
We end by providing a related result when \(1<p<2\).
**Proposition 4.0.15**.: _Let \(1<p<2\) and suppose \(f(0)\neq 0\). Then for any \(n\in\mathbb{N}\),_
\[\|q_{n,p}[f]f-1\|_{p}\leqslant\sqrt{1-(q_{n,2}[f]f)(0)}.\]
Proof.: Routine bounds yield
\[\|q_{n,p}[f]f-1\|_{p} \leqslant\|q_{n,2}[f]f-1\|_{p}\] \[\leqslant\|q_{n,2}[f]f-1\|_{2}\] \[=\sqrt{1-(q_{n,2}[f]f)(0)},\]
where the last equality is a consequence of the linear system described in Equation 1.0.1.
Taking \(n=0\) above, we have the simple bound
\[\|q_{n,p}[f]f-1\|_{p}\leqslant\left(1-\frac{|f(0)|^{2}}{\|f\|_{2}^{2}}\right) ^{1/2}.\]
|
2310.19323 | A Low-Complexity Machine Learning Design for mmWave Beam Prediction | The 3rd Generation Partnership Project (3GPP) is currently studying machine
learning (ML) for the fifth generation (5G)-Advanced New Radio (NR) air
interface, where spatial and temporal-domain beam prediction are important use
cases. With this background, this letter presents a low-complexity ML design
that expedites the spatial-domain beam prediction to reduce the power
consumption and the reference signaling overhead, which are currently
imperative for frequent beam measurements. Complexity analysis and evaluation
results showcase that the proposed model achieves state-of-the-art accuracy
with lower computational complexity, resulting in reduced power consumption and
faster beam prediction. Furthermore, important observations on the
generalization of the proposed model are presented in this letter. | Muhammad Qurratulain Khan, Abdo Gaber, Mohammad Parvini, Philipp Schulz, Gerhard Fettweis | 2023-10-30T07:48:15Z | http://arxiv.org/abs/2310.19323v2 | # A Low-Complexity Machine Learning Design for mmWave Beam Prediction
###### Abstract
The 3rd Generation Partnership Project (3GPP) is currently studying machine learning (ML) for the fifth generation (5G)-Advanced New Radio (NR) air interface, where spatial and temporal-domain beam prediction are important use cases. With this background, this letter presents a low-complexity ML design that expedites the spatial-domain beam prediction to reduce the power consumption and the reference signaling overhead, which are currently imperative for frequent beam measurements. Complexity analysis and evaluation results showcase that the proposed model achieves state-of-the-art accuracy with lower computational complexity, resulting in reduced power consumption and faster beam prediction. Furthermore, important observations on the generalization of the proposed model are presented in this letter.
beam prediction, machine learning (ML), millimeter-wave (mmWave), supervised learning (SL).
## I Introduction
The abundant bandwidth available in millimeter-wave (mmWave) bands makes them a requisite for higher throughput. However, to achieve an adequate link margin, beamforming via large antenna arrays is essential [1]. Consequently, evaluating beam qualities through frequent beam measurements and reporting is imperative to help the base station (BS) and the user equipment (UE) decide the optimal beam pair for link establishment. Within the 3rd Generation Partnership Project (3GPP), this is referred to as the beam management procedure.
In order to enable the UE to measure the beam qualities, beamformed reference signals (synchronization signal blocks (SSBs)) are sequentially transmitted from the BS in the form of an SSB burst. This allows the UE to measure the qualities of all the BS transmit beams in terms of their reference signal received powers (RSRPs) through one of its receive beams. Further, to measure the qualities of all possible transmit-receive beam pairs, several SSB bursts are transmitted. This procedure of beam quality measurement is known as exhaustive beam scan (EBS), which suffers from large beam measurement overhead, increased latency, and higher power consumption [2, 3]. To overcome this, a two-level hierarchical beam scan (HBS) consisting of parent (wide) and child (narrow) beams is employed [4]. Nevertheless, it suffers from increased latency and inaccuracy of beam selection.
Recently, machine learning (ML) methods have been extensively applied to wireless communications to solve non-linear problems that are burdensome to resolve with conventional signal processing techniques. Consequently, several studies propose the use of ML for beam prediction and selection [2]. A straightforward approach to reduce the beam measurement overhead is to utilize the UE location information [5] to train an ML model for beam prediction. However, transmission of UE location information, which may not always be available to the BS, poses an additional feedback overhead. To avoid this issue, the study in [6] fuses the concept of HBS with a supervised ML model and exploits the spatial correlation among the parent and the child beam qualities to predict the optimal child beam. A similar approach in [7] utilizes the received signal vector of parent beams as an input to a convolutional neural network (CNN). Another approach in [8] proposes to reduce the beam measurement overhead by transmitting a subset of child beams and then utilizes a CNN that predicts the optimal beam by learning the spatial correlation among child beams.
Starting from 2022, the study of ML for the fifth generation (5G)-Advanced New Radio (NR) air interface is an important project at 3GPP. Here, the focus is to explore the benefits of augmenting the NR air interface with ML models for enhanced performance and/or reduced overhead and complexity [9]. An important study item in this project is the evaluation of ML for beam management, where spatial and temporal-domain beam prediction are the sub use cases [10]. Following 3GPP guidelines, companies report their proposed evaluation methodology and results on ML-based beam prediction [11]. A recent proposal for spatial-domain beam prediction is presented in [12], where based on the received power of a subset of the transmit beams, a CNN is trained to predict the RSRPs of the non-transmitted beams resulting in reduced overhead.
Though most of the discussed ML solutions reduce the beam measurement overhead while achieving performance close to EBS, no significant attention has been paid to the model computational complexity, model training time, and its generalization capabilities. To bridge this research gap, this letter presents a low-complexity ML beam prediction approach that achieves performance close to the optimal EBS but with lower computational complexity as compared to other ML approaches, resulting in faster beam prediction. Additionally, to investigate the generalization capabilities of our model, we evaluate its performance over 3GPP specified scenarios.
## II System Model
This section details channel and beam steering models, followed by an overview of the beam management procedure.
### _Channel Model_
We consider a downlink mmWave multiple-input multiple-output (MIMO) communication system, where the BS and the UE are equipped with \(N_{\rm T}\) and \(N_{\rm R}\) antenna elements, respectively. Using the clustered channel model, the channel is assumed to be the sum of the line-of-sight (LOS) path and \(C\) non-line-of-sight (NLOS) clusters with \(L\) paths per cluster. The channel matrix \(\textbf{H}\in\mathbb{C}^{N_{\rm R}\times N_{\rm T}}\) can then be written as [13]
\[\textbf{H}=\underbrace{\sqrt{\frac{K\Lambda}{K+1}}\,\mathbf{a}_{\text{R}}(\phi_{\text{LOS}}^{\text{R}},\theta_{\text{LOS}}^{\text{R}})\mathbf{a}_{\text{T}}^{H}(\phi_{\text{LOS}}^{\text{T}},\theta_{\text{LOS}}^{\text{T}})}_{\textbf{H}_{\text{LOS}}}+\underbrace{\sqrt{\frac{\Lambda}{L(K+1)}}\sum_{c=1}^{C}\sum_{l=1}^{L}\alpha_{c,l}\mathbf{a}_{\text{R}}(\phi_{c,l}^{\text{R}},\theta_{c,l}^{\text{R}})\mathbf{a}_{\text{T}}^{H}(\phi_{c,l}^{\text{T}},\theta_{c,l}^{\text{T}})}_{\textbf{H}_{\text{NLOS}}}. \tag{1}\]
Here, the \(l\)-th path of the \(c\)-th cluster has azimuth (elevation) angle-of-arrival (AoA) \(\phi_{c,l}^{\text{R}}(\theta_{c,l}^{\text{R}})\) and azimuth (elevation) angle-of-departure (AoD) \(\phi_{c,l}^{\text{T}}(\theta_{c,l}^{\text{T}})\), while \(\alpha_{c,l}\) is the complex path gain. The same variables are analogously defined for the LOS path and are indicated by the LOS index. Furthermore, \(\mathbf{a}_{\text{R}}(\cdot)\in\mathbb{C}^{N_{\rm R}\times 1}\) and \(\mathbf{a}_{\text{T}}(\cdot)\in\mathbb{C}^{N_{\rm T}\times 1}\) denote the UE and the BS array response, respectively, \((\cdot)^{H}\) denotes conjugate transpose, \(K\) is the Ricean factor, and \(\Lambda\) indicates the pathloss.
We assume a uniform planar array (UPA) in the y-z plane at the BS and the UE with \(N_{\rm y}\) and \(N_{\rm z}\) antenna elements (\(N_{\rm y}N_{\rm z}=N\)) on the \(\rm y\) and \(\rm z\) axes, respectively. Here, for ease of notation, we drop the subscript for the BS and UE. The array response vector for the UPA can then be written as
\[\mathbf{a}(\phi,\theta)=\frac{1}{\sqrt{N}}[1,\cdots,e^{j\frac{2\pi}{\lambda}d(y^{\prime}\sin(\phi)\sin(\theta)+z^{\prime}\cos(\theta))},\cdots,\] \[e^{j\frac{2\pi}{\lambda}d((N_{\rm y}-1)\sin(\phi)\sin(\theta)+(N_{\rm z}-1)\cos(\theta))}]^{T}, \tag{2}\]
where \(y^{\prime}\in\{0,1,\cdots,N_{\rm y}-1\}\), \(z^{\prime}\in\{0,1,\cdots,N_{\rm z}-1\}\), while \(\lambda\) and \(d=\frac{\lambda}{2}\) indicate the wavelength and antenna element spacing, respectively.
### _Beam Steering Model_
We consider phase shifter based analog beamforming with one radio frequency (RF) chain. At the BS the transmit signal is beamformed by a beamforming vector \(\textbf{f}=[f_{1},f_{2},\cdots,f_{N_{\rm T}}]^{T}\in\mathbb{C}^{N_{\rm T}\times 1}\) and at the UE the received signals are combined with a receive combining vector \(\textbf{w}=[w_{1},w_{2},\cdots,w_{N_{\rm R}}]^{T}\in\mathbb{C}^{N_{\rm R}\times 1}\). Here, \(f_{i}\) and \(w_{j}\) denote the complex weight on the \(i\)-th transmit and \(j\)-th receive antenna element, respectively. The transmit and receive beams are selected from the predefined codebooks \(\mathcal{F}\) and \(\mathcal{W}\), consisting of \(F\) and \(W\) candidate beams, respectively. The codebooks are designed based on the following beam steering scheme:
\[\textbf{f}\in\mathcal{F}=\{\mathbf{a}_{\text{T}}(\bar{\phi}_{1}^{ \text{T}},\bar{\theta}_{1}^{\text{T}}),\mathbf{a}_{\text{T}}(\bar{\phi}_{2}^{ \text{T}},\bar{\theta}_{2}^{\text{T}}),\cdots,\mathbf{a}_{\text{T}}(\bar{ \phi}_{F}^{\text{T}},\bar{\theta}_{F}^{\text{T}})\} \tag{3}\] \[\textbf{w}\in\mathcal{W}=\{\mathbf{a}_{\text{R}}(\bar{\phi}_{1}^{ \text{R}},\bar{\theta}_{1}^{\text{R}}),\mathbf{a}_{\text{R}}(\bar{\phi}_{2}^{ \text{R}},\bar{\theta}_{2}^{\text{R}}),\cdots,\mathbf{a}_{\text{R}}(\bar{\phi} _{W}^{\text{R}},\bar{\theta}_{W}^{\text{R}})\} \tag{4}\]
Here, \(\bar{\phi}_{m}^{\text{T}}(\bar{\theta}_{m}^{\text{T}})\) for the \(m\)-th transmitting beam \(\textbf{f}_{m}\), \(m\in\{1,2,\cdots,F\}\) and \(\bar{\phi}_{n}^{\text{R}}(\bar{\theta}_{n}^{\text{R}})\) for the \(n\)-th receiving beam \(\textbf{w}_{n}\), \(n\in\{1,2,\cdots,W\}\) are the quantized azimuth (elevation) AoD and AoA, respectively. Given the channel matrix **H**, the transmit signal \(x\), the \(m\)-th transmitting beam \(\textbf{f}_{m}\) and the \(n\)-th receiving beam \(\textbf{w}_{n}\), the received signal \(y_{m,n}\) is
\[y_{m,n}=\sqrt{P}\textbf{w}_{n}^{H}\textbf{Hf}_{m}x+\textbf{w}_{n}^{H}\boldsymbol{ \eta}, \tag{5}\]
where \(P\) is the transmit power and \(\boldsymbol{\eta}\in\mathbb{C}^{N_{\rm R}\times 1}\) is the additive white Gaussian noise (AWGN).
### _Beam Management in 5G NR_
The 3GPP beam management procedure is based on the EBS and aims to find the optimal beam pair \(\{\textbf{f}_{m^{*}},\textbf{w}_{n^{*}}\}\) that maximizes the RSRP given as: \(\text{RSRP}_{m,n}=|y_{m,n}|^{2}\). The optimization problem can be formulated as
\[\{m^{*},n^{*}\}=\underset{\begin{subarray}{c}m\in\{1,2,\cdots,F\},\\ n\in\{1,2,\cdots,W\}\end{subarray}}{\operatorname{argmax}}\ \text{RSRP}_{m,n}. \tag{6}\]
EBS solves this optimization problem by exhaustively searching over all possible beamforming and combining vectors leading to an excessively huge beam training overhead of \(F\cdot W\) beam measurements.
To reduce this beam measurement overhead, HBS utilizes a multi-resolution codebook and the problem of beam selection is divided into two levels. The first-level search identifies the best parent beam by solving
\[\{m_{\rm p}^{*},n_{\rm p}^{*}\}=\underset{\begin{subarray}{c}m_{\rm p}\in\{1,2,\cdots,F_{\rm p}\},\\ n_{\rm p}\in\{1,2,\cdots,W_{\rm p}\}\end{subarray}}{\operatorname{argmax}}\ \text{RSRP}_{m_{\rm p},n_{\rm p}}^{\rm p}. \tag{7}\]
Here, \(F_{\rm p}=\frac{F}{s_{\rm T}}\) and \(W_{\rm p}=\frac{W}{s_{\rm R}}\) indicate the number of parent beams at the BS and UE, respectively. Further, \(s_{\rm T}\) and \(s_{\rm R}\) define the number of child beams within each parent beam at the BS and UE, respectively. After identifying the best parent beam pair in (7), the second-level search confirms the optimal child beam pair within its range by solving
\[\{m^{*},n^{*}\}=\underset{\begin{subarray}{c}m\in\{(m_{\rm p}^{*}-1)s_{\rm T}+1, \cdots,m_{\rm p}^{*}s_{\rm T}\},\\ n\in\{(n_{\rm p}^{*}-1)s_{\rm R}+1,\cdots,n_{\rm p}^{*}s_{\rm R}\}\end{subarray}}{ \text{argmax}}\text{RSRP}_{m,n}^{c}. \tag{8}\]
Notably, the first and the second-level search require \(F_{\rm p}\cdot W_{\rm p}\) and \(s_{\rm R}\cdot s_{\rm T}\) beam measurements, respectively, resulting in reduced beam measurement overhead. However, the multi-level search incurs increased latency.
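As an illustration of the two-level procedure in (7) and (8), the following minimal Python sketch selects a beam pair from already measured RSRP matrices; the array shapes and random values are placeholders, and zero-based indexing is used.

```python
import numpy as np

F, W, sT, sR = 64, 8, 4, 2
Fp, Wp = F // sT, W // sR

rng = np.random.default_rng(0)
rsrp_parent = rng.random((Fp, Wp))    # first-level (parent) measurements
rsrp_child = rng.random((F, W))       # in practice only the selected block is probed

# First level, Eq. (7): best parent beam pair.
mp, np_idx = np.unravel_index(np.argmax(rsrp_parent), rsrp_parent.shape)

# Second level, Eq. (8): best child pair within the selected parent's range.
block = rsrp_child[mp * sT:(mp + 1) * sT, np_idx * sR:(np_idx + 1) * sR]
dm, dn = np.unravel_index(np.argmax(block), block.shape)
m_star, n_star = mp * sT + dm, np_idx * sR + dn
print("selected beam pair (zero-based):", m_star, n_star)
```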
## III Low-Complexity machine learning Design for mmWave Beam Prediction
In this section, we leverage the angular-domain spatial correlation to propose a low-complexity beam prediction model for fast beam training. Motivated by the fact that very large antenna arrays can only be employed at the BS due to size constraints, in the following sections we limit our discussion to the identification of the optimal transmit beam, i.e., we assume that the optimal receive beam is known.
### _Algorithm Framework_
Motivated by the two-level beam search, we propose to cover the whole angular region with the first-level parent beams. By doing so, we observe that there exists a strong angular spatial correlation among parent and child beams in a given environment. As an example, Fig. 1 shows the angular spatial correlation between the RSRPs of the parent and the child beams, where each parent beam contains four child beams. Here, it can be observed that the parent beam has a stronger correlation with a limited number of child beams. Consequently, we assume that the RSRP\({}^{c}\) of the child beams is a function \(f_{1}(\cdot)\) of the parent RSRP values, i.e.,
\[\text{RSRP}^{c}=f_{1}(\text{RSRP}^{\text{p}}). \tag{9}\]
In particular, we aim at probing the parent beams and obtaining their corresponding RSRPs from the received signal vector \(\mathbf{y}^{\text{p}}=[y_{1}^{\text{p}},y_{2}^{\text{p}},\cdots,y_{F_{\text{p}}}^{\text{p}}]^{T}\); by intelligently merging these parent RSRPs with the strong correlation among parent and child beams, we can predict the optimal child beam index \(m^{*}\). Due to the discrete number of candidate beams, the beam prediction problem can be formulated as a multiclass classification problem and can be written as
\[m^{*}=f_{2}(\text{RSRP}^{\text{p}}),\quad m^{*}\in\{1,2,\cdots,F\} \tag{10}\]
where \(f_{2}(\cdot)\) is the function that learns the correlation between parent and child RSRPs for optimal beam index prediction. Further, due to the highly non-linear relationship between RSRPs and channel directivity, the prediction is difficult to obtain with conventional signal processing methods. With this background, we propose a low-complexity ML design for beam prediction in the following section.
### _Model Design_
In this section, we introduce our ML model and its corresponding inputs and outputs as shown in Fig. 2.
#### Iii-B1 Input Layer
Based on our previous discussions, the RSRP\({}^{\text{p}}\) of the parent beams obtained via the first level of traditional HBS is provided as an input to the model. This indicates that the input layer consists of \(F_{\text{p}}\) nodes. As an example, considering \(F=64\) beams and selecting \(s_{\text{T}}=4\) results in \(F_{\text{p}}=16\) parent beams which means that a beam measurement overhead reduction of \(1-\frac{16}{64}=75\%\) is achieved as compared to the EBS.
#### Iii-B2 Output Layer
For the prediction of the optimal child beam from all the candidate child beams, a fully-connected (FC) layer, consisting of \(F\) nodes is introduced, which learns the spatial correlation between RSRP\({}^{\text{p}}\) and RSRP\({}^{\text{c}}\) and transforms it to the candidate child beams. Finally, a non-linear softmax activation layer is introduced that returns the probabilities of all the child beams. The output of the proposed low-complexity neural network (NN) can be written as
\[\hat{\boldsymbol{\mathcal{P}}}=\text{softmax}(\mathbf{A}^{T}\text{RSRP}^{ \text{p}}+\mathbf{b}). \tag{11}\]
Here, \(\hat{\boldsymbol{\mathcal{P}}}\in\mathbb{R}^{F\times 1}\) is the predicted output probability vector of all the child beams, while \(\mathbf{A}\in\mathbb{R}^{F_{\text{p}}\times F}\) and \(\mathbf{b}\in\mathbb{R}^{F\times 1}\) are the weights and the biases, respectively. Finally, the child beam with maximum probability \(\hat{\mathcal{P}}_{m}\) is selected, i.e.,
\[\hat{m}^{*}=\operatorname*{argmax}_{m\in\{1,2,\cdots,F\}}\hat{\mathcal{P}}_{m}. \tag{12}\]
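For concreteness, a minimal sketch of the model in (11) and (12) follows, written here in PyTorch; the layer sizes follow the running example with \(F_{\text{p}}=16\) and \(F=64\), and this is an illustrative sketch rather than the released implementation [14].

```python
import torch
import torch.nn as nn

F_p, F = 16, 64   # parent beams in, child-beam probabilities out

model = nn.Sequential(
    nn.Linear(F_p, F),      # weights A and biases b of Eq. (11)
    nn.Softmax(dim=-1),     # probabilities of all candidate child beams
)

rsrp_parent = torch.randn(1, F_p)      # placeholder RSRP^p input vector
probs = model(rsrp_parent)             # Eq. (11)
m_hat = torch.argmax(probs, dim=-1)    # Eq. (12): most probable child beam
print("predicted child beam index:", int(m_hat))
```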
## IV Performance Evaluation
This section details dataset generation, model training, complexity analysis, and performance evaluation over specified key performance indicators (KPIs). For reproducibility of results, our simulation dataset and source code are publicly available [14].
### _Dataset Generation and Model Training_
For dataset collection, we utilize the EBS approach in combination with HBS. Our dataset consists of parent RSRP measurements, i.e., RSRP\({}^{\text{p}}\) obtained via the traditional HBS, which are provided as input features to the ML model. In addition, the offline training labels, i.e., optimal beam indices, are obtained via the traditional EBS [15]. Table I lists the default simulation parameters. The location of the UE is drawn based on a uniform spatial distribution in the cell coverage area. The noise power \(\sigma^{2}\) is computed as \((-174+10\text{log}_{10}B+N_{\text{F}})\) dBm and the path loss is given as \((20\text{log}_{10}d+20\text{log}_{10}f_{\text{c}}-147.56)\) dB, where \(d\) indicates distance. Finally, the channel model is considered as a clustered delay line (CDL) model [13]. Further, to investigate the generalization capabilities of our ML model, we consider the following scenarios with different combinations of channel profiles [15].
* Scenario \(1\): The ML model is trained based on a training dataset constructed by utilizing the CDL-D channel profile and performs inference on the UE with the same channel profile but with unknown location.
Fig. 1: Spatial correlation among RSRPs of parent and child beams.
Fig. 2: Proposed low-complexity ML design for beam prediction.
* Scenario \(2\): The ML model is trained based on a training dataset constructed by utilizing the CDL-D channel profile and performs inference on a UE with the CDL-E channel profile and with unknown location.
* Scenario \(3\): The ML model is trained on the mixed dataset from the above scenarios and performs inference on the UE of both channel profiles but with unknown location.
Our dataset consists of \(25{,}000\) samples, where the training, validation, and testing data split is \(70\%\), \(10\%\), and \(20\%\), respectively. Further, the ML model is trained for \(n_{\mathrm{e}}=100\) epochs, the model parameters are optimized by the Adam optimizer [16] with the mean square error as loss function.
### _Key Performance Indicators_
For performance evaluation in terms of beam measurement overhead, the KPI is the reference signaling overhead reduction (%), defined as \(1-\frac{N}{M}\), where \(N\) is the number of beams (SSBs) required as input by the ML model and \(M\) is the total number of beams to be predicted [15]. For beam prediction accuracy, the KPI Top-\(K\) (%) is defined as the percentage of cases in which the truly optimal genie-aided transmit beam is among the \(K\) best beams predicted by the ML model, and the beam prediction error (%) is calculated as \(1-\) beam prediction accuracy. Here, the Top-\(1\) genie-aided transmit beam is obtained via EBS [15]. Further, the beam prediction accuracy is also evaluated in terms of the achieved average RSRP. Finally, for complexity analysis, we compare the model complexity in terms of the number of trainable parameters and the number of floating-point operations (FLOPs).
### _Complexity Analysis_
An important measure of ML model complexity is the number of trainable parameters (\(n_{\mathrm{l}}\)), which for an FC-NN layer with \(n_{\mathrm{i}}\) inputs and \(n_{\mathrm{o}}\) outputs can be computed as \(n_{\mathrm{l}}=(n_{\mathrm{i}}+1)n_{\mathrm{o}}\). Consequently, for the proposed model the number of parameters is \(n_{\mathrm{l}}=(F_{\mathrm{p}}+1)F=1088\). Further, the number of trainable parameters for a convolutional layer can be obtained as \(n_{\mathrm{l}}=n_{\mathrm{f}}(f_{\mathrm{h}}f_{\mathrm{w}}f_{\mathrm{d}}+1)\), where \(n_{\mathrm{f}},f_{\mathrm{h}},f_{\mathrm{w}}\), and \(f_{\mathrm{d}}\) indicate the number of filters, filter height, width, and depth, respectively. We evaluate the complexity in terms of model size with \(32\)-bit precision. Table II indicates that, due to a smaller number of trainable parameters, the proposed model has the smallest size as compared to other models.
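A small sketch of these counting formulas, evaluated for the running example, is given below; the helper names are illustrative.

```python
def fc_params(n_in, n_out):
    """Trainable parameters of a fully-connected layer: (n_i + 1) * n_o."""
    return (n_in + 1) * n_out

def conv_params(n_f, f_h, f_w, f_d):
    """Trainable parameters of a convolutional layer: n_f (f_h f_w f_d + 1)."""
    return n_f * (f_h * f_w * f_d + 1)

n_params = fc_params(16, 64)   # proposed model: 1088 parameters
size_bytes = n_params * 4      # model size at 32-bit precision
print(n_params, "parameters,", size_bytes, "bytes")
```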
The time complexity of our proposed ML model is compared in terms of the number of required FLOPs using Big-\(\mathcal{O}\) notation. During training, the ML model performs a forward and a backward pass, so it is useful to analyze both the training and the inference time complexity. In both the forward and the backward pass, the trainable parameters of a layer with \(w\) nodes are updated by a matrix-vector multiplication, resulting in a time complexity of \(\mathcal{O}(w^{2})\) FLOPs. Furthermore, considering an NN with \(l\) layers and \(w\) nodes per layer, training the network with \(n_{\mathrm{d}}\) data samples for \(n_{\mathrm{e}}\) epochs requires \(\mathcal{O}(n_{\mathrm{e}}n_{\mathrm{d}}lw^{2})\) FLOPs, while inference requires only \(\mathcal{O}(n_{\mathrm{d}}lw^{2})\) FLOPs, as only the forward pass is performed during inference. Similarly, the time complexity of a CNN with \(l_{\mathrm{c}}\) convolutional and \(l\) FC layers during training is \(\mathcal{O}(n_{\mathrm{e}}n_{\mathrm{d}}(n_{\mathrm{f}}l_{\mathrm{c}}i_{\mathrm{h}}i_{\mathrm{w}}(f_{\mathrm{h}}f_{\mathrm{w}}f_{\mathrm{d}})+lw^{2}))\). Here, in addition to the parameters defined above, \(i_{\mathrm{h}}\) and \(i_{\mathrm{w}}\) indicate the input height and width, respectively. Further, the inference time complexity is then given as \(\mathcal{O}(n_{\mathrm{d}}(n_{\mathrm{f}}l_{\mathrm{c}}i_{\mathrm{h}}i_{\mathrm{w}}(f_{\mathrm{h}}f_{\mathrm{w}}f_{\mathrm{d}})+lw^{2}))\).
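The parameter-count formulas above are straightforward to evaluate; a small sketch (the convolutional example values are hypothetical):

```python
def fc_params(n_in: int, n_out: int) -> int:
    """Trainable parameters of an FC layer: (n_i + 1) * n_o."""
    return (n_in + 1) * n_out

def conv_params(n_filters: int, f_h: int, f_w: int, f_d: int) -> int:
    """Trainable parameters of a conv layer: n_f * (f_h*f_w*f_d + 1)."""
    return n_filters * (f_h * f_w * f_d + 1)

print(fc_params(16, 64))        # 1088, the proposed single-layer model
print(fc_params(16, 64) * 4)    # model size in bytes at 32-bit precision
print(conv_params(8, 3, 3, 1))  # 80, a hypothetical 8-filter 3x3 conv layer
```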
Table II summarizes the complexity comparison with the state of the art. For a fair comparison, the number of estimated FLOPs is given for one epoch and one data sample, i.e., \(n_{\mathrm{e}}=n_{\mathrm{d}}=1\). Here, it can be seen that the proposed ML model achieves significantly lower computational complexity and benefits from lower power consumption. Further, the execution of the proposed ML model on an Intel i7-1185G7 processor indicates that the training time per epoch and per data sample is \(9\,\mu\mathrm{s}\), which allows efficient and less time-consuming model retraining. Besides, the execution time for each prediction is around \(2\,\mu\mathrm{s}\), allowing faster beam prediction.
### _Simulation Results_
For performance evaluation, in addition to the two-level HBS, the CNN from [6], and the FC-NN from [5], EBS-based beam selection is used as a baseline for comparison [15]. During inference, the inputs to all ML models are the RSRP\({}^{\mathrm{p}}\) measurements of the parent beams and the outputs are the predicted probabilities of each child beam being the best.
In terms of beam measurement overhead, the baseline EBS requires \(64\) beam measurements, resulting in \(100\%\) beam measurement overhead. HBS requires \(16\) parent and \(4\) child beam measurements, resulting in a beam measurement overhead of \(32\%\). For the ML models, during inference, the measurement overhead depends on the value of \(K\in\{4,2,1\}\), reflecting the necessity of probing the remaining \(K\) beams for final selection, resulting in beam measurement overheads of around \(32\%\), \(28\%\), and \(25\%\), respectively, as shown in Fig. 3. In terms of beam prediction error for \(K=1\), our proposed approach reduces the error by around \(2\), \(1.4\), and \(1\) percentage points as compared to HBS, [5], and [6], respectively. Similar observations can be made from Fig. 4, where the performance is compared in terms of the average RSRP. Here it can be noticed that the mean RSRP achieved by all ML approaches is well within a \(0.15\) dB margin of the genie-aided (EBS) transmit beam. However, it is worth mentioning that HBS achieves similar performance at the cost of increased latency.
Fig. 5 showcases the generalization capabilities of our proposed model over the three different scenarios discussed in Section IV-A. We observe that for ML Top-1 the prediction error of the model increases by around \(5\) percentage points in scenario \(2\), due to the different channel profiles used in training and testing. Further, the error can be reduced when the model is trained on a mixed dataset from different channel profiles, i.e., scenario \(3\). However, the error in scenario \(3\) is still around \(1.2\) percentage points higher as compared to scenario \(1\). An important observation here is that training a model for a large number of scenarios results in reduced inference performance for a specific scenario. Thus, there exists a trade-off between ML model accuracy and its generalization capabilities.
## V Conclusion
This letter proposes an ML-based beam prediction design that reduces the reference signaling overhead and predicts the transmit beam with higher accuracy and much lower computational complexity than the state of the art. Specifically, we formulated the beam prediction problem as a multiclass-classification task and proposed a low-complexity ML design that learns the spatial angular correlation between parent and child beams to predict the optimal beam. Due to its lower computational complexity, the proposed model reduces the power consumption at the UE and the beam prediction time, making it suitable for faster beam prediction. Further, through simulation results, we showed that there exists a trade-off between ML model performance and its generalization capabilities. These 3GPP compliant evaluation results indicate the feasibility of ML-based mmWave beam prediction for 5G-Advanced NR and beyond 5G communication networks.
|
2303.01454 | The field of moduli of plane curves | It is a classical fact going back to F. Klein that an elliptic curve $E$ over
$\bar{\mathbb{Q}}$ is defined by a homogeneous polynomial in $3$ variables with
coefficients in $\mathbb{Q}(j_{E})$, where $j_{E}$ is the $j$-invariant of $E$,
and $\mathbb{Q}(j_{E})$ is the field of moduli of $E$. The general definition
of field of moduli goes back to T. Matsusaka and G. Shimura. With few
exceptions, it coincides with the intersection of the fields where the curve is
defined.
We prove that every smooth plane curve of degree prime with $6$ is defined by
a homogeneous polynomial with coefficients in the field of moduli. Furthermore,
we show that most plane curves in arbitrary degree, and more generally most
algebraic cycles in $\mathbb{P}^{2}$ with finite automorphism group, descend to
a Brauer-Severi surface over the field of moduli. | Giulio Bresciani | 2023-03-02T18:12:01Z | http://arxiv.org/abs/2303.01454v3 | # The field of moduli of plane curves
###### Abstract.
It is a classical fact that an elliptic curve \(E\) over \(\bar{\mathbb{Q}}\) is defined by a homogeneous polynomial in \(3\) variables with coefficients in \(\mathbb{Q}(j_{E})\), where \(j_{E}\) is the \(j\)-invariant of \(E\), and \(\mathbb{Q}(j_{E})\) is the _field of moduli_ of \(E\).
We show that an analogous result holds for most smooth, plane curves of degree \(d\geq 4\) in characteristic \(0\), and more generally for cycles with finite automorphism groups. A particular case is that, if \(d\) is prime with \(6\), then every smooth plane curve of degree \(d\) descends to a curve in \(\mathbb{P}^{2}\) over the field of moduli.
## 1. Introduction
We work over a field \(k\) of characteristic \(0\) with algebraic closure \(K\). Consider a cycle \(Z\) on \(\mathbb{P}^{2}_{K}\), i.e. a formal sum of irreducible, closed subsets of \(\mathbb{P}^{2}_{K}\) with integral coefficients. Let \(H\subset\operatorname{Gal}(K/k)\) be the subgroup of elements \(\sigma\in\operatorname{Gal}(K/k)\) such that \(\sigma^{*}Z\) is linearly equivalent to \(Z\), i.e. \(\sigma^{*}(\mathbb{P}^{2}_{K},Z)\simeq(\mathbb{P}^{2}_{K},Z)\). The _field of moduli_\(k_{Z}\) of \(Z\) is the subfield of \(K\) fixed by \(H\). If \(Z\) descends to a cycle on a Brauer-Severi surface over \(k^{\prime}\) for some subextension \(k^{\prime}\), then \(k^{\prime}\) contains the field of moduli \(k_{Z}\).
**Question**.: Does \(Z\) descend to a cycle on a Brauer-Severi surface over \(k_{Z}\)? If it does, does it also descend to a cycle on \(\mathbb{P}^{2}_{k_{Z}}\)?
**Known results**.: The question can be formulated for any variety \(X\) with a structure \(\xi\) such as a cycle, a vector bundle, a group structure and so forth: is \((X,\xi)\) defined over its field of moduli? There has been considerable work on this question for curves and abelian varieties. Now let \(C\subset\mathbb{P}^{2}_{K}\) be a smooth plane curve of degree \(d\geq 4\) with field of moduli \(k_{C}\); since every automorphism of \(C\) is induced by a linear automorphism of \(\mathbb{P}^{2}_{K}\), studying
whether \(C\) is defined over \(k_{C}\) is equivalent to studying whether \(C\) descends to a curve in \(\mathbb{P}^{2}_{k_{C}}\).
E. Badr, F. Bars and E. Lorenzo Garcia have given an example where the Brauer-Severi surface \(P_{\mathfrak{C}}\) in which a model \(\mathfrak{C}\) over the field of moduli embeds is non-trivial [1]. E. Badr and F. Bars have given an example where \(C\) has no models over \(k_{C}\) [1].
M. Artebani and S. Quispe [1, Theorem 0.2] proved that, if \(k=\mathbb{R}\) and \(K=\mathbb{C}\), a smooth plane quartic \(C\) over \(\mathbb{C}\) with \(\operatorname{Aut}(C)\neq C_{2}\) is defined over the field of moduli; furthermore, they have given a counterexample with \(\operatorname{Aut}(C)=C_{2}\). They also prove a partial result for smooth plane quartics over arbitrary fields of characteristic \(0\), see [1, Corollary 4.1].
### New results
Our first result is about curves of degree prime with \(3\). Write \(\operatorname{diag}(\alpha,\beta,\gamma)\) for the \(3\times 3\) diagonal matrix with eigenvalues \(\alpha,\beta,\gamma\), and \(\zeta_{n}=e^{\frac{2\pi i}{n}}\in\bar{\mathbb{Q}}\).
**Theorem 1**.: _Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\), \(C\subset\mathbb{P}^{2}_{K}\) a smooth plane curve over \(K\) of degree \(d\) prime with \(3\) and with field of moduli \(k\)._
_If \(C\) does not descend to a curve \(\mathfrak{C}\subset\mathbb{P}^{2}_{k}\), then \(\operatorname{Aut}(C)\) has the form \(C_{a}\times C_{2an}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{2an},\zeta_{2an}^{e},1)\) for \(a,e,n\) positive integers with \(e\equiv\pm 1\pmod{q}\) for each prime power \(q\mid 2n\). Furthermore, if \(a\neq 1\) then \(2an\mid d\), while if \(a=1\) then \(4n\mid d(d-2)\)._
As a direct consequence, descent always works if the degree is prime with \(6\).
**Corollary 2**.: _Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\), \(C\subset\mathbb{P}^{2}_{K}\) a smooth plane curve over \(K\) of degree prime with \(6\) and with field of moduli \(k\). Then \(C\) descends to a curve \(\mathfrak{C}\subset\mathbb{P}^{2}_{k}\)._
If \(d\) is even and prime with \(3\), it might happen that some group of the form described in Theorem 1 is not the automorphism group of any plane curve of degree \(d\). For instance, if \(d=4\), the groups of this form are \(C_{2}\), \(C_{4}\) and \(C_{2}\times C_{4}\), but only \(C_{2}\) can actually be the automorphism group of a plane quartic [1]. As a consequence, the result by M. Artebani and S. Quispe holds over every field of characteristic \(0\).
**Corollary 3**.: _Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\) and \(C\subset\mathbb{P}^{2}_{K}\) a smooth plane quartic over \(K\) with field of moduli \(k\). If \(\operatorname{Aut}(C)\neq C_{2}\), there exists a model \(\mathfrak{C}\) of \(C\) over \(k\) with an embedding \(\mathfrak{C}\subset\mathbb{P}^{2}_{k}\)._
Since a smooth, projective curve of genus \(3\) is either a plane curve or a hyperelliptic curve, and B. Huggins has completely analyzed the problem for hyperelliptic curves [1], this essentially completes the study of fields of moduli of genus \(3\) curves in characteristic \(0\).
Our method works in the case in which \(d\) is a multiple of \(3\), too, but the results are less clear-cut. Notice that, if \(P\) is a non-trivial Brauer-Severi variety over \(k\) and \(3|d\), a generic smooth proper curve \(\mathfrak{C}\) of degree \(d\) in \(P\) has trivial automorphism group: such a curve is the only model of \(\mathfrak{C}_{K}\) over \(k\) and there are no embeddings of \(\mathfrak{C}\) in \(\mathbb{P}^{2}\) by J. Roe and X. Xarles' theorem [1, Theorem 5]. Because of this, if \(3|d\) we cannot expect to avoid non-trivial Brauer-Severi surfaces.
Let us look at the first case with \(3\mid d\), i.e. \(d=6\). Consider the semidirect products \(C_{3}^{2}\rtimes C_{2}\subset C_{3}^{2}\rtimes C_{4}\) where the generator of \(C_{2}\) acts on \(C_{3}^{2}\) as \(-1\), while a generator of \(C_{4}\) acts as \(\big(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\big)\). The automorphism groups of smooth plane sextics are classified [1]. Using this classification, we obtain the following.
**Theorem 4**.: _Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\) and \(C\subset\mathbb{P}^{2}_{K}\) a smooth plane sextic over \(K\) with field of moduli \(k\). If \(\operatorname{Aut}(C)\) is not isomorphic to one of the groups_
\[C_{2},\ C_{3},\ C_{4},\ C_{6},\ C_{3}^{2},\ C_{3}^{2}\rtimes C_{2},\ C_{3}^{2} \rtimes C_{4},\]
_then there exists a model \(\mathfrak{C}\) of \(C\) over \(k\) with an embedding \(\mathfrak{C}\subset P_{\mathfrak{C}}\) in a Brauer-Severi surface \(P_{\mathfrak{C}}\) over \(k\). Furthermore, if \(\operatorname{Aut}(C)\) is also not trivial and not isomorphic to \(C_{2}^{2}\), we may choose \(\mathfrak{C}\) so that \(P_{\mathfrak{C}}\simeq\mathbb{P}^{2}_{k}\)._
_Moreover, if \(\sqrt{3},\sqrt{-1}\in k\) and \(\operatorname{Aut}(C)\simeq C_{3}^{2}\rtimes C_{4}\), there exists a model \(\mathfrak{C}\) over \(k\) with an embedding \(\mathfrak{C}\subset\mathbb{P}^{2}_{k}\)._
Theorems 1 and 4 both follow from a general theorem which holds for arbitrary cycles on \(\mathbb{P}^{2}\) with finite automorphism group. In particular, this general theorem implies that for most finite subgroups \(G\) of \(\operatorname{PGL}_{3}\), a smooth plane curve with \(\operatorname{Aut}(C)=G\) descends to a curve in \(\mathbb{P}^{2}\) over the field of moduli, even if \(3\mid\deg C\).
In order to state the general theorem, let us give a definition.
**Definition 5**.: Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\). A finite subgroup \(G\subset\operatorname{PGL}_{3}(K)\) is _critical_ if it is conjugate to

* the abelian subgroup \(C_{a}\times C_{an}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an},\zeta_{an}^{d},1)\) for positive integers \(a,n,d\) satisfying \(d^{2}-d+1\equiv 0\pmod{n}\) and \(3\mid an\),
* the abelian subgroup \(C_{a}\times C_{a2^{b}n}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{a2^{b}n},\zeta_{a2^{b}n}^{d},1)\) for some positive integers \(a,b,n,d\) with \(d^{2}\equiv 1\pmod{n}\), \(d\equiv\pm 1\pmod{2^{b}}\), and \(n\) odd,
* the Hessian subgroup \(H_{2}\simeq C_{3}^{2}\rtimes C_{2}\) of order \(18\) (see §3.2).

Furthermore, \(G\) is critical if \(\zeta_{12}\not\in k\) and \(G\) is conjugate to the Hessian subgroup \(H_{3}\simeq C_{3}^{2}\rtimes C_{4}\) of order \(36\) (see §3.2).

We say that \(G\) is _lucky_ if it is not critical and it is not conjugate to

* the abelian subgroup \(C_{a}\times C_{an}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an},\zeta_{an}^{d},1)\) for positive integers \(a,n,d\) satisfying \(d^{2}-d+1\equiv 0\pmod{n}\),
* the Hessian subgroup \(H_{1}\simeq C_{3}^{2}\) of order \(9\) (see §3.2).
In particular, if \(\zeta_{12}\in k\) and \(G\) is conjugate to \(H_{3}\), then \(G\) is lucky.
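The arithmetic conditions in Definition 5 are easy to check by machine. As a sketch, the following Python snippet enumerates, for small parameters, the triples \((a,n,d)\) giving critical abelian subgroups of the first kind.

```python
def is_critical_first_kind(a: int, n: int, d: int) -> bool:
    """First bullet of Definition 5: d^2 - d + 1 = 0 (mod n) and 3 | a*n."""
    return (d * d - d + 1) % n == 0 and (a * n) % 3 == 0

for a in range(1, 4):
    for n in range(1, 22):
        for d in range(1, n + 1):
            if is_critical_first_kind(a, n, d):
                print(f"a={a}, n={n}, d={d}: C_{a} x C_{a*n} is critical")
```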
As we will see, a subgroup of \(\operatorname{PGL}_{3}\) is critical if it might give an obstruction to descent, while it is lucky if descent always works and as a bonus we may descend to \(\mathbb{P}^{2}\), as opposed to a Brauer-Severi surface.
Observe that, if \(K\) is algebraically closed of characteristic \(0\), the finite subgroups of \(\operatorname{PGL}_{3}(\bar{\mathbb{Q}}),\operatorname{PGL}_{3}(K)\) and \(\operatorname{PGL}_{3}(\mathbb{C})\) coincide. In SS3 we recall the classification of finite subgroups of \(\operatorname{PGL}_{3}(\mathbb{C})\).
The large majority of them are lucky, with the noteworthy exception of the trivial group, which is neither lucky nor critical. In particular, only two non-abelian subgroups are not lucky (only one if \(\zeta_{12}\in k\)), and the large majority of the abelian ones are lucky, too.
**Theorem 6**.: _Let \(Z\) be a cycle on \(\mathbb{P}^{2}_{K}\) with \(G=\operatorname{Aut}(\mathbb{P}^{2}_{K},Z)\subset\operatorname{PGL}_{3}(K)\) finite and with field of moduli equal to \(k\). If \(G\) is not critical, then \(Z\) descends to a cycle on some Brauer-Severi surface over \(k\). If \(G\) is lucky, \(Z\) descends to \(\mathbb{P}^{2}_{k}\)._
_On the other hand, if \(G\subset\operatorname{PGL}_{3}(\bar{\mathbb{Q}})\) is critical there exists a field \(k\) of characteristic \(0\) with algebraic closure \(K\) and a cycle \(Z\) on \(\mathbb{P}^{2}_{K}\) not defined over its field
of moduli with \(\operatorname{Aut}(\mathbb{P}^{2}_{K},Z)=G\). If \(G\) is not conjugate to \(H_{3}\), we may choose \(k\) so that it contains \(\mathbb{C}\)._
Theorem 6 holds in even greater generality for _algebraic structures_ in the sense of [3, §5], see Theorem 11. In [1], we study in detail the case of finite subsets of \(\mathbb{P}^{2}\).
### A remark about our techniques
Our setup, techniques, and proofs are heavily stack-theoretical. This is not very common in the literature about fields of moduli and fields of definition, even though it has long been known that stacks are helpful in studying twisted forms of varieties.
One of the main strategies we use is based on finding rational points on a certain smooth Deligne-Mumford stack \(\mathscr{Y}\) with \(\dim\mathscr{Y}=\dim X\). If \(\dim X=1\), then \(\mathscr{Y}\) has a rational point if and only if its coarse moduli space has a rational point. This fact is implicitly used in a result by P. Dèbes and M. Emsalem [1, Corollary 4.3.c] which is the backbone of a lot of the literature about fields of moduli of curves: the coarse moduli space of \(\mathscr{Y}\) is what they call _the canonical model of \(X/\operatorname{Aut}(X)\)_.
If \(\dim X\geq 2\), this is simply wrong: the coarse moduli space of \(\mathscr{Y}\) might have a rational point even though \(\mathscr{Y}(k)=\emptyset\). The reason behind this is the fact that quotients of smooth varieties are not necessarily smooth in dimension \(\geq 2\): if we pass to the coarse moduli space of \(\mathscr{Y}\), we lose smoothness. Because of this, stacks are especially helpful when studying fields of moduli in dimension \(\geq 2\).
### Acknowledgements
I would like to thank A. Vistoli for pointing out to me the fact that every automorphism of a smooth plane curve of degree \(\geq 4\) is linear.
### Notations and conventions
We work over a field \(k\) of characteristic \(0\) with algebraic closure \(K\). We fix a basis of \(k^{3}\) so that it makes sense to talk about diagonal and permutation matrices (recall that a permutation matrix is a matrix which permutes the basis), and we fix a preferred embedding \(\operatorname{GL}_{2}\subset\operatorname{PGL}_{3}\) using the first two coordinates.
With an abuse of terminology, we often identify matrices with their images in \(\operatorname{PGL}_{3}(K)\). We write \(\operatorname{diag}(a_{1},a_{2},a_{3})\in\operatorname{PGL}_{3}(K)\) for the image in \(\operatorname{PGL}_{3}(K)\) of the diagonal \(3\times 3\) matrix with eigenvalues \(a_{1},a_{2},a_{3}\), so \(\operatorname{diag}(a_{1},a_{2},a_{3})=\operatorname{diag}(\lambda a_{1}, \lambda a_{2},\lambda a_{3})\) for \(\lambda\neq 0\). We say that a subgroup of \(\operatorname{PGL}_{3}(K)\) is diagonal if its elements are diagonal.
Let \(G\) be a group acting on some space or set \(X\), \(Z\subset X\) a subspace, \(g\in G\) an element. We say that \(g\)_stabilizes_\(Z\), or that \(Z\) is \(g\)-invariant, if \(g(Z)=Z\). We say that \(g\)_fixes_\(Z\) if \(g\) restricts to the identity on \(Z\). We say that \(G\) stabilizes (resp. fixes) \(Z\) if every element \(g\in G\) stabilizes (resp. fixes) \(Z\). The _fixed locus_ of \(g\) (resp. \(G\)) is the subspace of points \(x\in X\) with \(gx=x\) (resp. \(\forall g\in G:gx=x\)).
## 2. The two main strategies
We recall some basic constructions from [3] and [1]. Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\) and \(\xi\) an algebraic structure, e.g. a cycle, on \(\mathbb{P}^{2}_{K}\) with field of moduli \(k\) in the sense of [3, §5], write \(G=\operatorname{Aut}(\mathbb{P}^{2},\xi)\subset\operatorname{PGL}_{3}(K)\).
There is a finite gerbe \(\mathscr{G}_{\xi}\) over \(k\), called the _residual gerbe_, with a universal projective bundle \(\mathscr{P}_{\xi}\to\mathscr{G}_{\xi}\) of relative dimension \(2\) which is a twisted form over \(k\) of the natural morphism \([\mathbb{P}^{2}_{K}/G]\to\mathscr{B}_{K}G\). The residual gerbe \(\mathscr{G}_{\xi}\) and the
projective bundle \(\mathscr{P}_{\xi}\to\mathscr{G}_{\xi}\) are characterized by the following property: given a scheme \(S\) over \(k\), a morphism \(S\to\mathscr{G}_{\xi}\) corresponds to a twisted form of \(\xi\) on the projective bundle \(\mathscr{P}_{\xi}|_{S}\to S\). In particular, \(\xi\) descends to a structure on some Brauer-Severi surface over \(k\) if and only if \(\mathscr{G}_{\xi}(k)\neq\emptyset\), and it descends to a structure on \(\mathbb{P}_{k}^{2}\) if and only if \(\mathscr{P}_{\xi}(k)\neq\emptyset\).
Let us break down the definition in the case of cycles. If \(C\subset\mathbb{P}_{K}^{2}\) is a reduced, irreducible closed subscheme and \(S\) is a scheme over \(k\), a twisted form of \(C\) over \(S\) is a projective bundle \(\mathscr{P}\to S\) with a closed subscheme \(\mathscr{C}\subset\mathscr{P}\) such that, for some etale covering \(S^{\prime}\to S\), we have \((\mathscr{P}_{S^{\prime}},\mathscr{C}_{S^{\prime}})\simeq(\mathbb{P}^{2}\times S ^{\prime},C\times S^{\prime})\). If \(Z=\sum_{i}n_{i}C_{i}\) is a cycle, a twisted form of \(Z\) over \(S\) is a projective bundle \(\mathscr{P}\to S\) and a formal sum \(\sum_{i}n_{i}\mathscr{C}_{i}\) where \(\mathscr{C}_{i}\subset\mathscr{P}\) is a twist of \(C_{i}\). The residual gerbe \(\mathscr{G}_{Z}\) is the functor \(S\mapsto\){twisted forms of \(Z\)}; if \(\operatorname{Aut}(\mathbb{P}^{2},Z)\) is finite then \(\mathscr{G}_{Z}\) is a Deligne-Mumford stack which is a gerbe and \(\mathscr{P}_{Z}\to\mathscr{G}_{Z}\) is the corresponding universal bundle given by Yoneda's lemma. The coarse moduli space of \(\mathscr{G}_{Z}\) is the spectrum of the field of moduli of \(Z\).
### Showing that \(\xi\) is defined over \(\mathbb{P}_{k}^{2}\)
The obvious question is: how do we find rational points on \(\mathscr{P}_{\xi}\)? Let \(\mathbf{P}_{\xi}\) be the coarse moduli space of \(\mathscr{P}_{\xi}\); it is called the _compression_ of \(\xi\). Since \(\mathscr{P}_{\xi,K}=[X/G]\), then \(\mathbf{P}_{\xi,K}=X/G\). Since the action of \(G\) on \(\mathbb{P}_{K}^{2}\) is faithful, the natural morphism \(\mathscr{P}_{\xi}\to\mathbf{P}_{\xi}\) is birational, hence we have a rational map \(\mathbf{P}_{\xi}\dashrightarrow\mathscr{P}_{\xi}\). Suppose that we find a rational point \(p\in\mathbf{P}_{\xi}(k)\) which lifts to a rational point of a resolution of singularities of \(\mathbf{P}_{\xi}\) (by the Lang-Nishimura theorem, this condition does not depend on the resolution). The Lang-Nishimura theorem for tame stacks [BVa, Theorem 4.1] then implies that \(\mathscr{P}_{\xi}(k)\neq\emptyset\).
So we want to find rational points on the compression \(\mathbf{P}_{\xi}\). A closed subspace \(Z\subset\mathbb{P}_{K}^{2}\) is _distinguished_[Brec, Definition 17] if \(\tau(Z)=Z\) for every \(\tau\in\operatorname{PGL}_{3}(K)\) such that \(\tau^{-1}G\tau=G\); in particular, \(Z\) is \(G\)-invariant. If \(Z\) is distinguished, then \(Z/G\subset\mathbb{P}^{2}/G\) descends to a closed subset of \(\mathbf{P}_{\xi}\)[Brec, Lemma 18]. In [Brec, Example 19] we give some strategies to construct distinguished subsets. Assume that the action of \(G\) on \(Z\) is transitive, then \(Z/G\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\).
A rational point of a variety is _liftable_ [BVb, Definition 6.6] if it lifts to a resolution of singularities. A singularity is _of type_ R if every twisted form of it is liftable. To show that \(p\) is liftable, it is enough to check that its singularity is of type R using the classification given in [Brea]. Let \(z\in Z\) be any point and \(G_{z}\subset G\) its stabilizer; the singularity of \(X/G=\mathbf{P}_{\xi,K}\) in \(p\) is equivalent [BVb, §6.2] to the one of \(T_{z}\mathbb{P}^{2}/G_{z}\) in the origin, hence it is enough to show that \(T_{z}\mathbb{P}^{2}/G_{z}\) is of type R. Finally, a group is of type R\({}_{2}\) if the quotient of every faithful 2-dimensional representation is of type R, hence sometimes it is sufficient to check that \(G_{z}\) is R\({}_{2}\) using the classification given in [Brea].
### Finding \(\xi\) so that it is not defined over \(k\)
Fix \(G\subset\operatorname{PGL}_{3}(\bar{\mathbb{Q}})\) finite. Suppose that we want to find a field \(k\) and a structure \(\xi\) on \(\mathbb{P}_{K}^{2}\) with \(G=\operatorname{Aut}(\mathbb{P}^{2},\xi)\subset\operatorname{PGL}_{3}(K)\) such that the field of moduli of \(\xi\) is \(k\) but \(\xi\) is not defined over \(k\). First, we want to make sure that \(G\) descends to a group scheme \(\mathfrak{G}\subset\operatorname{PGL}_{3,k}\) acting on \(\mathbb{P}_{k}^{2}\), for instance we might restrict ourselves to fields containing enough elements to define each matrix of \(G\).
If we find \(k\) and a subgroup \(\mathfrak{N}\subset\operatorname{PGL}_{3,k}\) containing and normalizing \(\mathfrak{G}\) so that \(\operatorname{H}^{1}(k,\mathfrak{N})\to\operatorname{H}^{1}(k,\mathfrak{N}/ \mathfrak{G})\) is not surjective, then by [Brec, Theorem 4] there exists
a structure \(\xi\) on \(\mathbb{P}_{K}^{2}\) with field of moduli \(k\) which is not defined over \(k\). By [Brec, Theorem 3], \(\xi\) can be interpreted as the structure of some \(0\)-cycle.
## 3. Finite subgroups of \(\mathrm{PGL}_{3}\)
Since we are in characteristic \(0\), the finite subgroups of \(\mathrm{PGL}_{3}(\bar{\mathbb{Q}}),\mathrm{PGL}_{3}(K)\) and \(\mathrm{PGL}_{3}(\mathbb{C})\) coincide, and they are completely classified, see [1, Chapter XII]. Before giving the list, for the convenience of the reader we recall some facts which will later play an important role.
### Abelian subgroups
**Lemma 7**.: _Let \(\alpha\in K\smallsetminus\{0,1\}\) be an element. The centralizer of \(\mathrm{diag}(\alpha,\alpha,1)\) is \(\mathrm{GL}_{2}(K)\subset\mathrm{PGL}_{3}(K)\)._
Proof.: Let \(g\) be an element of the centralizer, then \(g\) must stabilize the fixed locus of \(\mathrm{diag}(\alpha,\alpha,1)\), namely the point \((0:0:1)\) and the line \(\{(s:t:0)\}\). The statement follows.
**Lemma 8**.: _Let \(\alpha\neq\beta\in K\smallsetminus\{0,1\}\) be different elements. If \(\{\alpha,\beta\}=\{\zeta_{3},\zeta_{3}^{2}\}\), the centralizer of \(\mathrm{diag}(\alpha,\beta,1)\) in \(\mathrm{PGL}_{3}(K)\) is generated by the diagonal matrices and by a permutation matrix of order \(3\), otherwise it is the group of diagonal matrices._
Proof.: Assume that \(g\in\mathrm{PGL}_{3}(K)\) is in the centralizer, then \(g\) must stabilize the fixed locus of \(\mathrm{diag}(\alpha,\beta,1)\), i.e. the three points \((0:0:1),(0:1:0),(1:0:0)\). Up to a diagonal matrix, we may then assume that \(g\) is a permutation matrix. If \(g\) fixes the three points, then it is the identity. If \(g\) acts as a transposition, then it is immediate to check that \(g\) does not commute with \(\mathrm{diag}(\alpha,\beta,1)\) since \(\alpha,\beta,1\) are pairwise different. If \(g\) acts as a \(3\)-cycle, then \(\mathrm{diag}(\alpha,\beta,1)=\mathrm{diag}(1,\alpha,\beta)\) which implies \(\{\alpha,\beta\}=\{\zeta_{3},\zeta_{3}^{2}\}\).
Write \(H_{1}\) for the group isomorphic to \(C_{3}^{2}\) generated by \(\mathrm{diag}(\zeta_{3},\zeta_{3}^{2},1)\) and by a permutation matrix of order \(3\). The group \(H_{1}\) is not diagonalizable, since it has no fixed points.
**Corollary 9**.: _A finite, abelian subgroup of \(\mathrm{PGL}_{3}(K)\) is either conjugate to \(H_{1}\) or diagonalizable._
### The Hessian groups
Consider the six matrices
\[M_{0}=\left(\begin{array}{ccc}1&0&0\\ 0&\zeta_{3}&0\\ 0&0&\zeta_{3}^{2}\end{array}\right),\ M_{1}=\left(\begin{array}{ccc}0&0&1 \\ 1&0&0\\ 0&1&0\end{array}\right),\ M_{2}=\left(\begin{array}{ccc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right),\]
\[M_{3}=\left(\begin{array}{ccc}1&1&1\\ 1&\zeta_{3}&\zeta_{3}^{2}\\ 1&\zeta_{3}^{2}&\zeta_{3}\end{array}\right),\ M_{4}=\left(\begin{array}{ccc} 1&1&\zeta_{3}\\ 1&\zeta_{3}&1\\ \zeta_{3}^{2}&\zeta_{3}&\zeta_{3}\end{array}\right),M_{5}=\left(\begin{array} []{ccc}1&0&0\\ 0&1&0\\ 0&0&\zeta_{3}\end{array}\right).\]
For \(i=1,\ldots,5\), let \(H_{i}\subset\mathrm{PGL}_{3}(\mathbb{Q}(\zeta_{3}))\) be the group generated by \(M_{0},\ldots,M_{i}\). We call the subgroups \(H_{1},\ldots,H_{5}\) the _Hessian groups_ [1, Chapter XII] of orders \(9,18,36,72,216\) respectively. In the literature, only \(H_{5}\) is consistently called Hessian, while only some authors call the others Hessian. Notice that our matrices differ by a scalar from the ones given in [1, Chapter XII]: since we work in \(\mathrm{PGL}_{3}(K)\), as opposed to \(\mathrm{GL}_{3}(K)\), we can forget about some scalars and simplify everything.
The group \(H_{3}\) is not normal in \(H_{5}\), but in all the other cases \(H_{i}\) is normal in \(H_{j}\) for \(j>i\). We have isomorphisms
\[H_{1}\simeq C_{3}^{2},\ H_{2}\simeq(C_{3}^{2})\rtimes C_{2},\ H_{3}\simeq(C_{3} ^{2})\rtimes C_{4},\]
where the action of \(C_{4}=\langle M_{3}\rangle\) on \(C_{3}^{2}\) is given by the matrix \(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\) in \(\operatorname{SL}(2,3)\), and \(C_{2}\subset C_{4}\) acts as \(-1\). Furthermore, we have isomorphisms
\[H_{5}/H_{1}\simeq\operatorname{SL}(2,3),\ H_{4}/H_{1}\simeq Q_{8}\subset \operatorname{SL}(2,3),\]
\[H_{5}/H_{2}\simeq\operatorname{PSL}(2,3)\simeq A_{4},\ H_{4}/H_{2}= \operatorname{Kl}\subset A_{4}\]
where \(Q_{8}\) is the quaternion group and \(\operatorname{Kl}\subset A_{4}\) is the Klein group.
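These orders can be double-checked numerically. The sketch below generates \(H_{1}\), \(H_{2}\), \(H_{3}\) inside \(\operatorname{PGL}_{3}(\mathbb{C})\) by closure, normalizing matrices up to scalar with floating-point arithmetic; this is a pragmatic shortcut rather than exact cyclotomic arithmetic.

```python
import cmath
import numpy as np

z3 = cmath.exp(2j * cmath.pi / 3)
M0 = np.diag([1, z3, z3**2])
M1 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=complex)
M2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
M3 = np.array([[1, 1, 1], [1, z3, z3**2], [1, z3**2, z3]], dtype=complex)

def canon(m: np.ndarray):
    """Canonical form up to scalar: divide by the first entry of
    non-negligible modulus, then round so the result is hashable."""
    flat = m.flatten()
    pivot = next(v for v in flat if abs(v) > 1e-8)
    return tuple(np.round(flat / pivot, 6))

def closure_size(gens) -> int:
    """Order of the subgroup of PGL_3(C) generated by the given matrices."""
    seen = {canon(np.eye(3, dtype=complex))}
    frontier = [np.eye(3, dtype=complex)]
    while frontier:
        m = frontier.pop()
        for g in gens:
            key = canon(m @ g)
            if key not in seen:
                seen.add(key)
                frontier.append(m @ g)
    return len(seen)

print(closure_size([M0, M1]))          # 9  = |H_1|
print(closure_size([M0, M1, M2]))      # 18 = |H_2|
print(closure_size([M0, M1, M2, M3]))  # 36 = |H_3|
```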
The fixed subset of any non-trivial cyclic subgroup of \(H_{1}\simeq C_{3}^{2}\) consists of three non-collinear points; these four triangles are pairwise disjoint. Any line connecting two points of two different triangles contains exactly one point of each triangle: in particular, the union of one triangle and one point of another triangle is in general position.
Since these four triangles correspond to the cyclic subgroups of \(H_{1}\simeq\mathbb{F}_{3}^{2}\), it is natural to identify them with the four points of \(\mathbb{P}(\mathbb{F}_{3}^{2})\). The group \(\operatorname{SL}(2,3)\) is a non-split central extension of \(\operatorname{PSL}(2,3)\simeq A_{4}\) by \(C_{2}=\langle-1\rangle\), and the induced action of \(\operatorname{PSL}(2,3)\simeq A_{4}\) on \(\mathbb{P}(\mathbb{F}_{3}^{2})\) is the standard one of \(A_{4}\) on four points.
**Lemma 10**.: _The normalizer of \(H_{1}\subset\operatorname{PGL}_{3}(K)\) is \(H_{5}\)._
Proof.: Let \(g\in\operatorname{PGL}_{3}(K)\) be an element normalizing \(H_{1}\), in particular \(g\) acts on the set of four triangles \(\mathbb{P}(\mathbb{F}_{3}^{2})\). Since \(H_{5}\) acts as \(A_{4}\) on \(\mathbb{P}(\mathbb{F}_{3}^{2})\), up to multiplying \(g\) by an element of \(H_{5}\) we may assume that \(g\) acts trivially on \(\mathbb{P}(\mathbb{F}_{3}^{2})\), in particular it stabilizes the two triangles of fixed points of \(M_{0}\) and \(M_{1}\). Furthermore, up to multiplying by an element of \(H_{2}\) we may assume that the points \((0:0:1)\), \((0:1:0)\), \((1:0:0)\) are fixed by \(g\), hence \(g\) is diagonal. Since an element of \(\operatorname{PGL}_{3}(K)\) fixing four points in general position is trivial, there are at most three elements of \(\operatorname{PGL}_{3}(K)\) fixing \((0:0:1)\), \((0:1:0)\), \((1:0:0)\) and permuting the fixed locus of \(M_{1}\). The three powers of \(M_{0}\) do this, so \(g\) is one of them.
### The Hessian group scheme
Denote by \(\bar{M}_{i}\) the Galois conjugate of \(M_{i}\) with respect to the unique non-trivial automorphism of \(\mathbb{Q}(\zeta_{3})/\mathbb{Q}\); we have the following identities in \(\operatorname{PGL}_{3}(\mathbb{Q}(\zeta_{3}))\):
\[\bar{M}_{0}=M_{0}^{-1},\ \bar{M}_{1}=M_{1},\ \bar{M}_{2}=M_{2},\]
\[\bar{M}_{3}=M_{3}^{-1},\ \bar{M}_{4}=M_{4}\cdot M_{3},\ \bar{M}_{5}=M_{5}^{-1}.\]
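These identities hold in \(\operatorname{PGL}_{3}\), i.e. up to scalar. For instance, \(\bar{M}_{3}=M_{3}^{-1}\) can be verified numerically as a sketch, by checking that \(M_{3}\bar{M}_{3}\) is a scalar matrix:

```python
import cmath
import numpy as np

z3 = cmath.exp(2j * cmath.pi / 3)
M3 = np.array([[1, 1, 1], [1, z3, z3**2], [1, z3**2, z3]], dtype=complex)

prod = M3 @ M3.conj()   # complex conjugation realizes Gal(Q(zeta_3)/Q)
scalar = prod[0, 0]
assert np.allclose(prod, scalar * np.eye(3))
print(scalar)           # ~ 3, so M3 * M3_bar = 3*Id and M3_bar = M3^{-1} in PGL_3
```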
In particular, the Galois action stabilizes each \(H_{i}\), hence for every \(i=1,\dots,5\) we get a finite subgroup scheme \(\mathfrak{H}_{i}\subset\operatorname{PGL}_{3,\mathbb{Q}}\) with \(\mathfrak{H}_{i}(\mathbb{Q}(\zeta_{3}))=H_{i}\). Write
\[\mathfrak{C}_{4}=\mathfrak{H}_{3}/\mathfrak{H}_{1},\mathfrak{Q}_{8}=\mathfrak{ H}_{4}/\mathfrak{H}_{1},\mathfrak{K}=\mathfrak{H}_{4}/\mathfrak{H}_{2},\]
they are twisted forms over \(\mathbb{Q}\) of \(C_{4},Q_{8},\operatorname{Kl}\) respectively and there is a short exact sequence
\[1\to\mathfrak{C}_{4}\to\mathfrak{Q}_{8}\to C_{2}\to 1.\]
Write \(\{\pm 1,\pm i,\pm j,\pm k\}\) for the elements of \(Q_{8}\); we may assume that \(M_{3},M_{4}\) map respectively to \(i,j\) in \(Q_{8}=H_{4}/H_{1}\) and that \(C_{4}=\langle i\rangle\subset Q_{8}\). Using the characterizations of \(\bar{M}_{i}\) written above, the non-trivial element of \(\operatorname{Gal}(\mathbb{Q}(\zeta_{3})/\mathbb{Q})\) acts on \(Q_{8}\) as \(i\mapsto-i\), \(j\mapsto-k\), \(k\mapsto-j\) (so it stabilizes \(C_{4}=\langle i\rangle\)) and on \(\operatorname{Kl}=C_{2}^{2}\) as \((1,0)\mapsto(0,1)\), \((0,1)\mapsto(1,0)\).
There is an induced action of \(\mathfrak{Q}_{8}\) on \(\mathfrak{H}_{1}=\mu_{3}\times C_{3}\), and we have isomorphisms
\[\mathfrak{H}_{1}\simeq\mu_{3}\times C_{3},\ \mathfrak{H}_{2}\simeq(\mu_{3} \times C_{3})\rtimes C_{2},\]
\[\mathfrak{H}_{3}\simeq(\mu_{3}\times C_{3})\rtimes\mathfrak{C}_{4},\ \mathfrak{H}_{4}/\mathfrak{H}_{1}\simeq\mathfrak{Q}_{8},\ \mathfrak{H}_{4}/\mathfrak{H}_{2}=\mathfrak{K}.\]
### The list
Here is the list of all finite subgroups of \(\operatorname{PGL}_{3}(K)\) up to conjugation [12, Chapter XII].
* **(A)** Finite subgroups of \(\operatorname{GL}_{2}(K)\subset\operatorname{PGL}_{3}(K)\).
* **(B)** A group generated by a non-trivial finite diagonal subgroup and by a permutation matrix of order \(3\). A group of this type is either conjugate to \(H_{1}\) or it has exactly one invariant triangle.
* **(C)** A group generated by a group of type **(B)** and a matrix of the form \(\left(\begin{smallmatrix}0&\alpha&0\\ \beta&0&0\\ 0&0&1\end{smallmatrix}\right)\). A group of this type is either conjugate to \(H_{2}\) or it has exactly one invariant triangle.
* **(D)** The groups \(H_{3}\subset H_{4}\subset H_{5}\).
* **(E)** The simple groups \(A_{5}\), \(A_{6}\) and \(\operatorname{PSL}(2,7)\).
## 4. On the automorphism groups of structures on \(\mathbb{P}^{2}\)
Given a variety \(X\) over \(k\) and a subgroup \(G\subset\operatorname{Aut}_{K}(X)\), a \(G\)-structure is an algebraic structure \(\xi\) on \(X\) in the sense of [11] such that \(\operatorname{Aut}_{K}(X,\xi)\subset\operatorname{Aut}_{K}(X)\) is conjugate to \(G\).
Given a finite subgroup \(G\subset\operatorname{PGL}_{2}(K)\), if there exists a \(G\)-structure \(\xi\) on \(\mathbb{P}^{1}\) which is not defined over its field of moduli then \(G\) is cyclic of even order, see [10, Theorem 5] (while the result is stated for effective, reduced divisors, the proof works without modifications for any structure on \(\mathbb{P}^{1}\)).
Similarly, if \(G\subset\operatorname{PGL}_{3}(K)\) is finite, the existence of a \(G\)-structure on \(\mathbb{P}^{2}\) not defined over its field of moduli puts strong constraints on \(G\). As we are going to prove, such a \(G\)-structure only exists if \(G\) is critical.
**Theorem 11**.: _Let \(G\subset\operatorname{PGL}_{3}(K)\) be a finite subgroup and \(\xi\) a \(G\)-structure over \(\mathbb{P}^{2}_{K}\) with field of moduli \(k\). If \(G\) is not critical, then \(\xi\) descends to a structure over some Brauer-Severi surface over \(k\). If \(G\) is lucky, \(\xi\) descends to \(\mathbb{P}^{2}_{k}\)._
_On the other hand, if \(G\subset\operatorname{PGL}_{3}(\bar{\mathbb{Q}})\) is critical there exists a field \(k\) of characteristic \(0\) with algebraic closure \(K\) and a \(G\)-structure \(\xi\) on \(\mathbb{P}^{2}_{K}\) not defined over its field of moduli. If \(G\) is not conjugate to \(H_{3}\), we may choose \(k\) so that it contains \(\mathbb{C}\)._
The first part of Theorem 6 is a direct consequence of Theorem 11, while the second part follows from Theorem 11 using [10, Theorem 3].
We spend the rest of this section proving Theorem 11. We apply the two strategies described in §2, or small variations of them, to each finite subgroup of \(\operatorname{PGL}_{3}(K)\).
In the first half, we prove that if \(G\) is not critical (resp. \(G\) is lucky) then \(\xi\) descends to a structure on some Brauer-Severi surface (resp. on \(\mathbb{P}^{2}_{k}\)): we show that \(\mathscr{G}_{\xi}(k)\neq\emptyset\) (resp. we find a rational point of \(\mathbf{P}_{\xi}\) whose corresponding singularity in \(X/G\) is of type R, which gives us a rational point of \(\mathscr{P}_{\xi}\) by the Lang-Nishimura theorem for tame stacks).
In the second half, we construct the various counterexamples for \(G\) critical.
### Type (A), not abelian
Assume that \(G\subset\operatorname{GL}_{2}(K)\subset\operatorname{PGL}_{3}(K)\) is of type **(A)** and not abelian. Let \(Z\subset G\) be the center and write \(\mathscr{G}_{\xi}\to\bar{\mathscr{G}}\) for the rigidification of \(\mathscr{G}_{\xi}\) modulo the center of the inertia, see [1, Appendix C]. Essentially, we can pass to the quotient \(G/Z\) at the level of gerbes; we have a natural identification \(\bar{\mathscr{G}}_{K}=\mathscr{B}_{K}(G/Z)\).
Since \(G\subset\operatorname{GL}_{2}(K)\), there is at least one line \(L\) stabilized by \(G\). If there is another one \(L^{\prime}\), then \(p=L\cap L^{\prime}\) is fixed and \(G\) acts faithfully and diagonally on the tangent space of \(p\), which is absurd since \(G\) is not abelian. It follows that \(L\) is the unique line stabilized by the whole \(G\), it is a distinguished subspace and \(L/G\subset\mathbb{P}^{2}/G\) descends to a genus \(0\) curve \(C\subset\mathbf{P}_{\xi}\). If \(C\) is birational to \(\mathbb{P}^{1}\), since \(\mathbf{P}_{\xi}\) is normal we may find a rational point \(c\in C(k)\) which is regular in \(\mathbf{P}_{\xi}\). Assume by contradiction that \(C\) is a non-trivial Brauer-Severi curve over \(k\).
Since \(\mathbf{P}_{\xi}\) is normal, by [1, Corollary 3.2] there is a rational map \(C\dashrightarrow\bar{\mathscr{G}}\). Let \(\Delta\subset G\subset\operatorname{GL}_{2}(K)\) be the subgroup of diagonal matrices, observe that \(\Delta\subset Z\) and that by construction the base change of \(C\dashrightarrow\bar{\mathscr{G}}\) to \(K\) is the composition \(L/(G/\Delta)\dashrightarrow\mathscr{B}_{K}(G/\Delta)\to\mathscr{B}_{K}(G/Z)\). In particular, the geometric fibers of \(C\dashrightarrow\bar{\mathscr{G}}\) are birational to \(L/(Z/\Delta)\) and hence of genus \(0\). This allows us to apply [1, Proposition 2], which implies that \(\bar{G}=G/Z\) is cyclic. The fact that the quotient \(\bar{G}\) of \(G\) by its center is cyclic implies that \(G\) is abelian, which is absurd.
### Abelian, not conjugate to \(H_{1}\)
If \(G\) is abelian but not conjugate to \(H_{1}\), it is diagonal by Lemma 9, hence it is isomorphic to \(C_{a}\times C_{an}\) for some positive integers \(a,n\) and, up to conjugation, it is generated by three diagonal matrices of the form \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an}^{b},\zeta_{an}^{d},1)\) for some integers \(0\leq b,d<an\) such that \(\gcd(an,b,d)=1\).
If \(a=1\), \(b=d\) and \(n\) is even, then \(G\) is critical. If \(a=1\), \(b=d\) and \(n\geq 3\) is odd, then the point \((0:0:1)\) is distinguished, it descends to a rational point of \(\mathbf{P}_{\xi}\) and the corresponding singularity is of type R by [1, Theorem 4]. If \(a=n=1\), then \(G=\{\operatorname{id}\}\) is not critical, and clearly \(\mathscr{G}_{\xi}=\operatorname{Spec}k\) has a rational point.
Otherwise, the fixed locus of \(G\) consists of the three points \((1:0:0)\), \((0:1:0)\) and \((0:0:1)\). Denote by \(\rho_{1},\rho_{2},\rho_{3}:G\to\operatorname{GL}_{2}(K)\) the three corresponding faithful representations on the tangent spaces, each one splits uniquely as a sum of two characters. We say that two representations \(\rho,\rho^{\prime}\) of \(G\) are equivalent if there exists an automorphism \(\phi\) of \(G\) such that \(\rho\) and \(\rho\circ\phi\) are isomorphic representations. There are three cases: the three representations \(\rho_{1},\rho_{2},\rho_{3}\) are equivalent, only two of them are equivalent or they are pairwise non-equivalent.
#### 4.2.1. Three equivalent representations
If the three representations are equivalent, then the subgroup of \(C_{n}^{2}\) generated by \((b,d)\) is equal to the one generated by \((-d,b-d)\). In particular, \(\gcd(b,n)=\gcd(d,n)=1\) since \(\gcd(b,d,n)=1\). Up to multiplying by the inverse of \(b\) modulo \(n\), we may thus assume \(b\equiv 1\pmod{n}\). Furthermore, since \(\operatorname{diag}(\zeta_{a},1,1)\in G\), we may reduce to the case \(b=1\).
The subgroups of \(C_{n}^{2}\) generated by \((1,d)\) and \((-d,1-d)\) are equal if and only if the determinant \(d^{2}-d+1\) is congruent to \(0\) modulo \(n\). It follows that \(G\) is not lucky, and it is critical if and only if \(3\mid an\). If \(3\mid an\), in the second half of the proof we will construct an example in which \(\xi\) does not descend to any Brauer-Severi surface over \(k\). Right now we are interested in the positive result, i.e. if \(3\nmid an\) then \(\xi\) descends to some Brauer-Severi surface, or equivalently \(\mathscr{G}_{\xi}(k)\neq\emptyset\).
Since \(d^{2}-d+1\equiv 0\pmod{n}\), \(n\) is odd (there is no such \(d\) modulo \(2\); see the sketch at the end of this subsection). The singularity of \(\mathbb{P}^{2}/G\) in the images of each of the three fixed points is cyclic of type \(\frac{1}{n}(1,d)\), and it is of type R since \(n\) is odd [Brea, Theorem 4]. Let \(F\subset\mathbb{P}^{2}\) be the fixed locus of \(G\); \(F/G\) descends to a reduced, effective 0-cycle \(Z\) of degree 3 on \(\mathbf{P}_{\xi}\), and the singularities in the geometric points of \(Z\) are of type R.
If \(\mathscr{G}_{\xi}(k)=\emptyset\), since the singularities of \(Z\) are of type R then \(Z(k)=\emptyset\) by the Lang-Nishimura theorem for tame stacks, hence \(Z\) has only one point with residue field \(k^{\prime}/k\) of degree 3. It follows that \(\mathscr{G}_{\xi}(k^{\prime})\neq\emptyset\). Let \(A\) be the band of \(\mathscr{G}_{\xi}\), it is a finite, \(an\)-torsion etale abelian group scheme over \(k\). The non-neutral gerbe \(\mathscr{G}_{\xi}\) corresponds to a non-zero cohomology class \(\psi\in\mathrm{H}^{2}(k,A)\) satisfying \(3\psi=\mathrm{cor}_{k^{\prime}/k}(\psi_{k^{\prime}})=\mathrm{cor}_{k^{\prime}/ k}(0)=0\in\mathrm{H}^{2}(k,A)\). Since \(A\) is \(an\)-torsion and \(3\nmid an\), this implies \(\psi=0\), which is absurd.
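The parity observation above, namely that \(d^{2}-d+1\equiv 0\pmod{n}\) forces \(n\) to be odd, is easy to confirm by exhaustive search; a quick sketch:

```python
# d*(d-1) is always even, so d^2 - d + 1 is always odd: any n dividing it is odd.
for n in range(2, 200, 2):  # even moduli only
    assert all((d * d - d + 1) % n != 0 for d in range(n)), n
print("no even n up to 200 admits d with d^2 - d + 1 = 0 (mod n)")
```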
#### 4.2.2. Two equivalent representations
If only two of the representations are equivalent, up to conjugation the ones corresponding to points \((1:0:0)\) and \((0:1:0)\), then we get the equality \(b^{2}\equiv d^{2}\pmod{n}\). In particular, \(\gcd(b,n)=\gcd(d,n)=1\), and up to multiplying by the inverse of \(b\) we may assume \(b\equiv 1\pmod{n}\). Furthermore, since \(\mathrm{diag}(\zeta_{a},1,1)\in G\), we may assume \(b=1\).
The third fixed point \((0:0:1)\) is distinguished and it descends to a rational point of \(\mathbf{P}_{\xi}\). If \(G\) is not critical, then by [Brea, Theorem 4] the singularity in \((0:0:1)\) of \(\mathbb{P}^{2}/G\) is of type R, hence the rational point is liftable.
#### 4.2.3. Pairwise non-equivalent representations
If the three representations are pairwise non-equivalent, then each of the fixed points is distinguished, and they descend to three rational points of \(\mathbf{P}_{\xi}\). If by contradiction \(\mathscr{P}_{\xi}(k)=\emptyset\), these three rational points are not liftable, hence the singularities of \(\mathbb{P}^{2}/G\) in \((0:0:1),(0:1:0),(1:0:0)\) are not of type R. The three singularities are of type \(\frac{1}{n_{1}}(b,d)\), \(\frac{1}{n_{2}}(b-d,-d)\) and \(\frac{1}{n_{3}}(d-b,-b)\) respectively for some \(n_{i}|n\), \(\gcd(n_{1},b)=\gcd(n_{1},d)=1\) and similarly for \(n_{2},n_{3}\). Let \(n^{\prime}=\gcd(n_{1},n_{2},n_{3})\) be the greatest common divisor, then \(\gcd(n^{\prime},b)=\gcd(n^{\prime},d)=\gcd(n^{\prime},b-d)=1\). Since the three rational points are not liftable, \(n^{\prime}\) is even by [Brea, Theorem 4]. This is absurd, since it implies that \(b\), \(d\) and \(b-d\), being coprime with \(n^{\prime}\), are all odd.
### \(\boldsymbol{H_{1}}\)
Assume that \(G\) is conjugate to \(H_{1}\simeq C_{3}\times C_{3}\): it is neither critical nor lucky. We want to show that \(\mathscr{G}_{\xi}(k)\neq\emptyset\). If \(g\in G\) is non-trivial, its fixed locus has exactly three points, and if \(h\in G\smallsetminus\langle g\rangle\) then \(g\) permutes the three fixed points of \(h\). It follows that the union \(F\) of the fixed loci of the non-trivial elements of \(G\) is a distinguished subset with 12 points, and \(F/G\subset\mathbb{P}^{2}/G\) descends to a reduced 0-cycle \(Z\subset\mathbf{P}_{\xi}\) of degree 4. In particular, there exists a finite extension \(k^{\prime}/k\) of degree prime with 3 such that \(Z(k^{\prime})\neq\emptyset\).
The stabilizer of each point of \(F\) is a cyclic group of order 3, hence the four singularities of \(\mathbb{P}^{2}/G\) in the points of \(F/G\) are all of type R by [Brea, Theorem 4]. Since \(Z(k^{\prime})\neq\emptyset\), by the Lang-Nishimura theorem for tame stacks we have that \(\mathscr{G}_{\xi}(k^{\prime})\neq\emptyset\). Now observe that \(\mathscr{G}_{\xi}\) is abelian, and thus it corresponds to a cohomology class \(\psi\in\mathrm{H}^{2}(k,A)\) where \(A\), the band of \(\mathscr{G}_{\xi}\), is a 3-torsion finite group scheme. We have that \([k^{\prime}:k]\psi=\mathrm{cor}_{k^{\prime}/k}(\psi_{k^{\prime}})=0\), hence \(\psi=0\) since \([k^{\prime}:k]\) is prime with 3 and \(\mathrm{H}^{2}(k,A)\) is 3-torsion. It follows that \(\mathscr{G}_{\xi}\) is neutral.
### Type (B)
Up to conjugation, we may assume that \(G=D\rtimes C_{3}\), where \(D\) is non-trivial and diagonal, and \(C_{3}\) is generated by a permutation matrix of order \(3\). The fact that \(C_{3}\) normalizes \(D\) implies that \(D\) has the form studied in §4.2.1, and as such it is generated by three matrices \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an},\zeta_{an}^{d},1)\) with \(d^{2}-d+1\equiv 0\pmod{n}\).
Thanks to the preceding case, we may assume that \(G\) is not \(H_{1}\): under this assumption, \(D\subset G\) is the only diagonalizable subgroup of index \(3\), and the fixed locus \(F\) of \(D\) is a distinguished subset. Since \(D\) is non-trivial, either \(a\neq 1\) or \(d\) is not congruent to \(0,1\) modulo \(n\): in both cases \(F\) has exactly three points, \(G\) acts transitively on it and \(F/G\subset\mathbb{P}^{2}/G\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\). The singularity of \(\mathbb{P}^{2}/G\) in the point \(F/G\) is cyclic of type \(\frac{1}{n}(1,d)\).
If \(d^{2}\not\equiv 1\pmod{n}\), then the singularity is of type R [Brea, Theorem 4]. If \(d^{2}\equiv 1\pmod{n}\), since \(d^{2}-d+1\equiv 0\pmod{n}\), then \(d\equiv 2\pmod{n}\) and \(n\) is either \(1\) or \(3\). In both cases the singularity is again of type R by [Brea, Theorem 4].
### Type (C), not conjugate to \(H_{2}\)
Assume that \(G\) is of type **(C)** and not conjugate to \(H_{2}\). Up to conjugation there is a diagonal, normal subgroup \(D\subset G\) with \(G/D\simeq S_{3}\) and \(M_{1}\in G\).
Let us show that \(D\) is the only normal, diagonal subgroup of index \(6\). If \(D^{\prime}\) is another such subgroup, then the image of \(D^{\prime}\) in \(S_{3}\) is normal, abelian and non-trivial, i.e. it is \(C_{3}\subset S_{3}\). It follows that \(D^{\prime}\) contains an element of the form \(M_{1}N\), where \(N\in D\) is diagonal. If \(D^{\prime}\cap D\) is non-trivial, since \(M_{1}N\in D^{\prime}\) then \(D^{\prime}\) has no fixed points, hence it is not diagonal. If \(D^{\prime}\cap D\) is trivial, then \(|D^{\prime}|=|D|=3\) and \(D\) is generated by \(M_{0}\), since \(\langle M_{0}\rangle\) is the only diagonal subgroup of order \(3\) normalized by \(M_{1}\). It follows that \(G\) is conjugate to \(H_{2}\).
This implies that the fixed locus \(F\) of \(D\) is distinguished, it contains \(3\) points and \(F/G\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\). We want to show that \(p\) is liftable. Let \(x\in F\) be a point with stabilizer \(G_{x}\), we have that \(G_{x}\) is an extension of \(C_{2}\) by \(D\).
As in the previous case, \(D\simeq C_{a}\times C_{an}\) with generators \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an},\zeta_{an}^{d},1)\), and again we have \(d^{2}-d+1\equiv 0\pmod{n}\). Now we have another condition, which is the fact that \(D\) is normalized by a matrix \(M=\left(\begin{smallmatrix}0&\alpha&0\\ \beta&0&0\\ 0&0&1\end{smallmatrix}\right)\), and this implies \(d^{2}\equiv 1\pmod{n}\), hence \(n\) is either \(1\) or \(3\).
If \(n=1\), then \(M^{2}\in C_{a}\times C_{a}\) and hence \(\alpha\beta\) is a power of \(\zeta_{a}\). Up to multiplying \(M\) by a suitable element of \(C_{a}\times C_{a}\), we may thus assume that \(\alpha\beta=1\). If \(\alpha\beta=1\) then \(M\) is a pseudoreflection, and since \(C_{a}\times C_{a}\) is generated by pseudoreflections we get that \(G_{x}\) is generated by pseudoreflections as well. It follows that \(p\) is smooth, and in particular liftable.
If \(n=3\), then \(G_{x}\) is an extension of \(D_{3}\) by \(C_{a}\times C_{a}\), hence it is \(\operatorname{R}_{2}\) by [BVb, Propositions 6.17, 6.20] and \(p\) is liftable.
### \(\boldsymbol{H_{3}}\), \(\boldsymbol{\zeta_{12}}\in k\)
Observe that \(\mathbb{Q}(\zeta_{12})=\mathbb{Q}(\zeta_{3},\zeta_{4})=\mathbb{Q}(\sqrt{3},i)\), hence \(\sqrt{3}\in k\). We have \(\det M_{0}=\det M_{1}=1\), \(\det M_{2}=-1\), \(\det M_{3}=\det M_{4}=3\sqrt{3}\), hence \(\det M_{i}\in k^{*3}\) for every \(i\leq 4\). Since \(M_{i}\in\operatorname{GL}_{3}(k)\) for every \(i\), we may thus find \(a_{i}\in k^{*}\) such that \(M_{i}^{\prime}=a_{i}M_{i}\in\operatorname{SL}_{3}(k)\) for every \(i\leq 4\). Let \(H_{i}^{\prime}\) be the inverse image of \(H_{i}\) in \(\operatorname{SL}_{3}\), it follows that \(H_{i}^{\prime}\) and \(H_{i}\) are constant group schemes over \(k\) for \(i\leq 4\). Furthermore, \(M_{4}^{\prime 4}=\operatorname{id}\in\operatorname{SL}_{3}(k)\).
Since \(H_{5}/H_{2}\simeq A_{4}\) and \(H_{4}/H_{2}\simeq\operatorname{Kl}\), Lemma 10 implies that the normalizer of \(H_{3}\) in \(\operatorname{PGL}_{3}(K)\) is \(H_{4}\). Observe that \(\operatorname{H}^{1}(k,H_{4}^{\prime})\to\operatorname{H}^{1}(k,H_{4}/H_{3})\) is surjective,
because there is a homomorphism \(C_{4}\to H^{\prime}_{4}\) defined by \(1\mapsto M^{\prime}_{4}\) which lifts the projection \(H^{\prime}_{4}\to H_{4}/H_{3}\simeq C_{2}\), and \(\operatorname{H}^{1}(k,C_{4})\to\operatorname{H}^{1}(k,C_{2})\) is surjective since \(\zeta_{4}\in k\).
Observe that the image of \(\operatorname{H}^{1}(k,H^{\prime}_{4})\to\operatorname{H}^{1}(k,H_{4})\) is contained, by construction, in the kernel of \(\operatorname{H}^{1}(k,H_{4})\to\operatorname{H}^{1}(k,\operatorname{PGL}_{3})\), since the composition \(\operatorname{H}^{1}(k,\operatorname{SL}_{3})\to\operatorname{H}^{1}(k, \operatorname{GL}_{3})\to\operatorname{H}^{1}(k,\operatorname{PGL}_{3})\) is trivial (notice that these are sets, not groups, but they have a preferred object and it makes sense to consider kernels). We may then apply [Brec, Theorem 16] and obtain that every \(H_{3}\)-structure with field of moduli \(k\) descends to \(\mathbb{P}_{k}^{2}\).
### \(\boldsymbol{H_{4}}\)
Assume \(G=H_{4}\); then \(H_{1}\simeq C_{3}\times C_{3}\) is the only \(3\)-Sylow subgroup (it is normal) and hence it is characteristic. It follows that the union \(F\) of the fixed loci of the non-trivial elements of \(H_{1}\) is a finite, distinguished subset of degree \(12\) which is the union of \(4\) triangles corresponding to the \(4\) cyclic subgroups of \(C_{3}\times C_{3}\) (see §3.2). The action of \(H_{4}\) on \(F\) is transitive since \(H_{1}\) acts transitively on each triangle and \(H_{4}/H_{2}\subset H_{5}/H_{2}\simeq A_{4}\) acts as the Klein group, and hence transitively, on the set of \(4\) triangles.
It follows that \(F/H_{4}\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\). The stabilizer of a point of \(F\) has order \(72/12=6\) and is isomorphic to \(D_{3}\) (it is easy to see that the element of order \(2\) maps to \(-\operatorname{Id}\in\operatorname{SL}(2,3)\)). Since \(D_{3}\) is \(\operatorname{R}_{2}\) [BVb, Proposition 6.17], then \(p\) is liftable.
### \(\boldsymbol{H_{5}}\)
Assume \(G=H_{5}\), it has order \(|H_{5}|=216=2^{3}\cdot 3^{3}\). Let \(g\in H_{5}\) be an element mapping to \(-\operatorname{Id}\in\operatorname{SL}(2,3)\), conjugation by \(g\) acts as \(-1\) on \(H_{1}\) and as the identity on \(H_{5}^{\operatorname{ab}}\). Because of this, and since \(H_{1}\) has odd order, the homomorphism \(H_{1}\to H_{5}^{\operatorname{ab}}\) is trivial (an element in the image has odd order and is equal to its inverse, i.e. it is the identity). Moreover, we have a projection \(H_{5}^{\operatorname{ab}}\to H_{5}/H_{4}\simeq C_{3}\). This implies that \(9\) is the largest power of \(3\) dividing the order of \([H_{5},H_{5}]\), hence \(H_{1}\) is the unique \(3\)-Sylow subgroup of \([H_{5},H_{5}]\) (it is normal), which in turn implies that \(H_{1}\) is characteristic in \(H_{5}\).
Since \(H_{1}\) is characteristic, the union \(F\) of the fixed loci of the non-trivial elements of \(H_{1}\) is a finite, distinguished subset of degree \(12\). As in the previous case, the action of \(H_{5}\) on \(F\) is transitive and hence \(F/H_{5}\subset\mathbb{P}^{2}/H_{5}\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\). Let \(x\in F\) be a point and \(G_{x}\subset H_{5}\) its stabilizer, the order of \(G_{x}\) is \(216/12=18\). Since \(H_{4}\subset H_{5}\) is normal, \(H_{4}\cap G_{x}\) is a normal subgroup of \(G_{x}\), it is isomorphic to \(D_{3}\) by the argument given in the previous case and \(G_{x}/(H_{4}\cap G_{x})\simeq C_{3}\). By [BVb, 6.17, 6.19, 6.20] we get that \(G_{x}\) is of type \(\operatorname{R}_{2}\), hence \(p\) is liftable.
### \(\boldsymbol{A_{5}}\) or \(\boldsymbol{A_{6}}\)
If \(G\simeq A_{n}\) for \(n=5,6\), let us first show that the eigenvalues of \((1,2,3)\in A_{n}\) as an element of \(G\subset\operatorname{PGL}_{3}(K)\) are pairwise different. If they are not, up to conjugation we may assume that the matrix of \((1,2,3)\) is \(\operatorname{diag}(\zeta_{3},\zeta_{3},1)\). Since \((1,2)(4,5)\) normalizes \(\langle(1,2,3)\rangle\), then it must stabilize its fixed locus, namely \((0:0:1)\) and \(\{(s:t:0)\}\). It follows that \((1,2)(4,5)\in\operatorname{GL}_{2}\subset\operatorname{PGL}_{3}\) and hence it commutes with \((1,2,3)=\operatorname{diag}(\zeta_{3},\zeta_{3},1)\), which is absurd.
Every automorphism of \(A_{n}\) maps a \(3\)-cycle to a \(3\)-cycle. For \(n=5\), this is obvious. For \(n=6\), the only other elements of order \(3\) are double \(3\)-cycles, but there is only one conjugacy class of \(3\)-cycles with \(40\) elements and two conjugacy classes of double \(3\)-cycles with \(20\) elements each. Furthermore, the conjugacy action of \(A_{n}\) on \(3\)-cycles is transitive for both \(n=5,6\).
Let \(x_{1},x_{2},x_{3}\) be the fixed points of \((1,2,3)\). Since \((1,2)(4,5)\) normalizes \(\langle(1,2,3)\rangle\) but it does not commute with \((1,2,3)\), it fixes exactly one of the \(x_{i}\)s, say \(x_{1}\), and swaps \(x_{2},x_{3}\). Let \(F\) be the orbit of \(x_{1}\) and \(F^{\prime}\) the orbit of \(x_{2},x_{3}\): since there is only one conjugacy class of \(3\)-cycles, the union of the fixed loci of the \(3\)-cycles is \(F\cup F^{\prime}\) and we have either \(F=F^{\prime}\) or \(|F^{\prime}|=2|F|\neq|F|\). In any case, \(F\) is a distinguished subset, \(x_{1}\in F\) and \(A_{n}\) acts transitively on it. It follows that \(F/A_{n}\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\).
Let \(G_{x_{1}}\) be the stabilizer of \(x_{1}\), since it contains both \((1,2,3)\) and \((1,2)(4,5)\) then it has trivial center (their respective centralizers have trivial intersection). The action of \(G_{x_{1}}\) on the tangent space of \(x_{1}\) gives us an embedding \(G_{x_{1}}\subset\operatorname{GL}_{2}(K)\). Since \(G_{x_{1}}\) has trivial center, the composition \(G_{x_{1}}\to\operatorname{GL}_{2}(K)\to\operatorname{PGL}_{2}(K)\) is injective, and hence \(G_{x_{1}}\) is isomorphic either to \(D_{n},A_{4},S_{4}\) or \(A_{5}\). This implies that \(G_{x_{1}}\simeq D_{n}\) with \(n\) odd, since in all the other cases \(G_{x_{1}}\subset\operatorname{GL}_{2}(K)\) would contain a subgroup isomorphic to \(C_{2}\times C_{2}\) and hence the matrix \(-\operatorname{Id}\in\operatorname{GL}_{2}(K)\), which is central. It follows that \(G_{x_{1}}\) is \(\operatorname{R}_{2}\) by [3, Proposition 6.17], hence \(p\) is liftable.
### \(\operatorname{PSL}(2,7)\)
Assume \(G\simeq\operatorname{PSL}(2,7)\), we have \(|\operatorname{PSL}(2,7)|=168=2^{3}\cdot 3\cdot 7\). We may choose an embedding of \(\operatorname{PSL}(2,7)\) in \(\operatorname{PGL}_{3}(K)\) so that it contains the matrix \(M_{7}=\operatorname{diag}(1,\zeta_{7},\zeta_{7}^{3})\) and the permutation matrix \(M_{1}\) of order \(3\) given in §3.2, see [12, Chapter XII, §123]. Furthermore, we have \(M_{1}^{-1}M_{7}M_{1}=M_{7}^{4}\).
Let \(n_{7}\) be the number of \(7\)-Sylow subgroups, by Sylow's third theorem \(n_{7}\) divides \(2^{3}\cdot 3\) and \(n_{7}\cong 1\pmod{7}\), i.e. \(n_{7}\) is either \(1\) or \(8\). The upper and the lower triangular matrices in \(\operatorname{PSL}(2,7)\) with \(1\)s on the diagonal form two different \(7\)-Sylow subgroups, hence \(n_{7}=8\). It follows that the normalizer of a \(7\)-Sylow subgroup has order \(168/8=21\). Since \(M_{1}\) normalizes \(M_{7}\) and has order \(3\), then the normalizer of \(\langle M_{7}\rangle\) is \(\langle M_{7},M_{1}\rangle\) and hence the centralizer of \(M_{7}\) is \(\langle M_{7}\rangle\).
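The Sylow counting above is elementary arithmetic; a throwaway Python check (our own snippet, not part of the original argument) confirms that \(8\) is the only non-trivial option:

```python
# |PSL(2,7)| = 168 = 2^3 * 3 * 7, so n_7 divides 24 and n_7 = 1 (mod 7)
print([m for m in range(1, 25) if 24 % m == 0 and m % 7 == 1])  # [1, 8]
```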
Let \(F\) be the union of the fixed loci of all the \(7\)-Sylow subgroups, it is a distinguished subset. Since \(M_{1}\) permutes the three fixed points of \(M_{7}\) and \(\operatorname{PSL}(2,7)\) acts transitively by conjugation on the \(7\)-Sylow subgroups, then the action of \(G\) on \(F\) is transitive, and \(F/G\) descends to a rational point \(p\in\mathbf{P}_{\xi}(k)\).
Let \(x=(0:0:1)\in F\) and let \(G_{x}\) be its stabilizer, then \(M_{7}\in G_{x}\). Since the centralizer of \(M_{7}\) is \(\langle M_{7}\rangle\), either \(G_{x}\simeq C_{7}\) or \(G_{x}\) has trivial center. If \(G_{x}\simeq C_{7}\), then it is \(\operatorname{R}_{2}\) by [3, Theorem 6.19] and hence \(p\) is liftable. If \(G_{x}\) has trivial center, the homomorphism \(G_{x}\to\operatorname{GL}_{2}(K)\to\operatorname{PGL}_{2}(K)\) given by the action on the tangent space of \(x\) is injective, hence \(G_{x}\) is dihedral since the other finite subgroups of \(\operatorname{PGL}_{2}(K)\) are either abelian or of order prime with \(7\). If \(G_{x}\) is dihedral, then it is of type \(\operatorname{R}_{2}\) by [3, Proposition 6.17], hence we get that \(p\) is liftable in this case, too.
This concludes the proof of the first half of the theorem. Let us now construct the counterexamples. As we explained in §2, we will do this by applying the first half of [1, Theorem 4].
### \(\boldsymbol{C_{a}\times C_{an}}\), \(\boldsymbol{d^{2}-d+1\cong 0\pmod{n}}\), \(\boldsymbol{3|an}\)
With an abuse of notation, for every integer \(m\) we write \(\Delta\) for the diagonal subgroup of \(C_{m}^{3}\), we have a preferred embedding \(C_{m}^{3}/\Delta\subset\operatorname{PGL}_{3}(K)\) by diagonal matrices.
#### 4.11.1. \(\boldsymbol{3|n}\)
Consider the action by permutation of \(C_{3}\) on \(C_{3}^{3}/\Delta\), the semidirect product \(E=(C_{3}^{3}/\Delta)\rtimes C_{3}\) is a non-abelian group of order \(27\) with a central subgroup \(A=\langle(1,2,0)\rangle\subset C_{3}^{3}/\Delta\subset E\) such that \(E/A\simeq C_{3}^{2}\). Observe that \(E\) is \(3\)-torsion:
if \((c,\phi)\) is an element, then \(3(c,\phi)=(c+\phi(c)+\phi^{2}(c),0)\) is trivial since clearly \(c+\phi(c)+\phi^{2}(c)\in\Delta\subset C_{3}^{3}\). Since \(E/A\simeq C_{3}^{2}\) is abelian, \(A\) is central and \(E\) is non-abelian, the extension is non-split.
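To make this concrete, here is a minimal brute-force verification in Python (a sketch of our own; cosets of \(\Delta\) are represented by tuples normalized so that the first coordinate is \(0\)) that \(E\) has order \(27\), is non-abelian and is \(3\)-torsion:

```python
from itertools import product

def norm(c):
    """Canonical representative of the coset c + Delta in C_3^3 / Delta."""
    return tuple((x - c[0]) % 3 for x in c)

def rot(c, k):
    """Action of the generator of C_3: cyclic permutation of coordinates, k times."""
    k %= 3
    return c[-k:] + c[:-k] if k else c

def mul(g, h):
    """Product in the semidirect product E: (c, k)(d, m) = (c + k.d, k + m)."""
    (c, k), (d, m) = g, h
    d_rot = rot(d, k)
    return (norm(tuple((c[i] + d_rot[i]) % 3 for i in range(3))), (k + m) % 3)

E = sorted({(norm(c), k) for c in product(range(3), repeat=3) for k in range(3)})
e = ((0, 0, 0), 0)
assert len(E) == 27
assert all(mul(mul(g, g), g) == e for g in E)             # E is 3-torsion
assert any(mul(g, h) != mul(h, g) for g in E for h in E)  # E is non-abelian
```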
Since \(3|n\), then \(d^{2}-d+1\cong 0\pmod{n}\) implies that \(d\cong 2\pmod{3}\) and \(9\nmid n\). Let \(N_{0}\) be the subgroup of \(C_{an}^{3}/\Delta\subset\operatorname{PGL}_{3}(K)\) generated by \(G\) and by \(C_{3a}^{3}/\Delta\subset\operatorname{PGL}_{3}(K)\), clearly \(G\subset N_{0}\subset C_{an}^{3}/\Delta\) and \(N_{0}/G\simeq C_{3}\). The natural projection \(C_{an}^{3}/\Delta\to C_{3}^{3}/\Delta\) maps \(G\) onto \(A=\langle(1,2,0)\rangle\) since \(d\cong 2\pmod{3}\), while \(N_{0}\to C_{3}^{3}/\Delta\) is surjective by construction.
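The two congruence facts used above reduce to the values of \(d^{2}-d+1\) modulo \(3\) and \(9\); a one-line check (again a hypothetical snippet of ours) confirms them:

```python
# d^2 - d + 1 = 0 (mod 3) forces d = 2 (mod 3) ...
print([d for d in range(3) if (d * d - d + 1) % 3 == 0])  # [2]
# ... while d^2 - d + 1 = 0 (mod 9) has no solution, hence 9 cannot divide n
print([d for d in range(9) if (d * d - d + 1) % 9 == 0])  # []
```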
The condition \(d^{2}-d+1\cong 0\pmod{n}\) implies that the action by permutation of \(C_{3}\) on \(C_{an}^{3}/\Delta\) restricts to \(G\), and hence to \(N_{0}\). Denote by \(N\) the semidirect product \(N_{0}\rtimes C_{3}\subset\operatorname{PGL}_{3}(K)\) generated by \(N_{0}\) and by \(M_{1}\), clearly \(N/G\simeq C_{3}^{2}\) and we have a commutative diagram of short exact sequences
Choose \(k=\mathbb{C}((s))((t))\), and consider \(G\subset N\subset\operatorname{PGL}_{3}(k)\) as constant group schemes, so that torsors correspond to homomorphisms from the Galois group. The natural projection \(\operatorname{Gal}(K/k)=\hat{\mathbb{Z}}^{2}\to C_{3}^{2}\) defines an \(N/G\)-torsor \(T\) which does not lift to \(E\), since \(E\) is \(3\)-torsion and hence a lifting \(\hat{\mathbb{Z}}^{2}\to E\) would give us a section \(C_{3}^{2}\to E\). In particular, \(T\) does not lift to \(N\), as desired.
#### 4.11.2. \(\boldsymbol{3\nmid n}\), \(\boldsymbol{3\mid a}\)
Now assume \(3\nmid n\), \(3|a\). Since \(3\) does not divide \(n\) and \(2^{2}-2+1\cong 0\pmod{3}\), we may assume that \(d\) is such that \(d^{2}-d+1\cong 0\pmod{3n}\). Let \(N_{0}\subset C_{3an}^{3}/\Delta\subset\operatorname{PGL}_{3}(K)\) be the subgroup generated by \((3n,0,0)\), \((0,3n,0)\) and \((1,d,0)\), then \(G\subset N_{0}\) and \(N_{0}/G\simeq C_{3}\). Since \(d^{2}-d+1\cong 0\pmod{3n}\), then both \(G\) and \(N_{0}\) are stabilized by the action of \(C_{3}\) on \(C_{3an}^{3}/\Delta\). Let \(N\) be the semidirect product \(N_{0}\rtimes C_{3}\), again we have \(N/G\simeq C_{3}^{2}\).
Observe that every element of \(N_{0}\subset C_{3an}^{3}/\Delta\) fixed by \(C_{3}\) is contained in \(C_{3}^{3}/\Delta\subset C_{3an}^{3}/\Delta\), and since \(3|a\) then \(C_{3}^{3}/\Delta\subset G\). Because of this, there are no abelian subgroups of \(N\) which map surjectively on \(N/G\simeq C_{3}^{2}\). Hence, if we choose \(k=\mathbb{C}((s))((t))\), \(\operatorname{Gal}(K/k)=\hat{\mathbb{Z}}^{2}\to N/G=C_{3}^{2}\) as in the previous case, there is no lifting \(\operatorname{Gal}(K/k)\to N\).
### \(\boldsymbol{C_{a}\times C_{a2^{b}n}}\), \(\boldsymbol{d^{2}\cong 1\pmod{n}}\), \(\boldsymbol{d\cong\pm 1\pmod{2^{b}}}\)
Assume that \(G\simeq C_{a}\times C_{a2^{b}n}\) is generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\) and \(\operatorname{diag}(\zeta_{a2^{b}n},\zeta_{a2^{b}n}^{d},1)\) for some positive integers \(a,b,n,d\) with \(d^{2}\cong 1\pmod{n}\), \(d\cong\pm 1\pmod{2^{b}}\) and \(n\) odd.
Consider the semidirect product \(C_{2^{b}}^{2}\rtimes C_{2}\) where the action swaps the coordinates, define \(E_{1}\) as the subgroup generated by \((1,1,0),(2^{b-1},0,0),(0,0,1)\) and \(E_{-1}\) as the one generated by \((1,-1,0),(2^{b-1},0,0),(0,0,1)\), and let \(A_{\pm 1}\subset E_{\pm 1}\) be the subgroup generated by \((1,\pm 1,0)\). We have that \(E_{\pm 1}\) is an extension of \(C_{2}^{2}\) by \(A_{\pm 1}\simeq C_{2^{b}}\), and there is no abelian subgroup of \(E_{\pm 1}\) mapping surjectively on \(C_{2}^{2}=E_{\pm 1}/A_{\pm 1}\).
Let \(N_{0}\subset\operatorname{PGL}_{3}(K)\) be the subgroup generated by \(G\) and by \(\operatorname{diag}(\zeta_{a2^{b}n}^{2^{b-1}n},1,1)=\operatorname{diag}(\zeta_{2a},1,1)\), it is abelian and \(G\) has index \(2\) in \(N_{0}\). Observe that \(\operatorname{diag}(\zeta_{2a},\zeta_{2a}^{d},1)\in G\), and since \(d\) is odd then \(\operatorname{diag}(\zeta_{2a},\zeta_{2a}^{-1},1)\in G\) and \(\operatorname{diag}(1,\zeta_{2a},1)\in N_{0}\). Since \(d^{2}\cong 1\pmod{2^{b}n}\), a permutation matrix swapping the first two coordinates normalizes both \(G\) and \(N_{0}\), let \(N\simeq N_{0}\rtimes C_{2}\) be the subgroup generated by this permutation matrix and \(N_{0}\).
Consider the natural projection \(N_{0}\to C^{2}_{2^{b}}\), if \(d\cong 1\pmod{2^{b}}\) then it extends to a surjective map \(N\to E_{1}\), while if \(d\cong-1\pmod{2^{b}}\) it extends to a surjective map \(N\to E_{-1}\). In both cases, \(G\) maps surjectively on \(A_{\pm 1}\). We thus have a commutative diagram of short exact sequences
We may then choose \(k=\mathbb{C}((s))((t))\), \(\operatorname{Gal}(K/k)=\hat{\mathbb{Z}}^{2}\to N/G\simeq C_{2}^{2}\) similarly to the previous cases, there is no lifting \(\operatorname{Gal}(K/k)\to N\).
### \(\boldsymbol{H_{2}}\)
Again, we choose \(k=\mathbb{C}((s))((t))\) and use constant group schemes. Observe that \(H_{2}\subset H_{4}\) is normal, \(H_{4}/H_{1}\subset\operatorname{SL}(2,3)\) is the quaternion group, \(H_{2}/H_{1}\subset H_{4}/H_{1}\) is the center and \(H_{4}/H_{2}\simeq C_{2}^{2}\). Since the quaternion group has no abelian subgroups of rank \(2\), the natural projection \(\operatorname{Gal}(K/k)=\hat{\mathbb{Z}}^{2}\to H_{4}/H_{2}=C_{2}^{2}\) does not lift to \(H_{4}\).
### \(\boldsymbol{H_{3}}\), \(\boldsymbol{\zeta_{12}\notin k}\)
Take \(k=\mathbb{R}\), and consider the group scheme structures \(\mathfrak{H}_{3},\mathfrak{H}_{4}\) on \(H_{3},H_{4}\) given in §3.3. We have a factorization
\[\mathfrak{H}_{4}\to\mathfrak{H}_{4}/\mathfrak{H}_{2}\simeq\mathfrak{K}\to \mathfrak{H}_{4}/\mathfrak{H}_{3}\simeq C_{2}.\]
It is enough to show that \(\operatorname{H}^{1}(\mathbb{R},\mathfrak{K})\) is trivial, since this implies that the only non-trivial \(C_{2}\)-torsor over \(\mathbb{R}\) does not lift to \(\mathfrak{H}_{4}\).
By contradiction, let \(T\to\operatorname{Spec}\mathbb{R}\) be a non-trivial \(\mathfrak{K}\)-torsor. Since it is non-trivial and has degree \(4\), then clearly \(T=\operatorname{Spec}(\mathbb{C}\times\mathbb{C})\) as a scheme and \(\mathfrak{K}\subset\mathfrak{S}_{4}\) where \(\mathfrak{S}_{4}\) is the group scheme of automorphisms of the etale scheme \(T\); the group scheme \(\mathfrak{S}_{4}\) is a twisted form of \(S_{4}\). The two automorphisms \((a,b)\mapsto(b,a)\) and \((a,b)\mapsto(\bar{a},\bar{b})\) of \(\operatorname{Spec}(\mathbb{C}\times\mathbb{C})\) act on the four geometric points of \(T\) as different double transpositions, hence we have an embedding \(\operatorname{Kl}\subset\mathfrak{S}_{4}\), where \(\operatorname{Kl}\) is the Klein group with trivial Galois action. It follows that \(\mathfrak{K}=\operatorname{Kl}\), which is absurd: since \(\zeta_{3}\not\in\mathbb{R}\), by construction the Galois action on \(\mathfrak{K}(\mathbb{C})\) is non-trivial.
## 5. Plane curves
Let \(j:C\hookrightarrow\mathbb{P}^{2}_{K}\) be a smooth plane curve of degree \(d\geq 4\) defined over \(K\). The embedding in \(\mathbb{P}^{2}_{K}\) is unique up to composition with elements of \(\operatorname{PGL}_{3}(K)\) [1, Appendix A, §1, Exercise 18] and hence \(\operatorname{Aut}(C)=\operatorname{Aut}(\mathbb{P}^{2}_{K},C)\). The field of moduli \(k_{(\mathbb{P}^{2},C)}\) of the pair clearly contains the field of moduli \(k_{C}\) of \(C\). Let \(\mathscr{G}_{(\mathbb{P}^{2},C)}\to\operatorname{Spec}k_{(\mathbb{P}^{2},C)}\) be the residual gerbe of \((\mathbb{P}^{2},C)\) and \(\mathscr{G}_{C}\to\operatorname{Spec}k_{C}\) the residual gerbe of \(C\), there is a natural forgetful morphism \(\mathscr{G}_{(\mathbb{P}^{2},C)}\to\mathscr{G}_{C}\).
**Lemma 12**.: _The fields of moduli \(k_{(\mathbb{P}^{2},C)}\) and \(k_{C}\) are equal, and \(\mathscr{G}_{(\mathbb{P}^{2},C)}\to\mathscr{G}_{C}\) is an isomorphism._
Proof.: Let \(\sigma\in\operatorname{Gal}(K/k_{C})\) be an element, there exists an isomorphism \(\phi:C\to\sigma^{*}C\). Choose \(\tau:\mathbb{P}^{2}_{K}\to\sigma^{*}\mathbb{P}^{2}_{K}\) any isomorphism. The composition
\[\tau^{-1}\circ\sigma^{*}j\circ\phi:C\to\mathbb{P}^{2}\]
is a plane embedding of \(C\), hence it is equal to \(g\circ j\) for some \(g\in\operatorname{PGL}_{3}(K)\). We thus have a commutative diagram
which shows that \(\sigma\in\operatorname{Gal}(K/k_{(\mathbb{P}^{2},C)})\) and hence \(k_{C}=k_{(\mathbb{P}^{2},C)}\). The fact that \(\mathscr{G}_{(\mathbb{P}^{2},C)}\to\mathscr{G}_{C}\) is an isomorphism can now be checked after base changing from \(k_{C}=k_{(\mathbb{P}^{2},C)}\) to \(K\), where it follows from the equality \(\operatorname{Aut}(C)=\operatorname{Aut}(\mathbb{P}^{2},C)\).
Up to enlarging \(k\), we may then assume \(k_{(\mathbb{P}^{2},C)}=k_{C}=k\). In particular, we get that \(C\) is defined over \(k\) if and only if the pair \((\mathbb{P}^{2},C)\) is defined over \(k\). As a direct consequence of Lemma 12, we obtain a new proof of the following result by J. Roe and X. Xarles.
**Corollary 13** ([18, Theorem 5]).: _For every model \(\mathfrak{C}\) of \(C\) over \(k\), there exists a unique Brauer-Severi surface \(P_{\mathfrak{C}}\) over \(k\) with an embedding \(\mathfrak{C}\hookrightarrow P_{\mathfrak{C}}\)._
Theorem 4 follows simply by cross-checking Theorem 6 with the list of possible groups of automorphisms of plane sextics [1]. It remains to prove Theorem 1. If the degree of \(C\) is prime with \(3\), then we have \(P_{\mathfrak{C}}=\mathbb{P}^{2}_{k}\) for every model \(\mathfrak{C}\) of \(C\) over \(k\), since the index of \(P_{\mathfrak{C}}\) divides both \(3\) and \(\deg C\).
Proof of Theorem 1.: Let \(C\subset\mathbb{P}^{2}_{K}\) be a smooth plane curve of degree \(d\) prime with \(3\) with field of moduli \(k\). Assume that \(C\) does not descend to \(k\), we want to show that \(\operatorname{Aut}(C)\) has the form \(C_{a}\times C_{2an}\) with \(2an\mid d\) if \(a\neq 1\) and \(4n\mid d(d-2)\) otherwise. Clearly, we may assume \(d\geq 4\). Observe that, since \(3\nmid d\), then \(3\) divides the genus \((d-1)(d-2)/2\) of \(C\).
Notice that the stabilizer of a point of \(C\) acts faithfully on the tangent space, in particular it is cyclic. Because of this, if \(L\) is a line fixed by a non-trivial element \(g\) of \(\operatorname{Aut}(C)\) then \(C\) has normal crossing with \(L\): if \(p\in L\cap C\) and \(L\) is tangent to \(C\) in \(p\), then \(g\) acts trivially on the tangent space of \(C\) in \(p\), which is absurd.
By Theorem 6, \(\operatorname{Aut}(C)\) is critical. There are four types of critical subgroups of \(\operatorname{PGL}_{3}\), let us check them all.
### \(\boldsymbol{C_{a}\times C_{an}}\), \(\boldsymbol{3|an}\)
Since \(3\) divides both \(an\) and the genus, by Riemann-Hurwitz we have
\[-2\cong\deg R\pmod{3},\]
where \(R\subset C\) is the ramification divisor of \(C\to C/\operatorname{Aut}(C)\). Let us show that \(\deg R\) is a multiple of \(3\), this gives the desired contradiction.
We have to study the stabilizers of the action of \(\operatorname{Aut}(C)\) on \(C\), and we know the stabilizers of the action of \(\operatorname{Aut}(C)\) on \(\mathbb{P}^{2}_{K}\). There are three lines \(L_{1},L_{2},L_{3}\) containing all the points with non-trivial stabilizer, see §4.2.1. Let \(p_{1},p_{2},p_{3}\) be the intersection points, the stabilizer of \(p_{i}\) is isomorphic to \(C_{a}\times C_{an}\) for every \(i\), while the other points of \(L_{1},L_{2},L_{3}\) have stabilizers isomorphic to \(C_{a}\). In particular, \(p_{1},p_{2},p_{3}\) are the only orbits of degree prime with \(3\).
Let \(\mathbf{P}_{C}\) be the compression of \((\mathbb{P}^{2}_{K},C)\), it is a twisted form of \(\mathbb{P}^{2}_{K}/\operatorname{Aut}(C)\) over \(k\). The singularities of \(\mathbb{P}^{2}_{K}/\operatorname{Aut}(C)\) in \(p_{1},p_{2},p_{3}\) are of type R, see §4.2.1. Since \(C\) is not defined over the field of moduli, these three points are not \(k\)-rational, hence they map to a unique \(k^{\prime}\)-rational point of \(\mathbf{P}_{C}\) for some extension \(k^{\prime}/k\) of degree \(3\).
Let \(\mathbf{C}\subset\mathbf{P}_{C}\) be the coarse moduli space of the universal curve \(\mathscr{C}\subset\mathscr{P}_{C}\to\mathscr{G}_{C}\), either \(\mathbf{C}\) contains said \(k^{\prime}\)-rational point or not, i.e. either \(C\) contains all three points \(p_{1},p_{2},p_{3}\) or none. In any case, the degree of the ramification divisor is clearly a multiple of \(3\).
### \(\boldsymbol{H_{2},H_{3}}\)
As in the preceding case, it is enough to show that the degree of the ramification divisor is a multiple of \(3\). Recall that the orders of \(H_{2}\), \(H_{3}\) are \(18,36\) respectively.
Let \(p\in C\) be a point, since the stabilizer acts faithfully on the tangent space of \(p\) then it is cyclic of order \(m\geq 1\). There are no elements of order \(9\) in \(H_{3}\), it follows that \(9\nmid m\) and hence \(|\operatorname{Aut}(C)|/m\) is a multiple of \(3\), i.e. the degree of every orbit is a multiple of \(3\). This clearly implies that the degree of the ramification divisor is a multiple of \(3\).
### \(\boldsymbol{C_{a}\times C_{a2^{b}n}}\)
In this case, we want to show that if \(a\neq 1\) then \(2^{b}an\mid d\) and \(2^{b+1}n\mid d(d-2)\) otherwise. The group \(C_{a}\times C_{a2^{b}n}\) is generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\) and \(\operatorname{diag}(\zeta_{a2^{b}n},\zeta_{a2^{b}n}^{e},1)\) with \(n\) odd and, for every prime power \(q\mid 2^{b}n\), we have \(e\cong\pm 1\pmod{q}\).
Let \(L_{i}\) be the line \(x_{i}=0\) for \(i=1,2,3\), the points of \(\mathbb{P}^{2}\) with non-trivial stabilizer are contained in \(L_{1},L_{2},L_{3}\), see §4.2.2. If \(a\neq 1\), the three intersection points of these lines have non-cyclic stabilizers and hence are not in \(C\). Since the orbits of \(L_{2}\) different from \((1:0:0)\), \((0:0:1)\) have cardinality equal to \(a2^{b}n\), we get that \(a2^{b}n\mid d=\deg(C\cap L_{2})\). Assume \(a=1\). We want to prove that \(2^{b+1}n\mid d(d-2)\); equivalently, \(d\) is even and every prime power \(q\mid 2^{b}n\) divides either \(d\) or \(d-2\).
The point \((0:0:1)\) is distinguished and descends to a rational point of the compression \(\mathbf{P}_{C}\). Let \(\mathbf{C}\subset\mathbf{P}_{C}\) be the coarse moduli space of the universal curve \(\mathscr{C}\subset\mathscr{P}_{C}\to\mathscr{G}_{C}\), there is a rational map \(\mathbf{C}\dashrightarrow\mathscr{G}_{C}\). If the image of \((0:0:1)\) is in \(\mathbf{C}(k)\), then \(\mathscr{G}_{C}(k)\neq\emptyset\) by the Lang-Nishimura theorem for tame stacks, hence \((0:0:1)\not\in C\).
Furthermore, \(L_{3}/\operatorname{Aut}(C)\) descends to a non-trivial Brauer-Severi curve \(\mathbf{L}_{3}\subset\mathbf{P}_{C}\): if \(\mathbf{L}_{3}\simeq\mathbb{P}^{1}\), then \(\mathscr{G}_{C}(k)\neq\emptyset\) by the Lang-Nishimura theorem for stacks. Notice that \(\operatorname{diag}(-1,-1,1)\in\operatorname{Aut}(C)\) fixes \(L_{3}\), hence \(C\) has normal crossing with \(L_{3}\). This implies that \(d=\deg(C\cap L_{3})\) is even, since \((C\cap L_{3})/\operatorname{Aut}(C)\) descends to a divisor of \(\mathbf{L}_{3}\).
Fix \(q\mid 2^{b}n\) a prime power. There are two cases: either \(e\cong 1\pmod{q}\), or \(e\cong-1\pmod{q}\). If \(e\cong 1\pmod{q}\), then a generic line \(L\) containing \((0:0:1)\) satisfies \(L\cap L_{3}\cap C=\emptyset\), is stabilized by \(\operatorname{diag}(\zeta_{q},\zeta_{q},1)\in\operatorname{Aut}(C)\) and the \(\langle\operatorname{diag}(\zeta_{q},\zeta_{q},1)\rangle\)-orbits of \(L\cap C\) have cardinality \(q\). It follows that \(q\mid\deg(L\cap C)=d\).
Assume \(e\cong-1\pmod{q}\) (and \(q\neq 2\), otherwise \(e\cong 1\) too), then \(\{(1:0:0),(0:1:0)\}\) is distinguished and descends to a point of \(\mathbf{L}_{3}\) with residue field of degree \(2\). This point either is in \(\mathbf{C}\) or not, hence \(C\) either contains both \((1:0:0)\) and \((0:1:0)\) or none. If \(C\) does not contain them, the orbits of \(L_{2}\cap C\) have cardinality \(2^{b}n\), hence \(2^{b}n\mid\deg(L_{2}\cap C)=d\). Assume that \(C\) contains both.
If \(q\) is odd, since \(e\cong-1\pmod{q}\) the degrees of the orbits of \(L_{3}\) different from \((1:0:0)\), \((0:1:0)\) are multiples of \(q\), hence \(q\) divides \(\deg(L_{3}\cap C)-2=d-2\). If \(q=2^{b}\) is even, said orbits have degree multiple of \(2^{b-1}\). Furthermore, the fact that \(\mathbf{L}_{3}\) is a non-trivial Brauer-Severi variety implies that there is an even number of orbits in \(C\cap L_{3}\). This implies that \(2^{b}\mid\deg(C\cap L_{3})-2=d-2\). |
2301.10320 | The angular momentum of electron radiation in a uniform magnetic field | We study theoretically by means of quantum electrodynamics the vortex
radiation of a relativistic electron in a uniform magnetic field. The exact
expressions for the probability of emission of a photon with a certain angular
momentum are found. The classical asymptotics $\hbar\to 0$ of this probability
does not match the angular momentum flux density calculated by the classical
method using the symmetrized energy-momentum tensor. Although the flux of
angular momentum integrated over the radiation directions is the same in both
cases. We found the angular momentum flux of the radiation field using the
canonical (not symmetrized) energy-momentum tensor and showed that the flux
obtained in this way coincides with the classical limit for the probability of
photon emission. | Vladimir Epp, Ulyana Guselnikova | 2023-01-20T04:47:25Z | http://arxiv.org/abs/2301.10320v1 | # The angular momentum of electron radiation in a uniform magnetic field
###### Abstract
We study theoretically by means of quantum electrodynamics the vortex radiation of a relativistic electron in a uniform magnetic field. The exact expressions for the probability of emission of a photon with a certain angular momentum are found. The classical asymptotics \(\hbar\to 0\) of this probability does not match the angular momentum flux density calculated by the classical method using the symmetrized energy-momentum tensor, although the flux of angular momentum integrated over the radiation directions is the same in both cases. We found the angular momentum flux of the radiation field using the canonical (not symmetrized) energy-momentum tensor and showed that the flux obtained in this way coincides with the classical limit for the probability of photon emission.
keywords: vortex radiation, angular momentum, electron, synchrotron radiation, Dirac equation. MSC: 78A40, 33E20
## Introduction
Electromagnetic radiation carrying an angular momentum of more than one Planck constant per photon attracts considerable attention of researchers due to the possibility of application in various fields of physics: information transfer, interaction with atoms, high-energy particle collisions and radiation processes. Vortex light beams have opened up a wide range of applications such as spatial optical confinement of atoms or microscopic objects and phase contrast microscopy [1; 2]. The first theoretical and experimental studies of twisted light or vortex radiation were devoted to laser radiation modified by an astigmatic optical system or numerically calculated holograms [3; 4]. These publications stimulated extensive research of the vortex optical beams. A more extensive bibliography on the history in this area can be found in the reviews [1; 5].
Among the most promising sources of vortex radiation in the X-ray and gamma ranges of the spectrum are charged particles moving along a spiral and vortex beams of charged particles [6; 7]. In particular, spiral motion is realized in a uniform magnetic field and in a helical undulator. Various schemes for obtaining a beam of twisted photons in undulators [8; 9; 10] and free electron lasers [11; 12; 13] have been proposed. Analytical calculations have shown that a particle moving in a spiral emits photons carrying an average angular momentum equal to \(n\hbar\) with \(n\) being the harmonic number [14; 15; 16]. These theoretical results have been confirmed by a number of experimental studies of radiation in helical undulators [17; 18; 19].
In the works listed above the angular momentum of the radiation of a particle moving along a spiral was studied within the framework of classical electrodynamics. The quantum aspects of this problem were studied in [20; 21] by use of semiclassical approach.
In the present work, we calculated the probability of radiation of photons with a high angular momentum using the exact solution of the Dirac equation for an electron in a uniform magnetic field. This article is a continuation of our work [15], where the angular momentum of radiation at spiral motion was studied using classical methods.
The article is organized as follows. Section 1 contains the well-known solution of the Dirac equation for an electron in a uniform magnetic field. The main purpose of this section is to recall the wave function of the electron and to introduce the notation that will be used in what follows. In Section 2, we calculate the probability of spontaneous transitions with the emission of a photon in a given direction, carrying an orbital angular momentum. The classical limit (\(\hbar\to 0\)) is discussed in Section 3.
The result obtained here differs from the angular momentum flux density calculated earlier by the classical method. Therefore, in Section 4, we find the canonical angular momentum flux density in the Coulomb gauge of the vector potential by use of the classical method and show that this angular momentum flux is consistent with the quantum calculation. Finally, in the last section, we discuss the obtained results.
## 1 Solution to the Dirac equation
Let an electron with charge \(e=-e_{0}\) and mass \(m\) be in a uniform magnetic field \(\mathbf{H}\). In a cylindrical coordinate system \(r,\varphi,z\) with the \(z\) axis directed along the magnetic field, the vector potential can be given as
\[\mathbf{A}=\frac{1}{2}Hr\mathbf{\hat{\varphi}},\]
where \(\mathbf{\hat{\varphi}}\) is the unit vector corresponding to the \(\varphi\) coordinate. The solution to the Dirac equation in such a field has the form [22; 23; 24]
\[\Psi_{nl\zeta k_{z}}(\mathbf{r},t)=\sqrt{\frac{\gamma}{\pi L}}\exp[i(-Kct+(l-1/2) \varphi+k_{z}z)]\left(\begin{array}{c}c_{1}e^{-i\varphi/2}I_{n-1,s}(\rho)\\ c_{2}e^{i\varphi/2}I_{n,s}(\rho)\\ c_{3}e^{-i\varphi/2}I_{n-1,s}(\rho)\\ c_{4}e^{i\varphi/2}I_{n,s}(\rho)\end{array}\right), \tag{1}\]
where \(n\) and \(l\) are the principal and orbital quantum numbers, respectively, \(n=0,1,2,\ldots\), \(s\) is the radial quantum number: \(s=n-l\). The spin number \(\zeta=1\) if the electron spin is directed along the magnetic field and \(\zeta=-1\) otherwise. The wave function (1) is an eigenfunction of the \(z\)-component of the angular momentum operator \(\widehat{L}_{z}\) and of the \(z\)-component of the linear momentum operator \(\widehat{p}_{z}\)
\[\widehat{L}_{z}\Psi=\left(l-\frac{1}{2}\right)\hbar\Psi,\quad\hat{p}_{z}\Psi=k_{z}\Psi,\]
\(\hbar\) is the Planck constant. The following notations are used in (1)
\[\gamma=\frac{e_{0}H}{2c\hbar},\quad K=\frac{E}{c\hbar},\quad\rho=\gamma r^{2},\]
\(c\) is the speed of light, \(E\) is the particle energy
\[E=c\sqrt{m^{2}c^{2}+\hbar^{2}(k_{z}^{2}+4\gamma n)}, \tag{2}\]
\(L\) is the formal length restricting motion along the \(z\) axis, \(I_{ns}(\rho)\) are the Laguerre functions
\[I_{ns}(\rho)=\sqrt{\frac{s!}{n!}}e^{-\rho/2}\rho^{(n-s)/2}Q_{s}^{n-s}(\rho), \tag{3}\]
and \(Q_{s}^{l}\) are the generalized Laguerre polynomials
\[Q_{s}^{l}=\frac{1}{s!}e^{\rho}\rho^{-l}\frac{\mathrm{d}^{s}}{\mathrm{d}\rho^{s }}(\rho^{s+l}e^{-\rho}).\]
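For numerical experiments it is convenient to note that SciPy's generalized Laguerre polynomials follow the same Rodrigues convention as \(Q_{s}^{l}\). The sketch below (our own helper, with an arbitrary choice of indices) evaluates \(I_{ns}(\rho)\) and checks the standard normalization \(\int_{0}^{\infty}I_{ns}^{2}(\rho)\,\mathrm{d}\rho=1\) that follows from the definition (3):

```python
import numpy as np
from math import factorial, sqrt
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def I_ns(n, s, rho):
    """Laguerre function I_{ns}(rho) of Eq. (3); eval_genlaguerre(s, l, rho) equals Q_s^l(rho)."""
    l = n - s
    return (sqrt(factorial(s) / factorial(n)) * np.exp(-rho / 2)
            * rho ** (l / 2) * eval_genlaguerre(s, l, rho))

val, _ = quad(lambda r: I_ns(7, 4, r) ** 2, 0, np.inf)
print(val)  # ~ 1.0
```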
The coefficients \(c_{i}\) determine the electron spin polarization
\[\begin{pmatrix}c_{1}\\ c_{2}\\ c_{3}\\ c_{4}\end{pmatrix}=\frac{1}{2\sqrt{2}}\begin{pmatrix}B_{+}(A_{+}+A_{-})\\ B_{-}(A_{-}-A_{+})\\ B_{+}(A_{+}-A_{-})\\ B_{-}(A_{+}+A_{-})\end{pmatrix},\]
where
\[A_{+}= \sqrt{1+k_{z}/K}\,,\quad A_{-}=\zeta\sqrt{1-k_{z}/K}\,, \tag{4}\] \[B_{+}= \sqrt{1+\zeta k_{0}/K_{0}}\,,\quad B_{-}=\zeta\sqrt{1-\zeta k_{0}/K _{0}}\,, \tag{5}\]
and
\[K_{0}=\sqrt{K^{2}-k_{z}^{2}}\,,\quad k_{0}=mc/\hbar.\]
## 2 Probability of the transitions \(l\to l^{\prime}\)
Let us find the probability of emission of a photon with a wave vector \(k\) and a \(z\)-component of angular momentum equal to \(L_{z}\). By virtue of the angular momentum conservation law, this is the probability of transition from the state with the quantum number \(l\) to the state with the quantum number \(l^{\prime}=l-L_{z}/\hbar\). We will specify the photon polarization by the basis
\[\mathbf{e}_{\pm}=\frac{1}{\sqrt{2}}(\mathbf{e}_{1}\pm i\mathbf{e}_{2}), \tag{6}\]
where \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) are mutually orthogonal real vectors orthogonal to the wave vector \(k\), so that the vectors listed in the order \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{k}\) follow the right-hand rule. The vector \(\mathbf{e}_{+}\) corresponds to the right-handed polarization, and \(\mathbf{e}_{-}\) to the left-handed one.
The average flux of the \(z\)-component of the photon angular momentum is found as

\[\frac{\mathrm{d}L_{z}}{\mathrm{d}t}=\sum_{m^{\prime}}\hbar(l-l^{\prime})w_{\pm}, \tag{7}\]
where \(m^{\prime}=n^{\prime},l^{\prime},k_{z}^{\prime},\zeta^{\prime}\) are the quantum numbers of the final state, and \(w_{\pm}\) is the probability of transitions from state \(|nlk_{z}\zeta\rangle\) to state \(|n^{\prime}l^{\prime}k_{z}^{\prime}\zeta^{\prime}\rangle\). The transition probability \(w_{\pm}\) was calculated in [23; 24; 25]. Here we briefly repeat the main steps of the calculations.
\[w_{\pm}=\frac{e_{0}^{2}}{2\pi\hbar}\int\frac{\mathrm{d}^{3}k}{k}\delta(K-K^{ \prime}-k)S_{\pm}. \tag{8}\]
The matrix elements \(S_{\pm}\) in the basis (6) take the form
\[S_{\pm}=\frac{1}{2}\left[(\mathbf{n}\times\langle\mathbf{\alpha}\rangle^{*})\cdot(\mathbf{n}\times\langle\mathbf{\alpha}\rangle)\pm i\,\mathbf{n}\cdot(\langle\mathbf{\alpha}\rangle\times\langle\mathbf{\alpha}\rangle^{*})\right]. \tag{9}\]
The asterisk denotes complex conjugate. The vector \(\langle\mathbf{\alpha}\rangle\) represents the matrix elements of the Dirac matrix \(\mathbf{\alpha}\)
\[\langle\mathbf{\alpha}\rangle=\int\Psi^{\dagger}_{n^{\prime}l^{\prime}k_{z}^{\prime}\zeta^{\prime}}e^{-i(\mathbf{k}\mathbf{r})}\mathbf{\alpha}\,\Psi_{nl\zeta k_{z}}\,\mathrm{d}^{3}x. \tag{10}\]
The vector \(\mathbf{n}=\mathbf{k}/k\) is the unit vector in the direction of photon linear momentum. By virtue of axial symmetry, the vector \(\mathbf{n}\) can be specified as \((n_{x},n_{y},n_{z})=(0,\sin\theta,\cos\theta)\) where \(\theta\) is the angle between the \(z\)-axis and vector \(\mathbf{n}\). Integration in (8) over \(d^{3}k\) leads to the expression
\[w_{\pm}=\frac{e_{0}^{2}}{\hbar}\int S_{\pm}\frac{k\sin\theta\,\mathrm{d}\theta}{1-\beta_{\parallel}\cos\theta}, \tag{11}\]
where
\[\beta_{\parallel}=-\frac{\partial K^{\prime}}{\partial k}=\frac{k_{z}-k\cos \theta}{K-k}.\]
In the spherical coordinate system \(r,\theta,\varphi\) the matrix elements (9) are
\[S_{\pm}= \frac{1}{2}\left[|\langle\alpha_{1}\rangle|^{2}+|\langle\alpha_{2} \rangle\cos\theta-\langle\alpha_{3}\rangle\sin\theta|^{2}\mp iS_{\parallel} \right], \tag{12}\] \[S_{\parallel}= \big{(}\langle\alpha_{1}\rangle^{*}\langle\alpha_{2}\rangle- \langle\alpha_{2}\rangle^{*}\langle\alpha_{1}\rangle\big{)}\cos\theta\] \[-\big{(}\langle\alpha_{1}\rangle^{*}\langle\alpha_{3}\rangle- \langle\alpha_{3}\rangle^{*}\langle\alpha_{1}\rangle\big{)}\sin\theta.\]
Integration in (10) over the volume gives
\[\langle\alpha_{1}\rangle =i[(c_{1}^{\prime}c_{4}+c_{3}^{\prime}c_{2})I_{n,n^{\prime}-1}-(c_{1}c_{4}^{\prime}+c_{3}c_{2}^{\prime})I_{n-1,n^{\prime}}]f,\] \[\langle\alpha_{2}\rangle =[(c_{1}^{\prime}c_{4}+c_{3}^{\prime}c_{2})I_{n,n^{\prime}-1}+(c_{1}c_{4}^{\prime}+c_{3}c_{2}^{\prime})I_{n-1,n^{\prime}}]f,\] \[\langle\alpha_{3}\rangle =[(c_{1}^{\prime}c_{3}+c_{1}c_{3}^{\prime})I_{n-1,n^{\prime}-1}-(c_{2}c_{4}^{\prime}+c_{2}^{\prime}c_{4})I_{n,n^{\prime}}]f, \tag{13}\]
where the functions \(I_{n,n^{\prime}}=I_{n,n^{\prime}}(x)\) are functions of the argument \(x\)
\[x=\frac{k^{2}\sin^{2}\theta}{4\gamma},\quad f=I_{ss^{\prime}}(x)\delta_{k_{z}^ {\prime},k_{z}-k\cos\theta}. \tag{14}\]
Finally, substituting (11) into (7), we obtain the rate of radiation of the \(z\)-component of angular momentum
\[\frac{\mathrm{d}L_{z}}{\mathrm{d}t}=e_{0}^{2}\sum_{m^{\prime}}(l-l^{\prime})\int S_{\pm}\frac{k\sin\theta\,\mathrm{d}\theta}{1-\beta_{\parallel}\cos\theta}. \tag{15}\]
This expression differs from the formula for the intensity of radiation [24; 25]
\[W=ce_{0}^{2}\sum_{m^{\prime}}\int S_{\pm}\frac{k^{2}\sin\theta \mathrm{d}\theta}{1-\beta_{\parallel}\cos\theta}\]
by replacing the \(z\)-component of the photon angular momentum \(\hbar(l-l^{\prime})\) by the photon energy \(\hbar\omega\).
The modulus of the wave vector \(k\) in (15) is found as a solution to the system of equations expressing the energy and momentum conservation laws
\[k_{z}-k_{z}^{\prime}-k\cos\theta=0, \tag{16}\] \[K-K^{\prime}-k=0. \tag{17}\]
This gives
\[k=K\frac{1-\beta_{\parallel}\cos\theta}{\sin^{2}\theta}\left(1-\sqrt{1-\frac{ 4\gamma(n-n^{\prime})\sin^{2}\theta}{K^{2}(1-\beta_{\parallel}\cos\theta)^{2 }}}\,\right). \tag{18}\]
Equations (12) - (15) determine the angular momentum flux carried by photons during spontaneous transitions to lower energy levels.
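Equation (18) is easy to cross-check against a direct numerical solution of the system (16)–(17). In the sketch below (units with \(c=\hbar=1\), arbitrary test parameters of our own choosing, and \(\beta_{\parallel}\) understood as \(k_{z}/K\) of the initial state) the closed form and the numerical root agree:

```python
import numpy as np
from scipy.optimize import brentq

k0, gamma, kz, theta = 1.0, 0.003, 0.4, 1.1  # arbitrary test values, c = hbar = 1
n, n1 = 200, 197

K = np.sqrt(k0 ** 2 + kz ** 2 + 4 * gamma * n)
beta_par = kz / K  # assumption: beta_parallel of the initial state

def residual(k):
    """Energy conservation (17) with k_z' eliminated through Eq. (16)."""
    kz1 = kz - k * np.cos(theta)
    return K - np.sqrt(k0 ** 2 + kz1 ** 2 + 4 * gamma * n1) - k

k_num = brentq(residual, 1e-12, K)
D = 1 - beta_par * np.cos(theta)
k_18 = K * D / np.sin(theta) ** 2 * (
    1 - np.sqrt(1 - 4 * gamma * (n - n1) * np.sin(theta) ** 2 / (K * D) ** 2))
print(k_num, k_18)  # the same root
```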
## 3 The classical limit
Next, we find the asymptotic expression for the angular momentum flux (15) in the limit \(\hbar\to 0\) and \(n,n^{\prime}\gg 1\). In this approximation, the wave vector (18) is equal to
\[k\approx\frac{(n-n^{\prime})\omega_{0}}{c(1-\beta_{\parallel}\cos\theta)}, \tag{19}\]
where \(\omega_{0}=e_{0}Hc/E\) is the frequency of electron motion in a circle. It can be seen from (2) that the linear momentum component orthogonal to the \(z\) axis is equal to \(p_{\perp}=2\hbar\sqrt{\gamma n}\). In the classical limit \(p_{\perp}=Ev_{\perp}/c^{2}\), where \(v_{\perp}\) is the component of the velocity orthogonal to the \(z\) axis. Hence, it is convenient to introduce the notation
\[\beta_{\perp}=\frac{v_{\perp}}{c}=\frac{2c\hbar\sqrt{\gamma n}}{E} \tag{20}\]
Substituting (19) and (20) into the argument of the Laguerre functions (14), we obtain
\[x\approx\frac{(n-n^{\prime})^{2}\beta_{\perp}^{2}\sin^{2}\theta}{4n(1-\beta_{\parallel}\cos\theta)^{2}}. \tag{21}\]
In the classical limit, the photon energy is much less than the electron energy, i.e. \(n-n^{\prime}\ll n\). Therefore, for \(n\to\infty\) and fixed \(n-n^{\prime}\) the variable \(x\) decreases as \(1/n\). The asymptotic expression for Laguerre polynomials of the form \(Q_{n}^{l}(z/n)\) for large \(n\) and fixed \(l\) and \(z\) has the form [26, 27]
\[Q_{n}^{l}\left(\frac{z}{n}\right)\approx n^{l}e^{z/2n}z^{-l/2}J_{l}(2\sqrt{z}), \tag{22}\]
where \(J_{l}\) are the Bessel functions. Hence,
\[I_{n,n^{\prime}}\approx J_{n-n^{\prime}}(\xi),\quad\xi=\frac{\nu\beta_{\perp}\sin\theta}{1-\beta_{\parallel}\cos\theta},\quad\nu=n-n^{\prime}. \tag{23}\]
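The rate of convergence in (23) can be probed directly; in this sketch (our own test values, with log-gamma taming the factorials in (3)) \(I_{n,n-\nu}(x)\) is compared with \(J_{\nu}(\xi)\) for \(x\sim 1/n\):

```python
import numpy as np
from math import exp, lgamma
from scipy.special import eval_genlaguerre, jv

n, nu = 500, 3         # arbitrary test values; n' = n - nu
xi = 1.0               # keep xi fixed while n grows
x = xi ** 2 / (4 * n)  # chosen so that 2 * sqrt(n * x) = xi

s, l = n - nu, nu
pref = exp(0.5 * (lgamma(s + 1) - lgamma(n + 1)))  # sqrt(n'!/n!) without overflow
I = pref * np.exp(-x / 2) * x ** (l / 2) * eval_genlaguerre(s, l, x)
print(I, jv(nu, xi))  # the two values approach each other as n grows
```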
This asymptotics does not apply to the Laguerre functions \(I_{s,s^{\prime}}(x)\), since in the latter case \(x\) tends to zero regardless of the indices \(s\) and \(s^{\prime}\). Assuming \(s\) and \(s^{\prime}\) to be fixed and letting \(x\) tend to zero, we obtain for the Laguerre polynomials
\[\lim_{x\to 0}Q_{n}^{l}(x)=\frac{1}{n!}(l+1)(l+2)\ldots(l+n). \tag{24}\]
Substituting this into the Laguerre function, we get
\[\lim_{x\to 0}I_{s,s^{\prime}}(x)=\delta_{s,s^{\prime}}. \tag{25}\]
Thus, we can neglect the change in the quantum number \(s\) in transitions between states with large numbers \(n\). In this case \(n-n^{\prime}=l-l^{\prime}\). This means that the \(z\)-component of the photon angular momentum can only be positive (if the \(z\)-axis is directed along the magnetic field). Substituting the resulting expansions into (12), averaging the coefficients \(c_{i}\) over the initial spin states, and summing up \(c_{i}^{\prime}\) over the final states, we obtain
\[S_{\pm}=\frac{1}{2}\left[\beta_{\perp}J_{\nu}^{\prime}(\xi)\pm\frac{\cos \theta-\beta_{\parallel}}{\sin\theta}J_{\nu}(\xi)\right]^{2}. \tag{26}\]
As a result, the flux of angular momentum in the radiation field (15) takes the form
\[\frac{\mathrm{d}L_{z\pm}}{\mathrm{d}t}=\frac{e_{0}^{2}\omega_{0}}{2c}\sum_{ \nu=1}^{\infty}\nu^{2}\int\limits_{0}^{\pi}\left[\beta_{\perp}J_{\nu}^{\prime} (\xi)\pm\frac{\cos\theta-\beta_{\parallel}}{\sin\theta}J_{\nu}(\xi)\right]^{2 }\frac{\sin\theta\mathrm{d}\theta}{(1-\beta_{\parallel}\cos\theta)^{2}}. \tag{27}\]
This expression was obtained in [20] by use of a semiclassical method.
The integrand in Eq. (27) differs from the flux density of angular momentum obtained by use of classical electrodynamics [15], which reads (for simplicity, we put \(\beta_{\parallel}=0\)):
\[\frac{\mathrm{d}L_{z}}{\mathrm{d}\Omega\mathrm{d}t}=\frac{e_{0}^{2}\omega_{0} \beta\sin\theta}{2\pi c}\sum_{\nu=1}^{\infty}\nu\left\{\xi\left[{J_{\nu}^{ \prime}}^{2}(\xi)+\cot^{2}\theta\,J_{\nu}^{2}(\xi)\right]+J_{\nu}(\xi)J_{\nu} ^{\prime}(\xi)\right\}. \tag{28}\]
The main difference between the Eqs (27) and (28) is that the angular momentum flux (28) in the direction of the \(z\) axis is equal to zero, while the flux (27) is not. According to standard electrodynamics, the flux of the angular momentum component (\(\mathbf{L}\mathbf{n}\)) in the direction of the vector \(\mathbf{n}\) is equal to zero [28; 29]. The angular momentum flux (28) satisfies this condition. But the angular momentum flux of radiation in the direction of the \(z\) axis, obtained by quantum theory, is not equal to zero. In particular, if we put \(\theta=0,\pi\) in Eq. (27), we get
\[\frac{\mathrm{d}L_{z\pm}}{\mathrm{d}t}=\frac{e_{0}^{2}\omega_{0}\beta_{\perp}} {8\pi c(1-\beta_{\parallel})^{2}}\left[1\pm\frac{\cos\theta-\beta_{\parallel} }{1-\beta_{\parallel}\cos\theta}\right]_{\theta=0,\pi}. \tag{29}\]
This means that in the direction of the \(z\) axis only right-handed photons are emitted, and only left-handed ones are emitted in the opposite direction.
More detailed information can be obtained from the exact quantum equations (12) - (15). For \(\theta=0,\pi\) the argument of the Laguerre functions (14) vanishes and \(I_{ss^{\prime}}(0)=\delta_{ss^{\prime}}\). Provided that \(n^{\prime}<n\), the matrix elements (13) take the form
\[\langle\alpha_{1}\rangle =-i(c_{1}c_{4}^{\prime}+c_{3}c_{2}^{\prime})\delta_{n-1,n^{\prime }}\delta_{s,s^{\prime}}\delta_{k^{\prime}_{z},k_{z}-k\cos\theta}, \tag{30}\] \[\langle\alpha_{2}\rangle =i\langle\alpha_{1}\rangle,\quad\langle\alpha_{3}\rangle=0. \tag{31}\]
As we can see, the matrix elements are nonzero only for \(n^{\prime}=n-1\) and \(s^{\prime}=s\) and, consequently, for \(l^{\prime}=l-1\). Thus, in the \(\theta=0,\pi\) directions, photons are emitted only at transitions \(|n,l,\zeta\rangle\rightarrow|n-1,l-1,\zeta^{\prime}\rangle\). The decrease of the quantum number \(l\) by one means that the angular momentum of these photons is \(\hbar\), and the projection on the direction of radiation is either \(\hbar\) at \(\theta=0\), or \(-\hbar\) at \(\theta=\pi\). Hence, the angular momentum of photons emitted in the direction of the \(z\) axis is represented only by spin. Their orbital momentum is zero. This is, in fact, consistent with the classical representation (27), but not with (28). Note again that the results of integration over the angles of the Eqs (27) and (28) are the same. The reasons for this discrepancy are discussed in the next section.
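The last claim is straightforward to verify numerically. The sketch below (our own check, with \(\beta_{\parallel}=0\), an arbitrary \(\beta\), a harmonic cutoff of our choosing, and the common factor \(e_{0}^{2}\omega_{0}/c\) dropped) compares Eq. (27) summed over the two polarizations with Eq. (28) integrated over the solid angle:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jvp

beta, N = 0.5, 40  # arbitrary speed and harmonic cutoff

def xi(nu, t):
    return nu * beta * np.sin(t)

# Eq. (27) with beta_parallel = 0, summed over the two polarizations
lhs = sum(
    0.5 * nu ** 2 * quad(
        lambda t, s=s, nu=nu: np.sin(t) * (
            beta * jvp(nu, xi(nu, t))
            + s * np.cos(t) / np.sin(t) * jv(nu, xi(nu, t))) ** 2,
        0, np.pi)[0]
    for nu in range(1, N + 1) for s in (1, -1))

# Eq. (28) integrated over the solid angle (the 2*pi from d(phi) cancels the 1/(2*pi))
def dens(t, nu):
    x = xi(nu, t)
    cot = np.cos(t) / np.sin(t)
    return np.sin(t) ** 2 * (x * (jvp(nu, x) ** 2 + cot ** 2 * jv(nu, x) ** 2)
                             + jv(nu, x) * jvp(nu, x))

rhs = beta * sum(nu * quad(lambda t, nu=nu: dens(t, nu), 0, np.pi)[0]
                 for nu in range(1, N + 1))
print(lhs, rhs)  # the two totals agree
```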
## 4 Canonical angular momentum
The disagreement between the quantum and classical results obtained in the previous section is due to the fact that the angular momentum density in classical electrodynamics is defined ambiguously. The straightforward application of the Noether theorem to the electromagnetic field yields the proper integral conservation laws. In this case, the integrands can be transformed in such a way that the integral over the entire space does not change. In particular, the symmetrization of the energy-momentum tensor of the electromagnetic field is based on this [30; 31].
When calculating the angular momentum flux in our paper [15], we proceeded from the symmetrized energy-momentum tensor, which leads to the well-known expression for the angular momentum flux in the wave zone [28; 32]
\[\frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}=\frac{r^{2}}{c}\int(\mathbf{r}\times\mathbf{P}) \mathrm{d}\Omega, \tag{32}\]
where
\[\mathbf{P}=\frac{c}{4\pi}(\mathbf{E}\times\mathbf{H})\]
is the Poynting vector, and \(\mathbf{E}\) and \(\mathbf{H}\) are the electric and magnetic fields respectively.
Meanwhile, an immediate consequence of Noether's theorem is the conservation of the nonsymmetric canonical energy-momentum tensor [28; 31]
\[T^{\mu\nu}=-\frac{1}{4\pi}g^{\mu\alpha}F_{\alpha\rho}\partial^{\nu}A^{\rho}+ \frac{1}{16\pi}g^{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}\]
The corresponding density of the canonical angular momentum is represented by the third rank tensor [33]
\[M^{\alpha\beta\gamma}=\left(x^{\alpha}T^{\beta\gamma}-x^{\beta}T^{\alpha \gamma}\right)+\frac{1}{4\pi}\left(-A^{\alpha}F^{\beta\gamma}+A^{\beta}F^{ \alpha\gamma}\right).\]
In this case, the angular momentum density of the electromagnetic field is given by the component \(M^{0\beta\gamma}\) or by the vector \(M_{i}=\varepsilon_{ijk}M^{0jk}\), where \(j,k=1,2,3\), and \(\varepsilon_{ijk}\) is the unit antisymmetric tensor of the third rank. Accordingly, the angular momentum flux density in the wave zone is equal to \(c\mathbf{M}\), and the total flux is determined by the integral
\[\frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}=cr^{2}\int\mathbf{M}\,\mathrm{d}\Omega. \tag{33}\]
The vector \(\mathbf{M}\) in the Coulomb gauge \(\mathbf{\nabla}\mathbf{A}=0\) can be represented as the sum [34; 35; 28]
\[\mathbf{M}=\mathbf{\mathcal{L}}+\mathbf{\mathcal{S}}, \tag{34}\]
where \(\mathbf{\mathcal{L}}\) and \(\mathbf{\mathcal{S}}\) are the "orbital" and "spin" parts respectively
\[\mathbf{\mathcal{L}}=\frac{1}{4\pi c}E_{i}(\mathbf{r}\times\mathbf{\nabla})A_{i},\quad\bm {\mathcal{S}}=\frac{1}{4\pi c}(\mathbf{E}\times\mathbf{A}). \tag{35}\]
The result of integration over the solid angle in Eqs (32) and (33) is the same; however, the integrands differ significantly. Indeed, let us calculate the angular momentum flux due to (33), using the expressions for the Fourier components of the electric field and vector potential obtained in [15]
\[\begin{split} A_{\nu r}&=\frac{B}{r}J_{\nu}(\xi),\quad A_{\nu\theta}=\frac{B}{r}\cot\theta J_{\nu}(\xi),\\ A_{\nu\varphi}&=i\frac{B\beta}{r}J_{\nu}^{\prime}(\xi),\quad B=e\,e^{i(kr+\nu\varphi-\nu\pi/2)},\\ E_{\nu r}&=0,\quad E_{\nu\theta}=\frac{iB\nu\omega\cot\theta}{rc}J_{\nu}(\xi),\quad E_{\nu\varphi}=-\frac{B\nu\omega\beta}{rc}J_{\nu}^{\prime}(\xi).\end{split} \tag{36}\]
When using the vector potential, it suffices to restrict ourselves to quantities of the first order in powers of \(1/r\). In the Coulomb gauge one should put \(A_{\nu r}=0\). Let us substitute these values into (35). In order to find the time-averaged values for the \(\nu\)-th harmonic, we need to make the substitution \(\mathbf{E}\to\mathbf{E}_{\nu}^{*}\), \(\mathbf{A}\to\mathbf{A}_{\nu}\). As expected, the vector \(\mathbf{\mathcal{L}}_{\nu}\) has only a transverse component (\(\mathbf{\mathcal{L}}_{\nu}\cdot\mathbf{r}=0\)), while the vector \(\mathbf{\mathcal{S}}_{\nu}\) is directed radially (\(\mathbf{\hat{r}}=\mathbf{r}/r\)):
\[\mathbf{\mathcal{S}}_{\nu}=\frac{e_{0}^{2}\nu\omega_{0}\beta}{2\pi r^{2}c^{2}} \cot\theta J_{\nu}^{\prime}(\xi)J_{\nu}(\xi)\mathbf{\hat{r}}.\]
The \(z\)-component of the angular momentum flux is equal to
\[\mathcal{L}_{\nu z} =\frac{e_{0}^{2}\nu^{2}\omega_{0}}{4\pi r^{2}c^{2}}\left[\beta^{2 }{J^{\prime}}_{\nu}^{2}(\xi)+\cot^{2}\theta J_{\nu}^{2}(\xi)-\frac{2\beta}{\nu }\frac{\cos^{2}\theta}{\sin\theta}J_{\nu}^{\prime}(\xi)J_{\nu}(\xi)\right], \tag{37}\] \[\mathcal{S}_{\nu z} =\frac{e_{0}^{2}\nu\omega_{0}\beta}{2\pi r^{2}c^{2}}\frac{\cos^{ 2}\theta}{\sin\theta}J_{\nu}^{\prime}(\xi)J_{\nu}(\xi) \tag{38}\]
Substituting these expressions into (33) and (34), we obtain the total flux of the \(z\)-component of angular momentum
\[\frac{\mathrm{d}L_{z}}{\mathrm{d}t}=\frac{e_{0}^{2}\omega_{0}}{2c}\sum_{\nu=1 }^{\infty}\nu^{2}\int\limits_{0}^{\pi}\left[\beta^{2}{J^{\prime}}_{\nu}^{2}( \xi)+\cot^{2}\theta J_{\nu}^{2}(\xi)\right]\sin\theta\mathrm{d}\theta \tag{39}\]
in full accordance with Eq. (27) obtained from the probability of photon emission in the limit \(\hbar\to 0,\,\beta_{\parallel}=0\).
In the direction \(\theta=0\), the \(z\)-component of the "orbital" angular momentum is equal to zero, and in the \(z\)-component of the "spin" angular momentum, only the first harmonic remains
\[\mathbf{\mathcal{S}}_{\nu}\Big{|}_{\theta=0}=\frac{e_{0}^{2}\omega_{0}\beta^{2}}{8 \pi r^{2}c^{2}}\delta_{1\nu}\mathbf{\hat{r}}.\]
In this case, the average angular momentum per photon is equal to \(\hbar\). In the direction \(\theta=\pi/2\), which lies in the plane of the electron orbit, on the contrary, the "spin" angular momentum \(\mathcal{S}_{\nu z}\) is equal to zero, and
the "orbital" angular momentum \(\mathcal{L}_{\nu z}\) takes on a maximum value. The absence of "spin" angular momentum is due to the fact that in the classical limit the probability of emission of left-handed photons is equal to the probability of emission of right-handed ones in this direction. In this case, the "orbital" angular momentum is associated with the momentum transfer of the electromagnetic field.
## 5 Conclusion
Using the solution to the Dirac equation for an electron in a uniform magnetic field, we have found exact expressions for the probability of emission of a photon with a certain projection of its angular momentum onto the direction of the magnetic field. We have also found the asymptotic expression for the angular momentum flux density in radiation field in the limit \(\hbar\to 0\). The expressions obtained in this way do not coincide with the previously found equations for the angular momentum flux density derived from the symmetrized energy-momentum tensor of the electromagnetic field. This discrepancy is explained by the fact that the density of the angular momentum of the field and, accordingly, the flux of the angular momentum are determined ambiguously. For comparison, we found the angular momentum flux in radiation using the canonical (not symmetrized) energy-momentum tensor and showed that the formulas obtained in this way coincide with the classical limit of equations for the probability of photon emission. In this case, the flux of angular momentum integrated over the radiation directions following from the canonical and symmetrized energy-momentum tensor, as expected, coincide.
The question of the extent to which the canonical or symmetrized energy-momentum tensor is applicable to real physical measurements is traditionally the subject of hot discussions [9; 34; 36; 37; 38; 39]. This problem is closely related to the determination of the angular momentum density of the field and to the separation of spin and orbital degrees of freedom. Obviously, the applicability of certain formulas to specific physical measurements is determined by the specific conditions of the light-matter interactions. Currently, various methods for registering the orbital angular momentum of radiation are being developed [40; 41; 42]. Moreover, there are indications of the possibility of independent measurement of the spin and orbital angular momentum [43; 44; 45].
## Acknowledgments
This research was supported by Russian Foundation for Basic Research, project No. 19-42-700011.
|
2310.06203 | Graphs with three and four distinct eigenvalues based on circulants | In this paper, we aim to address the open questions raised in various recent
papers regarding characterization of circulant graphs with three or four
distinct eigenvalues in their spectra. Our focus is on providing
characterizations and constructing classes of graphs falling under this
specific category. We present a characterization of circulant graphs with prime
number order and unitary Cayley graphs with arbitrary order, both of which
possess spectra displaying three or four distinct eigenvalues. Various
constructions of circulant graphs with composite orders are provided whose
spectra consist of four distinct eigenvalues. These constructions primarily
utilize specific subgraphs of circulant graphs that already possess two or
three eigenvalues in their spectra, employing graph operations like the tensor
product, the union, and the complement. Finally, we characterize the iterated
line graphs of unitary Cayley graphs whose spectra contain three or four
distinct eigenvalues, and we show their non-circulant nature. | Milan Bašić | 2023-10-09T23:06:23Z | http://arxiv.org/abs/2310.06203v1 | # Graphs with three and four distinct eigenvalues based on circulants
###### Abstract
In this paper, we aim to address the open questions raised in various recent papers regarding characterization of circulant graphs with three or four distinct eigenvalues in their spectra. Our focus is on providing characterizations and constructing classes of graphs falling under this specific category. We present a characterization of circulant graphs with prime number order and unitary Cayley graphs with arbitrary order, both of which possess spectra displaying three or four distinct eigenvalues. Various constructions of circulant graphs with composite orders are provided whose spectra consist of four distinct eigenvalues. These constructions primarily utilize specific subgraphs of circulant graphs that already possess two or three eigenvalues in their spectra, employing graph operations like the tensor product, the union, and the complement. Finally, we characterize the iterated line graphs of unitary Cayley graphs whose spectra contain three or four distinct eigenvalues, and we show their non-circulant nature.
**Keywords:** Circulant graphs; Spectra; Graph operations; Ramanujan functions; Power residues.
**AMS Classification:** 05C50, 05E30, 11A07, 11A15, 11A25.
## 1 Introduction
Circulant graphs are Cayley graphs over a cyclic group. A graph is called integral if all the eigenvalues of its adjacency matrix are integers. The adjacency matrix of a circulant graph is a circulant matrix (a special kind of Toeplitz matrix where each row vector is rotated one element to the right relative to the preceding row vector). Integral graphs are extensively studied in the literature and there has been a vast amount of research on various classes of graphs with integral spectrum. Interest in circulant graphs in graph theory and applications has grown during the last two decades. They appear in coding theory, VLSI design, Ramsey theory and other areas. Since they possess many interesting properties (such as vertex transitivity and mirror symmetry), circulants are applied in quantum information transmission and proposed as models for quantum spin networks that permit the quantum phenomenon called perfect state transfer [1, 2, 14]. In the quantum communication scenario, the important feature of this kind of quantum graphs (especially those with integral spectrum) is the ability to faithfully transfer quantum states without modifying the network topology.
Graphs with a high multiplicity of eigenvalues in their adjacency matrix spectrum, indicating a limited number of distinct eigenvalues, are commonly acknowledged to possess a distinctive structure. In such cases, specific graph operations can often provide effective representations for
these graphs. Particularly, graphs with coinciding eigenvalues are considered trivial, whereas connected graphs with two distinct eigenvalues are known as complete graphs, and connected regular graphs with precisely three distinct eigenvalues are classified as strongly regular. Non-regular graphs that possess three distinct eigenvalues have received relatively less attention in the existing literature. This is primarily because their appealing combinatorial properties tend to diminish when the graph loses its regularity, leading to increased complexity. A similar observation can be made for graphs that are regular but have four distinct eigenvalues [10, 11]. Finding a characterization of connected regular graphs with four distinct eigenvalues is an extremely challenging problem. Even when we focus on particular types of graphs, like circulant graphs, the topic continues to be attractive.
This paper focuses on investigating spectral characteristics of circulant graphs, a topic that has been explored in several recent papers [3, 6, 7, 23, 22]. Additionally, through examination of the properties and characterization of circulant graphs (both integral and non-integral) possessing three or four distinct eigenvalues, we can make a valuable contribution to the spectral theory of both classes of graphs. Furthermore, by employing graph theoretical operations, such as line operations performed on a class of circulant graphs, one can derive classes of non-circulant integral graphs that exhibit spectra comprising three or four distinct eigenvalues. Starting from certain classes of circulant graphs, we also utilize particular constructions involving graph operations like tensor product, union, and complement to discover new classes of circulant graphs that possess four eigenvalues in their spectra. Our motivation for this approach stems from [9], in which construction of certain classes of strongly regular graphs and symmetric block-designs is presented. The constructions begin with small and simple graphs \(K_{2}\) and \(K_{4}\) and involve performing NEPS operations on them. The paper [10] provides a deeper examination of connected regular graphs with four distinct eigenvalues, presenting some properties, constructions, and examples. Our primary source of motivation arises from [6], which introduces highly specific classes of integral circulant graphs whose spectra exhibit four distinct eigenvalues. These graphs possess a prime power order or an order precisely equal to the product of three distinct primes with a specific divisor set. Additionally, a part of the study conducted in this paper focuses on characterizing strongly regular circulant graphs of prime order. This line of research was initially introduced in [7] and subsequently expanded upon in [3] for integral circulant graphs. In the cited papers [6, 3, 7], there are open questions concerning the characterization of circulant graphs with three or four distinct eigenvalues in their spectra.
The paper is organized as follows. We begin with a preliminary section that introduces the necessary notation and concepts related to circulant graphs, graph operations, and number-theoretical tools. In Section 3 (specifically subsection 3.1), it is observed that strongly regular integral circulant graphs can be expressed as a tensor product of two simpler graphs. This observation helps us in identifying classes of circulant and non-circulant graphs in Section 4, where a similar method is employed, resulting in spectra with four distinct eigenvalues. We can deduce from the tensor product representation of strongly regular integral circulant graphs that their order must be composite. Hence, in the remaining part of subsection 3.1 (more precisely in Theorem 3.3), we provide a comprehensive characterization of all strongly regular circulant graphs of prime order. By characterizing all iterated line graphs of unitary Cayley graphs whose spectra contain three distinct eigenvalues, we obtain a class of strongly regular graphs that are not circulant, as presented in subsection 3.2. We initiate subsection 4.1 by presenting constructions of two classes of integral graphs whose spectra contain four distinct eigenvalues, achieved through a tensor product of a certain number of complete graphs and complete graphs with loops attached to each vertex (Theorem 4.2 and Theorem 4.3). An example of the class of circulant graphs stated in Theorem 4.3 is provided in [6]; this example pertains to graphs whose order equals precisely the product of three distinct primes, with a specific set of divisors. Moreover, Theorem 4.4 introduces a class of non-integral circulant graphs whose spectra contain four distinct eigenvalues, obtained by a construction relying on the tensor product of a strongly regular circulant graph of prime order and the complete graph with loops attached to every vertex. Furthermore, Theorems 4.5 and 4.6 present two classes of integral circulant graphs that possess spectra with four distinct eigenvalues. These graphs are obtained through various graph operations, including the union and the complement, primarily derived from specific subgraphs of strongly regular circulant graphs and complete graphs of certain orders. The instance of a graph class derived in Theorem 4.5, as presented in [6], corresponds to graphs whose orders are powers of primes, with specific divisor sets. In Theorem 4.7, at the conclusion of subsection 4.1, we fully characterize circulant graphs of prime order which possess four distinct eigenvalues in their spectra, whereas all other aforementioned graph classes with four distinct eigenvalues have composite order. Finally, in subsection 4.2, we characterize all iterated line graphs of unitary Cayley graphs whose spectra contain four distinct eigenvalues and prove that they are not circulant graphs.
The proofs in this context typically require comprehensive discussion and rely upon the interplay among (spectral) graph theory, number theory, and polynomial theory. Nevertheless, certain proofs, particularly those pertaining to the characterization of circulant graphs of prime order with three or four eigenvalues, rely exclusively on number theory, employing tools such as power residues, residue systems, and reciprocity.
## 2 Preliminaries
A _circulant graph_\(G(n;S)\) is a graph on vertices \(\mathbb{Z}_{n}=\{0,1,\ldots,n-1\}\) such that vertices \(i\) and \(j\) are adjacent if and only if \(i-j\equiv s\pmod{n}\) for some \(s\in S\). Such a set \(S\) is called the _symbol_ of graph \(G(n;S)\). As we will consider undirected graphs without loops, we assume that \(S=n-S=\{n-s\ |\ s\in S\}\) and \(0\not\in S\). Note that the degree of the graph \(G(n;S)\) is \(|S|\). The eigenvalues and eigenvectors of \(G(n;S)\) are given by
\[\lambda_{j}=\sum_{s\in S}\omega_{n}^{js},\quad v_{j}=[1\ \omega_{n}^{j}\ \omega_{n}^{2j}\cdots\omega_{n}^{(n-1)j}]^{T}, \tag{1}\]
where \(\omega_{n}=e^{i\frac{2\pi}{n}}\) is the \(n\)-th root of unity [12].
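Formula (1) is easy to check numerically. The following minimal sketch (assuming Python with numpy; the parameters \(n=8\) and \(S=\{1,4,7\}\) are arbitrary, chosen so that \(S=n-S\) and \(0\notin S\)) compares the analytic eigenvalues with those of the circulant adjacency matrix.

```python
import numpy as np

n, S = 8, {1, 4, 7}                       # S = n - S and 0 not in S
# adjacency matrix of G(n; S): i ~ j iff (i - j) mod n lies in S
A = np.array([[1 if (i - j) % n in S else 0 for j in range(n)]
              for i in range(n)])

omega = np.exp(2j * np.pi / n)            # n-th root of unity
analytic = sorted(round(sum(omega ** (j * s) for s in S).real, 6)
                  for j in range(n))      # lambda_j from formula (1)
numeric = sorted(round(x, 6) for x in np.linalg.eigvalsh(A))
assert analytic == numeric
print(analytic)
```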
Circulant graphs are a subclass of the wider class of Cayley graphs. Let \(\Gamma\) be a multiplicative group with identity \(e\). For \(S\subset\Gamma\), \(e\not\in S\) and \(S^{-1}=\{s^{-1}\ |\ s\in S\}=S\), the Cayley graph \(X=Cay(\Gamma,S)\) is the undirected graph having vertex set \(V(X)=\Gamma\) and edge set \(E(X)=\{\{a,b\}\ |\ ab^{-1}\in S\}\). It is not hard to see that a graph is circulant if and only if it is a Cayley graph on some cyclic group, i.e. its adjacency matrix is circulant.
A graph is _integral_ if all its eigenvalues are integers. A circulant graph \(G(n;S)\) is integral if and only if
\[S=\bigcup_{d\in D}G_{n}(d),\]
for some set of divisors \(D\subseteq D_{n}\)[24]. Here, \(G_{n}(d)\) denotes the set of integers \(k\) satisfying \(\gcd(k,n)=d\) and \(1\leq k\leq n-1\). The set \(D_{n}\) encompasses all divisors of \(n\) excluding \(n\) itself. Hence, an _integral circulant graph_ is characterized by its order \(n\) and the set of divisors \(D\). We denote an integral circulant graph with \(n\) vertices and the set of divisors \(D\subseteq D_{n}\) as ICG\({}_{n}(D)\). If \(D=\{1\}\), an integral circulant graph of order \(n\) is denoted by \(X_{n}\) and referred to as a unitary Cayley graph, as described in [20].
From the above characterization of integral circulant graphs we have that the degree of an integral circulant graph is \(\deg\mathrm{ICG}_{n}(D)=\sum_{d\in D}\varphi(n/d),\) where \(\varphi(n)\) denotes the Euler phi function [17]. If \(D=\{d_{1},\ldots,d_{k}\}\), it can be seen that \(\mathrm{ICG}_{n}(D)\) is connected if and only if \(\gcd(d_{1},\ldots,d_{k})=1\), given that \(G(n;S)\) is connected if and only if \(\gcd(n,S)=1\), see [19]. Moreover, the following lemma holds.
**Lemma 2.1**: _If \(d_{1},d_{2},\ldots,d_{k}\) are divisors of \(n\) such that the greatest common divisor \(\gcd(d_{1},d_{2},\ldots,d_{k})\) equals \(d\), then the graph \(\mathrm{ICG}_{n}(d_{1},d_{2},\ldots,d_{k})\) has exactly \(d\) connected components isomorphic to \(\mathrm{ICG}_{n/d}(\frac{d_{1}}{d},\ldots,\frac{d_{k}}{d})\)._
Throughout the paper, we let \(p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots p_{k}^{\alpha_{k}}\) be the prime factorization of \(n\).
Let us define \(c(j,n)\) as follows
\[c(j,n)=\mu(t_{n,j})\frac{\varphi(n)}{\varphi(t_{n,j})},\quad t_{n,j}=\frac{n}{ \gcd(n,j)}, \tag{2}\]
where \(\mu\) denotes the Möbius function defined as
\[\mu(n) = \left\{\begin{array}{rl}1,&\mbox{if $n=1$}\\ 0,&\mbox{if $n$ is not square-free}\\ (-1)^{k},&\mbox{if $n$ is product of $k$ distinct prime numbers.}\end{array}\right. \tag{3}\]
The expression \(c(j,n)\) is known as the _Ramanujan function_ ([17, p. 309]). The spectrum \((\lambda_{0},\ldots,\lambda_{n-1})\) of \(\mathrm{ICG}_{n}(D)\) can be expressed in terms of the Ramanujan function (see [24]) as follows
\[\lambda_{j}=\sum_{d\in D}c(j,\frac{n}{d}). \tag{4}\]
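A short sketch in Python (self-contained; the instance \(n=12\), \(D=\{1,4\}\) is arbitrary) evaluates the Ramanujan function (2) via (3) and the spectrum (4) of \(\mathrm{ICG}_{n}(D)\); all entries come out as integers, in line with the characterization from [24].

```python
from math import gcd

def factor(n):                            # prime factorization by trial division
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu(n):                                # Moebius function (3)
    f = factor(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def phi(n):                               # Euler phi function
    r = n
    for p in factor(n):
        r -= r // p
    return r

def c(j, n):                              # Ramanujan function (2)
    t = n // gcd(n, j)                    # gcd(n, 0) = n, so c(0, n) = phi(n)
    return mu(t) * phi(n) // phi(t)       # phi(t) divides phi(n) since t | n

n, D = 12, {1, 4}
print([sum(c(j, n // d) for d in D) for j in range(n)])
```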
Let us observe the following properties of the Ramanujan function. These basic and useful properties will be extensively used throughout the remainder of the paper.
**Proposition 2.2**: _For any positive integers \(n\), \(j\), \(d\) and prime \(p\) such that \(d\mid n\) and \(p\mid n\), the following are satisfied_
\[c(0,n) = \varphi(n), \tag{5}\] \[c(1,n) = \mu(n), \tag{6}\] \[c(2,n) = \left\{\begin{array}{rl}\mu(n),&n\in 2\mathbb{N}+1\\ \mu(\frac{n}{2}),&n\in 4\mathbb{N}+2\\ 2\mu(\frac{n}{2}),&n\in 4\mathbb{N}\end{array}\right. \tag{7}\] \[c(\frac{n}{p},\frac{n}{d}) = \left\{\begin{array}{rl}\varphi(\frac{n}{d}),&p\mid d\\ -\frac{\varphi(\frac{n}{d})}{p-1},&p\nmid d\end{array}\right.. \tag{8}\]
In the following sections, we will utilize additional tools from number theory, specifically from the theory of quadratic and cubic residues, to prove certain theorems. For a given prime number \(p\) and an integer \(a\), the _Legendre symbol_ is defined as follows
\[\left(\frac{a}{p}\right) = \left\{\begin{array}{rl}1,&\mbox{if $p\nmid a$ and a is quadratic residue modulo $p$;}\\ -1,&\mbox{if $p\nmid a$ and a is quadratic non-residue modulo $p$;}\\ 0,&\mbox{if $p\mid a$.}\end{array}\right.\]
Euler's criterion and the multiplicativity of the Legendre symbol will be two crucial facts that will be frequently used. These two properties can be formulated as follows

\[\big{(}\frac{a}{p}\big{)}\equiv_{p}a^{\frac{p-1}{2}}\ \mbox{(Euler's criterion)},\] \[\big{(}\frac{ab}{p}\big{)}=\big{(}\frac{a}{p}\big{)}\big{(}\frac{b}{p}\big{)},\ \mbox{for integers $a$, $b$ and prime $p>2$}.\]
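These properties are easy to test computationally; a small Python sketch (with \(p=13\) as an arbitrary odd prime) evaluates the Legendre symbol through Euler's criterion.

```python
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)           # Euler's criterion
    return -1 if r == p - 1 else r        # r is 0 or 1 otherwise

p = 13
residues = sorted({(x * x) % p for x in range(1, p)})
assert all(legendre(a, p) == 1 for a in residues)
assert all(legendre(a, p) * legendre(b, p) == legendre(a * b, p)
           for a in range(1, p) for b in range(1, p))   # multiplicativity
print(residues)                           # quadratic residues modulo 13
```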
The following theorem represents a generalization of Euler's criterion and will be used in the theory of cubic residues. For more about the theory of quadratic and cubic residues one may refer to [13, 17].
**Theorem 2.3**: _Let \(p\nmid a\). Then \(x^{k}\equiv_{p}a\) has a solution if and only if \(a^{\frac{p-1}{d}}\equiv_{p}1\), where \(d=\gcd(k,p-1)\). If the congruence has a solution, then it actually has \(d\) incongruent solutions modulo \(p\)._
For a given prime number \(p\) and an integer \(a\), the _rational cubic residue symbol_ is defined as follows:

\[\big{[}\frac{a}{p}\big{]}_{3} = \left\{\begin{array}{rl}1,&\mbox{if $p\nmid a$ and $a$ is a cubic residue modulo $p$;}\\ -1,&\mbox{if $p\nmid a$ and $a$ is a cubic non-residue modulo $p$;}\\ 0,&\mbox{if $p\mid a$.}\end{array}\right.\]
Let us recall that the spectral radius of a connected \(r\)-regular graph \(X\) is equal to the regularity \(r\) and that it is a simple eigenvalue of \(X\). According to (5), in the case of an integral circulant graph with the spectrum \((\lambda_{0},\ldots,\lambda_{n-1})\) given by (4), \(\lambda_{0}\) is equal to the regularity of the graph.
The well-known characterization of strongly regular graphs will be used in the paper.
**Lemma 2.4**: _[_15_]_ _A connected regular graph is strongly regular if and only if it has exactly three distinct eigenvalues._
If we denote the eigenvalues of a strongly regular graph as \(r\) (representing the regularity of the graph), \(\theta\) (with multiplicity \(m_{\theta}\)), and \(\tau\) (with multiplicity \(m_{\tau}\)), then the following equalities can be observed (referring to equation (10.2) in [15])
\[\theta+\tau = a-c,\qquad\theta\tau=c-r \tag{9}\] \[m_{\theta} = -\frac{(n-1)\tau+r}{\theta-\tau},\quad m_{\tau}=\frac{(n-1) \theta+r}{\theta-\tau}. \tag{10}\]
A _block-design_ is a collection \(\Lambda\) of \(b\) subsets (blocks) of a set of points \(S=\{x_{1},x_{2},\ldots,x_{v}\}\) satisfying the following conditions: each block contains \(k\) points, and every pair of points from \(S\) appears in exactly \(\lambda\) blocks.
A block-design is called _symmetric_ when the number of blocks is equal to the number of points, i.e., \(b=v\). Consequently, we represent a symmetric block-design using the triple of parameters \((v,k,\lambda)\).
The _incidence graph_\(X\) of a block-design is the graph with vertex set \(S\cup\Lambda\), where two vertices \(x\in S\) and \(B\in\Lambda\) are adjacent if \(x\in B\). The incidence graph of a block-design is a bipartite graph with four distinct eigenvalues.
If \(G\) is a graph, then the _line graph_\(L(G)\) of \(G\) is constructed by considering the edges of \(G\) as vertices in \(L(G)\), any two of them being adjacent if the corresponding edges of \(G\) have a vertex of \(G\) in common.
Finally, we give a definition of the _tensor product_ of two graphs. The tensor product \(G\otimes H\) of graphs \(G\) and \(H\) is the graph whose vertex set is the Cartesian product \(V(G)\times V(H)\), in which two vertices \((u,u^{\prime})\) and \((v,v^{\prime})\) are adjacent if and only if \(u\) is adjacent to \(v\) in \(G\) and \(u^{\prime}\) is adjacent to \(v^{\prime}\) in \(H\).
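Since the adjacency matrix of \(G\otimes H\) is the Kronecker product of the adjacency matrices of \(G\) and \(H\), the eigenvalues of \(G\otimes H\) are exactly the products \(\lambda_{i}\mu_{j}\); the sketch below (assuming numpy, with \(G=K_{3}\) and \(H=C_{4}\) as arbitrary examples) verifies this numerically.

```python
import numpy as np

K3 = np.ones((3, 3)) - np.eye(3)                    # complete graph K_3
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
               [0, 1, 0, 1], [1, 0, 1, 0]])         # cycle C_4
products = sorted(round(lam * mu, 6)
                  for lam in np.linalg.eigvalsh(K3)
                  for mu in np.linalg.eigvalsh(C4))  # all lambda_i * mu_j
direct = sorted(round(x, 6) for x in np.linalg.eigvalsh(np.kron(K3, C4)))
assert products == direct
```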
## 3 Graph matrices with three distinct eigenvalues
### Construction of strongly regular graphs using tensor product
We find strongly regular circulant graphs by starting from graphs with a small number of distinct eigenvalues and performing graph operations such as the tensor product. It is well known that if \(\lambda_{1},\ldots,\lambda_{n}\) are the eigenvalues of the adjacency matrix of a graph \(G\) and \(\mu_{1},\ldots,\mu_{m}\) are the eigenvalues of the adjacency matrix of a graph \(H\), then the eigenvalues of the adjacency matrix of the tensor product \(G\otimes H\) are \(\lambda_{i}\cdot\mu_{j}\) for \(1\leq i\leq n\) and \(1\leq j\leq m\). Therefore, for some composite number \(n\) and an arbitrary divisor \(d\mid n\), \(1<d<n\), we can start with the complete graphs \(K_{d}\) and \(K_{\frac{n}{d}}^{*}\), where every vertex of \(K_{\frac{n}{d}}^{*}\) has a loop, and perform the tensor operation in the following way
\[\left.\begin{array}{l}Sp(K_{d})=\{d-1^{(1)},-1^{(d-1)}\}\\ Sp(K_{\frac{n}{d}}^{*})=\{\frac{n}{d}^{(1)},0^{(\frac{n}{d}-1)}\}\end{array} \right\}\Rightarrow\;Sp(K_{d}\otimes K_{\frac{n}{d}}^{*})=\{(d-1)\frac{n}{d} ^{(1)},0^{(n-d)},-\frac{n}{d}^{(d-1)}\}. \tag{11}\]
The notation \(Sp(G)\) denotes the spectrum of graph \(G\), which encompasses the eigenvalues along with their respective multiplicities. It is evident that the graph \(K_{d}\otimes K_{\frac{n}{d}}^{*}\) is regular, exhibiting a regularity of \(\frac{(d-1)n}{d}\). Additionally, this graph possesses precisely three distinct eigenvalues, thereby establishing that it is a strongly regular graph. Moreover, according to (9) its parameters are \(r=c=\frac{(d-1)n}{d}\) and \(a=\frac{(d-2)n}{d}\). Due to the fact that any two nonadjacent vertices have the same neighbourhood of size \(\frac{(d-1)n}{d}\), it can be concluded that these two vertices actually belong to an independent set of size \(n-\frac{(d-1)n}{d}=\frac{n}{d}\). This means that \(K_{d}\otimes K_{\frac{n}{d}}^{*}\) is isomorphic to the complete multipartite graph \(K_{\underbrace{n/d,\ldots,n/d}_{d}}\), which exhibits a circulant structure. Let us note that this graph coincides with the strongly regular graph \(\operatorname{ICG}_{n}(d^{\prime}\in D_{n}\mid d\nmid d^{\prime})\) as derived from Theorem 15 in [3]. However, the idea of exploiting the tensor product operation on graphs that initially have two or three distinct eigenvalues in their spectra will be utilized extensively in the following section. This approach aims to construct graphs that exhibit four distinct eigenvalues in their spectra. Moreover, it is evident that the graph resulting from the tensor product \(K_{d}\otimes K_{\frac{n}{d}}^{*}\) has a composite order. Therefore, in the remaining part of this section, we will introduce a class of strongly regular circulant graphs of prime order with non-integral spectra. Furthermore, it turns out that this is the only class of strongly regular circulant graphs of prime order, which will be demonstrated in the following theorem. This class of graphs will be exploited in discovering new classes of circulant graphs that possess four distinct eigenvalues in their spectra. We use the following well-known lemmas in the proof of the theorem.
**Lemma 3.1**: _Let \(p\) be an arbitrary prime number and \(P(x)\in\mathbb{Z}[x]\) a nonzero polynomial of degree at most \(p-1\) having \(\omega_{p}\) as a root. Then \(P(x)=c(x^{p-1}+x^{p-2}+...+x+1)\), where \(c\neq 0\) is an integer._
**Lemma 3.2**: _Let \(G(p;S)\) be a circulant graph with prime order \(p\) and \(\lambda_{0},\ldots,\lambda_{p-1}\) its spectrum given by the equation (1). Then, it holds that_
\[\lambda_{i}=\lambda_{j}\iff\{r_{i,s}|\ s\in S\}=\{r_{j,s}|\ s\in S\},\]
_for \(1\leq i,j\leq p-1\), where \(r_{i,s}\) denotes the residue of \(is\) modulo \(p\), for \(1\leq i\leq p-1\) and \(s\in S\)._
**Proof.** It can be observed from (1) that \(\lambda_{i}=\lambda_{j}\iff\sum_{s\in S}\omega_{p}^{r_{i,s}}-\sum_{s\in S}\omega_{p}^{r_{j,s}}=0\). This implies that \(\omega_{p}\) is a root of the polynomial \(P(x)=\sum_{s\in S}x^{r_{i,s}}-\sum_{s\in S}x^{r_{j,s}}\in\mathbb{Z}[x]\). Considering that the polynomial \(P(x)\) has degree at most \(p-1\), as stated in Lemma 3.1, it is either equal to \(cA(x)\), where \(A(x)=x^{p-1}+x^{p-2}+...+x+1\) and \(c\neq 0\), or it is the zero polynomial. However, since \(P(1)=0\) while \(cA(1)=cp\neq 0\), it follows that \(P(x)\neq cA(x)\), and therefore \(P(x)\) must be the zero polynomial. This implies that \(\lambda_{i}=\lambda_{j}\iff\{r_{i,s}|\ s\in S\}=\{r_{j,s}|\ s\in S\}\).
**Theorem 3.3**: _For a prime number \(p\) the circulant graph \(G(p;S)\) is strongly regular if and only if \(S\) is a set of all quadratic residues modulo \(p\) or all quadratic non-residues modulo \(p\) and \(p\in 4\mathbb{N}+1\)._
**Proof.**
Let \(G(p;S)\) be a strongly regular graph. Considering the regularity of \(G(p;S)\), which is equal to \(|S|\), and the fact that \(\lambda_{0}=|S|\), it follows that the sequence of eigenvalues \(\lambda_{1},\lambda_{2},...,\lambda_{p-1}\), given by (1), must consist of exactly two distinct eigenvalues (according to Lemma 2.4).
Denote by \(r_{i,s}\) the residue of \(is\) modulo \(p\), for \(1\leq i\leq p-1\) and \(s\in S\). If we denote \(S_{i}=\{r_{i,s}|\ s\in S\}\), for any \(1\leq i\leq p-1\), according to Lemma 3.2, we conclude that
\[\lambda_{i}=\lambda_{j}\iff S_{i}=S_{j}. \tag{12}\]
Therefore, since \(S_{1}=S\), the number of distinct eigenvalues in the spectrum of \(G(p;S)\) is equal to three if and only if \(\{S_{i}|\ 1\leq i\leq p-1\}=\{S,T\}\) for some \(T,\ T\subseteq\{1,\ldots,p-1\}\). Moreover, for two distinct integers \(s_{1},s_{2}\in S\) we have \(r_{i,s_{1}}\neq r_{i,s_{2}}\), which implies \(|S_{i}|=|\{r_{i,s}|\ s\in S\}|=|S|=|T|\), for \(1\leq i\leq p-1\). Furthermore, for a given \(s\in S\), we can conclude that \(\{r_{i,s}|\ 1\leq i\leq p-1\}=\{1,\ldots,p-1\}\), since \(\{1,\ldots,p-1\}\) forms a reduced residue system modulo \(p\) and \(\gcd(s,p)=1\). Therefore, we have \(S\cup T=\{1,\ldots,p-1\}\).
Now, we will show that \(S\) and \(T\) are disjoint. Suppose that there exists some \(c\in S\cap T\). This means that \(c\in S_{i}\) for \(1\leq i\leq p-1\). Therefore, for every \(1\leq i\leq p-1\), there exists \(s\in S\) such that \(c=r_{i,s}\), and consequently, \(s\equiv_{p}c\cdot i^{-1}\), where \(i^{-1}\) denotes the modular inverse of \(i\) modulo \(p\). This implies that \(\{c\cdot i^{-1}\ |\ 1\leq i\leq p-1\}\subseteq S\). On the other hand, since \(p\) is a prime number, both sets \(\{i^{-1}\ |\ 1\leq i\leq p-1\}\) and \(\{c\cdot i^{-1}\ |\ 1\leq i\leq p-1\}\) form reduced residue systems modulo \(p\). Therefore, we can conclude that \(\{1,\ldots,p-1\}\subseteq S\). Given that \(|S|=|T|\), we finally obtain \(S=T=\{1,\ldots,p-1\}\). However, this contradicts the fact that \(S\neq T\), and we have proved that \(S\cap T=\emptyset\).
Since \(|S|=|T|\), \(S\cup T=\{1,\ldots,p-1\}\) and \(S\cap T=\emptyset\), it holds that \(|S|=|T|=\frac{p-1}{2}\). According to Lemma 3.2, \(\lambda_{i}=\lambda_{j}\) yields \(\prod_{s\in S}r_{i,s}=\prod_{s\in S}r_{j,s}\) and thus \(\prod_{s\in S}is\equiv_{p}\prod_{s\in S}js\) and \(i^{|S|}\equiv_{p}j^{|S|}\). Using Euler's Criterion we obtain \(\left(\frac{i}{p}\right)\equiv_{p}i^{\frac{p-1}{2}}\equiv_{p}j^{\frac{p-1}{2}} \equiv_{p}\left(\frac{j}{p}\right)\). This means that if \(\lambda_{i}=\lambda_{j}\)
then the numbers \(i\) and \(j\) are either both quadratic residues or both quadratic non-residues modulo \(p\). Since \(S_{1}=S\), it follows that \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}\subseteq\{i\mid\left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}\). Similarly, we conclude that \(\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}\subseteq\{i\mid\left(\frac{i}{p}\right)=-1,\ 1\leq i\leq p-1\}\). Given that \(|\{i\mid\left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}|=|\{i\mid\left(\frac{i}{p}\right)=-1,\ 1\leq i\leq p-1\}|=\frac{p-1}{2}\) and \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}\cup\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}=\{1,\ldots,p-1\}\), we have that \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}=\{i\mid\left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}\) and \(\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}=\{i\mid\left(\frac{i}{p}\right)=-1,\ 1\leq i\leq p-1\}\).
Suppose there exists \(s\in S\) such that \(\left(\frac{s}{p}\right)=1\). For every \(1\leq i\leq p-1\) such that \(\left(\frac{i}{p}\right)=1\), we have that \(S_{i}=S\) and \(r_{i,s}\in S_{i}\), which implies that \(r_{i,s}\in S\). Since \(\left(\frac{is}{p}\right)=\left(\frac{i}{p}\right)\left(\frac{s}{p}\right)=1\), we conclude that \(\{r_{i,s}\ |\ \left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}\subseteq\{i|\ \left(\frac{i}{p}\right)=1,\ 1\leq i \leq p-1\}\). Moreover, from the fact that \(i\neq j\) implies \(r_{i,s}\neq r_{j,s}\), for \(1\leq i,j\leq p-1\), we further get that \(\{r_{i,s}\ |\ \left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}=\{i|\ \left(\frac{i}{p} \right)=1,\ 1\leq i\leq p-1\}\). Finally, from the preceding discussion it can be concluded that \(\{i|\ \left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}\subseteq S\) and since \(|\{i|\ \left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}|=|S|=\frac{p-1}{2}\), it holds that \(\{i|\ \left(\frac{i}{p}\right)=1,\ 1\leq i\leq p-1\}=S\). If we assume that there exists \(s\in S\) such that \(\left(\frac{s}{p}\right)=-1\), it can be proven in a similar fashion \(\{i|\ \left(\frac{i}{p}\right)=-1,\ 1\leq i\leq p-1\}=S\).
Now, we will prove that \(p\in 4\mathbb{N}+1\). Without loss of generality, let us assume that \(S\) is the set of all quadratic residues. According to the definition of \(S\) as \(S=p-S\), we can conclude that both \(1\) and \(p-1\) are elements of \(S\). This implies that \(\left(\frac{-1}{p}\right)=1\). By applying Euler's Criterion, we further deduce that \((-1)^{\frac{p-1}{2}}\equiv_{p}1\), and it follows that \(\frac{p-1}{2}\) is even, as we set out to prove.
Suppose now that \(S\) is the set of all quadratic residues and \(p\in 4\mathbb{N}+1\). If \(x\in S\), then for any \(s\in S\) we have \(\left(\frac{xs}{p}\right)=\left(\frac{x}{p}\right)\left(\frac{s}{p}\right)=1\) and \(r_{x,s}\in S\), which implies \(S_{x}\subseteq S\). Since \(|S_{x}|=|S|=\frac{p-1}{2}\), we obtain \(S_{x}=S\). If \(T\) is the set of all quadratic non-residues and \(x\in T\), then for every \(s\in S\) there holds \(\left(\frac{xs}{p}\right)=\left(\frac{x}{p}\right)\left(\frac{s}{p}\right)=-1\), which yields \(r_{x,s}\in T\). Thus, we have \(S_{x}\subseteq T\) and \(S_{x}=T\). This way we have proved that \(G(p;S)\) has exactly three distinct eigenvalues in its spectrum. A similar conclusion can be derived for \(S\) being the set of all quadratic non-residues.
In the rest of the proof, we show that \(S=p-S\). It is sufficient to prove that \(-1\) is a quadratic residue modulo \(p\). Using Euler's Criterion, we have that \(\left(\frac{-1}{p}\right)\equiv_{p}(-1)^{\frac{p-1}{2}}\equiv_{p}1\), which was to be proven.
\(\blacksquare\)
It can be easily concluded that a strongly regular graph \(G(p;S)\), for \(p\in 4\mathbb{N}+1\), is a self-complementary graph. Indeed, if \(S\) contains all quadratic residues modulo \(p\), then the set of symbols of \(\overline{G(p;S)}\) contains all quadratic non-residues. We can establish a bijection \(f:\{0,1,\ldots,p-1\}\rightarrow\{0,1,\ldots,p-1\}\) such that \(f(x)=r_{b,x}\) for all \(x\in\{0,1,\ldots,p-1\}\) and some quadratic non-residue \(b\) modulo \(p\). It can be easily shown that this bijection is an isomorphism.
Now, we can proceed with determining the spectrum and parameters of the strongly regular graph \(G(p;S)\), where \(p\in 4\mathbb{N}+1\) and \(S\) is the set of quadratic residues modulo \(p\). Let \(\lambda_{0},\ldots,\lambda_{p-1}\) be the spectrum of \(G(p;S)\) given by (1). From the proof of Theorem 3.3, we immediately see that the regularity of the graph \(r\) is equal to the eigenvalue \(\lambda_{0}=|S|=\frac{p-1}{2}\). Now, let \(j\) be a quadratic residue modulo \(p\). Since the equation \(x^{2}\equiv_{p}i\) has two incongruent solutions modulo \(p\) for every quadratic residue \(i\), according to Theorem 2.3, we see that \(1+2\lambda_{j}=\sum_{i=0}^{p-1}\omega_{p}^{i^{2}}\). Let \(z\) denote the sum \(z=\sum_{i=0}^{p-1}\omega_{p}^{i^{2}}\). We see that \(|z|^{2}=z\overline{z}=(\sum_{i=0}^{p-1}\omega_{p}^{i^{2}})(\sum_{j=0}^{p-1}\omega_{p}^{-j^{2}})=\sum_{i=0}^{p-1}\sum_{j=0}^{p-1}\omega_{p}^{i^{2}-j^{2}}=\sum_{i=0}^{p-1}\sum_{j=0}^{p-1}\omega_{p}^{(i-j)(i+j)}\). Given that we can establish a bijection \(g:\{0,1,\ldots,p-1\}\times\{0,1,\ldots,p-1\}\rightarrow\{0,1,\ldots,p-1\}\times\{0,1,\ldots,p-1\}\) such that \(g(i,j)=(r_{1,i-j},r_{1,i+j})\), the last sum \(\sum_{i=0}^{p-1}\sum_{j=0}^{p-1}\omega_{p}^{(i-j)(i+j)}\) is equal to \(\sum_{i=0}^{p-1}\sum_{j=0}^{p-1}\omega_{p}^{ij}\). Finally, we have that \(\sum_{i=0}^{p-1}\sum_{j=0}^{p-1}\omega_{p}^{ij}=\sum_{j=0}^{p-1}\omega_{p}^{0}+\sum_{i=1}^{p-1}\frac{\omega_{p}^{ip}-1}{\omega_{p}^{i}-1}=p\), since \(\omega_{p}^{ip}=1\) for \(1\leq i\leq p-1\), and therefore \(|z|=\sqrt{p}\). As \(z\in\mathbb{R}\), suppose that \(z=\sqrt{p}\). This directly implies that \(\lambda_{j}=\frac{z-1}{2}=\frac{\sqrt{p}-1}{2}\). Let \(\tau=\lambda_{j}\) and \(\theta\) be the negative eigenvalue in the spectrum of \(G(p;S)\). According to the proof of the preceding theorem, we have that \(m_{\theta}=m_{\tau}=\frac{p-1}{2}\). From (10), we deduce that \(\theta=-\frac{\tau(p-1-m_{\theta})+r}{m_{\theta}}\). Moreover, since \(p-1-m_{\theta}=r=m_{\theta}=\frac{p-1}{2}\), we have that \(\theta=-\tau-1=-\frac{\sqrt{p}-1}{2}-1=-\frac{\sqrt{p}+1}{2}\). Furthermore, using (9), we see that \(c=\theta\tau+r\), which directly implies that \(c=\frac{p-1}{4}\). Taking into account the same equation, it can be observed that \(a=\theta+\tau+c\), and a simple calculation leads to the result \(a=\frac{p-5}{4}\). If we assume \(z=-\sqrt{p}\), we can determine that \(\tau=-\frac{\sqrt{p}+1}{2}\) and \(\theta=\frac{\sqrt{p}-1}{2}\). This indicates that we obtain the same set of eigenvalues in this case.
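The spectrum derived above is easily confirmed numerically; the sketch below (assuming numpy; \(p=13\) is an arbitrary prime in \(4\mathbb{N}+1\)) builds \(G(p;S)\) for \(S\) the set of quadratic residues and checks that the only eigenvalues are \(\frac{p-1}{2}\), \(\frac{\sqrt{p}-1}{2}\) and \(-\frac{\sqrt{p}+1}{2}\).

```python
import numpy as np

p = 13                                     # prime with p = 1 (mod 4)
S = {(x * x) % p for x in range(1, p)}     # quadratic residues modulo p
A = np.array([[1 if (i - j) % p in S else 0 for j in range(p)]
              for i in range(p)])
eig = {round(x, 6) for x in np.linalg.eigvalsh(A)}
expected = {round((p - 1) / 2, 6),
            round((p ** 0.5 - 1) / 2, 6),
            round(-(p ** 0.5 + 1) / 2, 6)}  # r, tau, theta
assert eig == expected
print(sorted(eig))
```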
### Construction of strongly regular graphs using line operator
In this section, we explore additional classes of strongly regular graphs by applying the line graph operator \(L\) to the class of unitary Cayley graphs. This idea arises from the well-known observation that two classes of strongly regular graphs can be obtained by applying the line operator \(L\) to complete graphs \(K_{n}\) and complete bipartite graphs \(K_{n,n}\). It is worth mentioning that both of these classes are circulant graphs, as are unitary Cayley graphs. The graph \(L(K_{n})\) has the parameters \((\frac{n(n-1)}{2},2n-4,n-2,4)\) (such graphs are also known as triangular graphs), while \(L(K_{n,n})\) has parameters \((n^{2},2n-2,n-2,2)\) (square lattice graphs). We will demonstrate that by repeatedly applying the line operator, we can obtain strongly regular graphs that are not circulant graphs.
To begin with, we will describe all strongly regular graphs within the class of unitary Cayley graphs.
**Theorem 3.4**: _The unitary Cayley graph \(X_{n}\) is strongly regular if and only if \(n\) is a composite prime power._
**Proof.** Recall that a connected regular graph is strongly regular if and only if it has exactly three distinct eigenvalues (Lemma 2.4). Since \(X_{n}\) is a connected regular graph with spectral radius \(\lambda_{0}\), it is sufficient to characterize all unitary Cayley graphs such that the set \(\{\lambda_{1},\ldots,\lambda_{n-1}\}\) contains exactly two distinct values.
Suppose that the set \(\{\lambda_{1},\ldots,\lambda_{n-1}\}\) contains exactly two distinct values. We distinguish two cases depending on the value of \(k\).
**Case 1.**\(k=1\). For \(n=p_{1}\), it is clear that \(X_{p_{1}}\) is complete and it is not strongly regular by definition. Let \(n={p_{1}}^{\alpha_{1}}\) for \(\alpha_{1}\geq 2\) and let \(1\leq j\leq n-1\) be an arbitrary index such that \(j={p_{1}}^{\beta}M\), for \(0\leq\beta\leq\alpha_{1}-1\) and \(p_{1}\nmid M\). Then, we conclude that \({t_{n,j}}={p_{1}}^{\alpha_{1}}/\gcd({p_{1}}^{\alpha_{1}},{p_{1}}^{\beta}M)={p_{1}}^{\alpha_{1}-\beta}\). Furthermore, as
\[\lambda_{j}=c(j,n) = \mu({p_{1}}^{\alpha_{1}-\beta})\varphi({p_{1}}^{\alpha_{1}})/ \varphi({p_{1}}^{\alpha_{1}-\beta})=\left\{\begin{array}{rl}0,&0\leq\beta \leq\alpha_{1}-2\\ -{p_{1}}^{\alpha_{1}-1},&\beta=\alpha_{1}-1\end{array}\right.\, \tag{13}\]
it is clear that \(X_{p_{1}^{\alpha_{1}}}\) is a strongly regular graph, for \(\alpha_{1}\geq 2\).
**Case 2.**\(k\geq 2\). Let \(p\) be an arbitrary prime divisor of \(n\). Considering (8), we obtain that \(\lambda_{n/p}=c(n/p,n)=-\varphi(n)/(p-1)\). Since \(k\geq 2\), there exist at least two prime divisors \(p_{i}\) and \(p_{j}\) of \(n\) such that \(i<j\) and \(|\lambda_{n/p_{i}}|>|\lambda_{n/p_{j}}|\). Furthermore, as \(\lambda_{1}=\mu(n)\), according to (6), we see that \(|\lambda_{n/p_{i}}|>|\lambda_{n/p_{j}}|\geq 1\geq\lambda_{1}\), and hence \(X_{n}\) is not strongly regular if \(|\lambda_{n/p_{j}}|>1\). On the other hand, the equality \(|\lambda_{n/p_{j}}|=1\) holds if and only if \(n=2p_{2}\). However, in this case \(\lambda_{n/p_{j}}=-1\) and \(\lambda_{1}=\mu(n)=1\), and hence \(\lambda_{1}\neq\lambda_{n/p_{j}}\). It follows that the graph \(X_{2p_{2}}\) does not meet the criteria for being strongly regular.
It is easy to see that strongly regular graphs in the class of unitary Cayley graphs are complete multipartite graphs on \(p^{\alpha}\) (\(\alpha\geq 2\)) vertices, with regularity \(p^{\alpha-1}(p-1)\). According to (9), any two distinct adjacent vertices have \(p^{\alpha-1}(p-2)\) common neighbours, while any two nonadjacent vertices have \(p^{\alpha-1}(p-1)\) common neighbours. Furthermore, the multiplicities of the eigenvalues \(\theta\) and \(\tau\) are \(m_{\theta}=p(p^{\alpha-1}-1)\) and \(m_{\tau}=p-1\), respectively.
We use the following theorem due to the fact that the line graph of a regular graph is also a regular graph.
**Theorem 3.5**: _[_21_]_ _Let \(G\) be a \(k\)-regular connected graph with \(n\) vertices and \(\lambda_{1},\ldots,\lambda_{n}\) the eigenvalues of its adjacency matrix. Then the spectrum of \(L(G)\) consists of \(-2\) with multiplicity \(\frac{kn}{2}-n\) and \(k+\lambda_{i}-2\) for every \(1\leq i\leq n\)._
This theorem enables us to easily observe that the number of distinct eigenvalues of a regular graph \(G\) is less than or equal to the number of distinct eigenvalues of \(L(G)\).
**Theorem 3.6**: _Let \(X_{n}\) be the unitary Cayley graph of order \(n\). Then the line graph of \(X_{n}\) is strongly regular if and only if \(n\) is either a prime greater than \(3\) or a power of \(2\) greater than \(2\)._
**Proof.** Let \(k=\varphi(n)\) be the regularity of \(X_{n}\).
If \(kn/2-n=0\), then the number of distinct eigenvalues of \(L(X_{n})\) is equal to the number of distinct eigenvalues of \(X_{n}\). In this case, we have that \(k=\varphi(n)=2\), and therefore \(n\in\{3,4,6\}\). As \(X_{n}\simeq C_{n}\simeq L(X_{n})\) for \(n\in\{3,4,6\}\), according to Theorem 3.4, we have that \(L(X_{4})\) is the only strongly regular graph among them.
If \(kn/2-n>0\), then one eigenvalue of \(L(X_{n})\) must be \(-2\). Now, suppose that \(L(X_{n})\) is strongly regular. As \(L(X_{n})\) has three distinct eigenvalues, either \(X_{n}\) has three distinct eigenvalues one of which is equal to \(-k\) (so that \(k+\lambda_{i}-2=-2\)), or \(X_{n}\) has two distinct eigenvalues and none of them is equal to \(-k\).
According to Theorem 3.4, \(X_{n}\) has three distinct eigenvalues if and only if \(n=p^{\alpha}\), for some prime \(p\) and \(\alpha\geq 2\), and using (13) the eigenvalues of \(X_{n}\) are \(\{p^{\alpha-1}(p-1),0,-p^{\alpha-1}\}\). Furthermore, according to Theorem 3.5, any eigenvalue of \(L(X_{n})\) takes one of the following values \(\{2p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-2)-2,-2\}\). Therefore, as \(p^{\alpha-1}(p-2)-2\) is the second smallest eigenvalue, we deduce, in this case, that \(L(X_{n})\) is strongly regular if and only if \(n=p^{\alpha}\) and \(p^{\alpha-1}(p-2)-2=-2\), that is, for \(n=2^{\alpha}\).
If \(X_{n}\) has exactly two distinct eigenvalues, it is a complete graph, and it is easy to see that \(n=p\), for some prime \(p\), and the eigenvalues of \(X_{n}\) are \(\{p-1,-1\}\). It is clear that none of them is equal to \(-k=-\varphi(p)=-p+1\), for \(p>2\), and therefore we conclude that \(L(X_{p})\) is strongly regular, for \(p>3\), and the eigenvalues of \(L(X_{p})\) are \(\{2(p-1)-2,p-4,-2\}\). \(\blacksquare\)
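Theorem 3.6 can be checked directly on small cases; the following sketch (assuming numpy; \(p=7\) is arbitrary) constructs \(L(X_{p})\simeq L(K_{p})\) and confirms that its spectrum consists precisely of \(2(p-1)-2\), \(p-4\) and \(-2\).

```python
import numpy as np
from itertools import combinations

p = 7
edges = list(combinations(range(p), 2))    # edges of K_p = X_p
m = len(edges)
L = np.zeros((m, m))
for a in range(m):
    for b in range(a + 1, m):
        if set(edges[a]) & set(edges[b]):  # edges sharing an endpoint
            L[a, b] = L[b, a] = 1
eig = {round(x, 6) for x in np.linalg.eigvalsh(L)}
assert eig == {2 * (p - 1) - 2, p - 4, -2}
print(sorted(eig))
```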
Using the following statement, we will prove that the classes \(L(X_{p})\), for prime \(p>3\), and \(L(X_{2^{\alpha}})\), for \(\alpha\geq 2\), found above establish new classes of strongly regular graphs, by proving that they are not circulant graphs (with the exception of \(L(X_{4})\simeq C_{4}\)).
**Theorem 3.7**: _[_5_]_ _Let \(G\) be a connected graph such that \(L(G)\) is a circulant. Then \(G\) must either be \(C_{n}\), \(K_{4}\), or \(K_{a,b}\) for some \(a\) and \(b\) such that \(\gcd(a,b)=1\)._
Suppose first that \(L(X_{p})\), for prime \(p>3\), is circulant. Since \(X_{p}\) is isomorphic to the complete graph \(K_{p}\), for prime \(p>3\), it cannot be isomorphic to any of the following graphs: \(C_{n}\), \(K_{4}\), or \(K_{a,b}\) for some \(a\) and \(b\) such that \(\gcd(a,b)=1\).
Now, suppose that \(L(X_{2^{\alpha}})\), for \(\alpha\geq 2\), is circulant. It is easy to see that \(X_{2^{\alpha}}\) is a complete bipartite graph with the independent sets \(C_{i}=\{0\leq j\leq 2^{\alpha}-1\mid j\equiv_{2}i\}\), for \(i\in\{0,1\}\). Therefore, \(X_{2^{\alpha}}\simeq K_{2^{\alpha-1},2^{\alpha-1}}\), and since \(\alpha\geq 2\) we have \(\gcd(2^{\alpha-1},2^{\alpha-1})\neq 1\). Furthermore, the number of edges in \(X_{2^{\alpha}}\) is equal to \(\frac{2^{\alpha}\cdot 2^{\alpha-1}}{2}\), which is distinct from the number of edges in \(C_{2^{\alpha}}\), that is \(2^{\alpha}\), for \(\alpha>2\). Therefore, based on Theorem 3.7, we can conclude that the strongly regular line graphs of unitary Cayley graphs are not circulants, with the exception of \(L(X_{4})\simeq C_{4}\).
For \(k\geq 1\), the \(k\)-th iterated line graph of \(G\) is \(L^{k}(G)=L(L^{k-1}(G))\), where \(L^{0}(G)=G\) and \(L^{1}(G)=L(G)\). In the following theorem we prove that the class of strongly regular graphs derived from unitary Cayley graphs cannot be extended any further using the line graph operation.
**Theorem 3.8**: _Let \(X_{n}\) be unitary Cayley graph of the order \(n\). Then \(L^{2}(X_{n})\) is strongly regular if and only if \(n=4\)._
**Proof.** Suppose that \(L^{2}(X_{n})\) is connected strongly regular, i.e., \(L^{2}(X_{n})\) has three distinct eigenvalues. This means that \(L(X_{n})\) must have exactly three distinct eigenvalues, since \(L(X_{n})\) cannot be complete. Indeed, since the order of \(L(X_{n})\) is equal to \(\frac{n\varphi(n)}{2}\), the regularity of \(L(X_{n})\) is equal to \(2(\varphi(n)-1)\), and the relation \(\frac{n\varphi(n)}{2}-1=2(\varphi(n)-1)\) is never satisfied for \(n\neq 3\), we have that \(L(X_{n})\) is not a complete graph. Thus, we assume that \(L(X_{n})\) has three distinct eigenvalues, and according to Theorem 3.6 we distinguish two cases depending on the values of \(n\).
Suppose that \(n=p\), for some prime \(p>3\). By Theorem 3.6, the distinct eigenvalues of \(L(X_{p})\) are \(\{2(p-1)-2,-2,p-4\}\). Moreover, from Theorem 3.5, we obtain that the regularity of \(L^{2}(X_{p})\) is equal to \(2(2(p-1)-2)-2\), and for an arbitrary eigenvalue \(\lambda_{i}\) of \(L^{2}(X_{p})\) it holds that
\(\lambda_{i}\in\{4p-10,-2,2(p-1)-6,3p-10\}\). Any two values from this set are mutually distinct, since \(p>3\), whence we conclude that \(L^{2}(X_{p})\) is not strongly regular.
Now, suppose that \(n=2^{\alpha}\), for \(\alpha\geq 2\). According to (13) we see that the distinct eigenvalues of \(X_{n}\) are \(\{2^{\alpha-1},0,-2^{\alpha-1}\}\), and from Theorem 3.5 the distinct eigenvalues of \(L(X_{n})\) are \(\{2^{\alpha}-2,2^{\alpha-1}-2,-2\}\). Therefore, the possible values for the eigenvalues of \(L^{2}(X_{n})\) are \(\{2^{\alpha+1}-6,3(2^{\alpha-1}-2),2^{\alpha}-6,-2\}\). Finally, we conclude that \(L^{2}(X_{n})\) has three distinct eigenvalues if and only if \(2^{\alpha}-6=-2\), that is \(\alpha=2\), as \(2^{\alpha+1}-6>3(2^{\alpha-1}-2)>2^{\alpha}-6\geq-2\). \(\blacksquare\)
Since \(X_{4}\simeq C_{4}\), using the line operation any further will not result in any additional classes of strongly regular graphs beyond those that have already been found.
## 4 Graph matrices with four eigenvalues
### Construction of regular graphs with four eigenvalues using graph operations
Building upon the concept introduced in the preceding section, we can generate regular graphs with four distinct eigenvalues by employing various graph operations, including tensor product, union, and complement. These operations are applied to graphs (connected or disconnected) whose spectra already possess two or three different eigenvalues. Indeed, for given composite numbers \(n\) and \(m\) and an arbitrary common divisor \(d\) such that \(d\mid n\), \(d\mid m\) and \(\min\{n,m\}>d>1\), we can use the spectrum of the graph \(K_{d}\otimes K_{\frac{n}{d}}^{*}\), that is, \(Sp(K_{d}\otimes K_{\frac{n}{d}}^{*})=\{(d-1)\frac{n}{d}^{(1)},0^{(n-d)},-\frac{n}{d}^{(d-1)}\}\), and perform the operation of the tensor product in the following way
\[Sp((K_{d}\otimes K_{\frac{n}{d}}^{*})\otimes(K_{d}\otimes K_{\frac{m}{d}}^{*}))=\]
\[\{(d-1)^{2}\frac{mn}{d^{2}}{}^{(1)},0^{(nm-d^{2})},-(d-1)\frac{mn}{d^{2}}{}^{(2 (d-1))},\frac{mn}{d^{2}}{}^{((d-1)^{2})}\}. \tag{14}\]
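A quick numerical check of (14) (a sketch assuming numpy; \(d=3\), \(n=6\), \(m=9\) are chosen arbitrarily with \(d\mid n\), \(d\mid m\) and \(\min\{n,m\}>d>1\)):

```python
import numpy as np

def K(r):                                  # complete graph K_r
    return np.ones((r, r)) - np.eye(r)

def K_loops(r):                            # K_r^* with a loop at every vertex
    return np.ones((r, r))

d, n, m = 3, 6, 9
A = np.kron(np.kron(K(d), K_loops(n // d)),
            np.kron(K(d), K_loops(m // d)))
vals, counts = np.unique(np.round(np.linalg.eigvalsh(A), 6),
                         return_counts=True)
print(dict(zip(vals, counts)))
# (14) predicts: (d-1)^2*mn/d^2 once, 0 with multiplicity nm - d^2,
# -(d-1)*mn/d^2 with multiplicity 2(d-1), mn/d^2 with multiplicity (d-1)^2
```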
Using this construction, we obtain new classes of regular graphs with four distinct eigenvalues, of composite order \(nm\), for \(d>2\). Based on a computer search over the constructed graphs with up to 10000 vertices, it can be concluded that these graphs are not necessarily circulants. Furthermore, according to the theorem presented below, we prove that for every even value of \(n\) that is not a power of two, there exists a connected graph that is generated by the aforementioned construction and is not circulant. Assume that \(n\) has the prime factorization \(n=2^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots p_{k}^{\alpha_{k}}\). Let us introduce a notation for subsets of the divisor set \(D\subseteq D_{n}\). We will denote by \(D_{0}\) the set of divisors \(d\in D\) for which \(n/d\) is an odd number, and by \(D_{1}\) the set of divisors \(d\in D\) for which \(n/d\in 4\mathbb{N}+2\). Also, for a positive integer \(k\) and a set \(A\) of positive integers, by \(kA\) we will mean the set \(\{ka\mid a\in A\}\). We use the following theorem from [3] for proving the mentioned statement.
**Theorem 4.1**: _[_3_]_ _Let \(\mathrm{ICG}_{n}(D)\) be an integral circulant graph. The following statements are equivalent:_
* _Every_ \(\lambda_{j}\) _is even for odd_ \(0\leq j\leq n-1\)_._
* \(D_{0}=2D_{1}\)_._
* _Every_ \(\lambda_{j}=0\) _for odd_ \(0\leq j\leq n-1\)_._
**Theorem 4.2**: _For an arbitrary even \(n\) with \(d=\frac{n}{2^{\alpha_{1}}}>1\), the graph \((K_{d}\otimes K_{\frac{n}{d}}^{*})\otimes(K_{d}\otimes K_{\frac{n}{d}}^{*})\) is not a circulant graph._
**Proof.** Suppose that \((K_{d}\otimes K_{\frac{n}{d}}^{*})\otimes(K_{d}\otimes K_{\frac{n}{d}}^{*})\) is circulant. Then its spectrum can be denoted by \((\lambda_{0},\ldots,\lambda_{n^{2}-1})\), given by equation (4). In other words, \((K_{d}\otimes K_{\frac{n}{d}}^{*})\otimes(K_{d}\otimes K_{\frac{n}{d}}^{*})\simeq\mathrm{ICG}_{n^{2}}(D)\), for some \(D\subseteq D_{n^{2}}\). Since \(n\) is even and \(d\) is odd, from (14), we see that every \(\lambda_{i}\) is even. According to Theorem 4.1, we have that \(\lambda_{j}=0\) for odd \(0\leq j\leq n^{2}-1\) and \(D_{0}=2D_{1}\). Now, observe the integral circulant graph \(\mathrm{ICG}_{\frac{n^{2}}{2}}(D\setminus D_{0})\) and denote its eigenvalues by \((\mu_{0},\ldots,\mu_{\frac{n^{2}}{2}-1})\). For every even \(0\leq j\leq n^{2}-1\), we see that
\[\lambda_{j}=\sum_{d\in D_{1}}(c(j,\frac{n^{2}}{d})+c(j,\frac{n^{2}}{2d}))+\sum _{d\in D\setminus(D_{0}\cup D_{1})}c(j,\frac{n^{2}}{d}).\]
For \(d\in D_{1}\) we conclude that \(\varphi(\frac{n^{2}}{d})=\varphi(\frac{n^{2}}{2d})\). Furthermore, we obtain \(\gcd(\frac{n^{2}}{d},j)=2\gcd(\frac{n^{2}}{2d},j)\) and also \(t_{\frac{n^{2}}{d},j}=t_{\frac{n^{2}}{2d},j}\). This directly yields that \(c(j,\frac{n^{2}}{d})=c(j,\frac{n^{2}}{2d})\). Similarly to the preceding case, we obtain that \(\gcd(\frac{n^{2}}{d},j)=2\gcd(\frac{n^{2}}{2d},\frac{j}{2})\) and \(t_{\frac{n^{2}}{d},j}=t_{\frac{n^{2}}{2d},\frac{j}{2}}\), for \(d\in D\setminus(D_{0}\cup D_{1})\). Moreover, it holds that \(\varphi(\frac{n^{2}}{d})=2^{2\alpha_{1}-1}\varphi(\frac{n^{2}}{2^{2\alpha_{1}}d})=2\cdot 2^{2\alpha_{1}-2}\varphi(\frac{n^{2}}{2^{2\alpha_{1}}d})=2\varphi(\frac{n^{2}}{2d})\). Therefore, we conclude that \(c(j,\frac{n^{2}}{d})=2c(\frac{j}{2},\frac{n^{2}}{2d})\). According to this discussion, we get that
\[\lambda_{j}=\sum_{d\in D_{1}}2c(j,\frac{n^{2}}{d})+2\sum_{d\in D\setminus(D_{0} \cup D_{1})}c(\frac{j}{2},\frac{n^{2}}{2d}).\]
Using a similar argument, we have that \(c(j,\frac{n^{2}}{d})=c(\frac{j}{2},\frac{n^{2}}{2d})\), for \(d\in D_{1}\), and thus
\[\lambda_{j}=2\sum_{d\in D_{1}}c(\frac{j}{2},\frac{n^{2}}{2d})+2\sum_{d\in D\setminus (D_{0}\cup D_{1})}c(\frac{j}{2},\frac{n^{2}}{2d})=2\mu_{\frac{j}{2}},\]
for \(0\leq\frac{j}{2}\leq n^{2}-1\). This way we obtain that \({\rm ICG}_{\frac{n^{2}}{2}}(D\setminus D_{0})\) has spectrum
\[Sp({\rm ICG}_{\frac{n^{2}}{2}}(D\setminus D_{0}))=\{(d-1)^{2}\frac{n^{2}}{2d ^{2}}\ ^{(1)},0^{(n^{2}-d^{2}-\frac{n^{2}}{2})},-(d-1)\frac{n^{2}}{2d^{2}}\ ^{(2(d-1))},\frac{n^{2}}{2d^{2}}\ ^{((d-1)^{2})}\}.\]
Through the repetition of this procedure \(2\alpha_{1}\) times, obtaining graphs of successively smaller order at each step, we ultimately arrive at a circulant graph with the corresponding spectrum.
\[\{(d-1)^{2}\frac{n^{2}}{2^{2\alpha_{1}}d^{2}}\ ^{(1)},0^{(n^{2}-d^{2}-\frac{n^{2}}{2} -\cdots-\frac{n^{2}}{2^{2\alpha_{1}}})},-(d-1)\frac{n^{2}}{2^{2\alpha_{1}}d^{ 2}}\ ^{(2(d-1))},\frac{n^{2}}{2^{2\alpha_{1}}d^{2}}\ ^{((d-1)^{2})}\}.\]
Since \(\frac{n^{2}}{2}+\cdots+\frac{n^{2}}{2^{2\alpha_{1}}}=\frac{n^{2}}{2^{2\alpha_{1}}}(1+2+\cdots+2^{2\alpha_{1}-1})=\frac{n^{2}(2^{2\alpha_{1}}-1)}{2^{2\alpha_{1}}}\), we have that \(n^{2}-d^{2}-\frac{n^{2}}{2}-\cdots-\frac{n^{2}}{2^{2\alpha_{1}}}=\frac{n^{2}}{2^{2\alpha_{1}}}-d^{2}=0\), as \(d=\frac{n}{2^{\alpha_{1}}}\). This way, we obtain a regular connected circulant graph with three distinct eigenvalues in its spectrum, which means that it is strongly regular. However, this conclusion contradicts Theorem 15 stated in [3], which asserts that a strongly regular integral circulant graph must have \(0\) in its spectrum. \(\blacksquare\)
Showing whether or not the graphs resulting from applying the tensor product operation to a certain number of graphs with specific properties are circulant is a challenging task. In [16], the authors found specific classes of graphs that are either circulant or non-circulant, obtained from a single tensor product operation on two graphs of particular types.
It is worth mentioning that the graph \((K_{2}\otimes K_{\frac{n}{2}}^{\ast})\otimes(K_{2}\otimes K_{\frac{n}{2}}^{\ast})\) belongs to the class of circulant graphs. However, it is important to note that this graph is a disconnected strongly regular graph. Indeed, the spectrum of this graph is \(\{\frac{n^{2}}{4}\,^{(2)},0^{(n^{2}-4)},-\frac{n^{2}}{4}\,^{(2)}\}\) and it contains only three distinct eigenvalues. Since it is a regular graph and the largest eigenvalue has multiplicity \(2\), we can conclude that it consists of two connected components whose spectra are \(\{\frac{n^{2}}{4}\,^{(1)},0^{(\frac{n^{2}}{2}-2)},-\frac{n^{2}}{4}\,^{(1)}\}\). It is well known that this is the spectrum of the complete bipartite graph \(K_{\frac{n^{2}}{4},\frac{n^{2}}{4}}\), which is a circulant graph.
On the other hand, when \(n\neq d^{2}\), we can consider the class of connected graphs \(K_{d}\otimes K_{\frac{n}{d}}\) with a specific spectrum. This spectrum includes four distinct eigenvalues: \((d-1)\left(\frac{n}{d}-1\right)\) with multiplicity \(1\), \(-(d-1)\), with multiplicity \(\frac{n}{d}-1\), \(-\left(\frac{n}{d}-1\right)\), with multiplicity \(d-1\), and \(1\), with multiplicity \((d-1)\left(\frac{n}{d}-1\right)\). Therefore, this class of graphs serves as an example where four distinct eigenvalues appear in their spectra. According to Theorems 1 and 2 from [16] this class of graphs is circulant if and only if \(\gcd(d,\frac{n}{d})=1\). In the theorem that follows, we establish that circulant graphs resulting from the described construction do not qualify as unitary Cayley graphs. Unitary Cayley graphs with four eigenvalues will be analyzed in subsection 4.2. Namely, the following assertion holds.
**Theorem 4.3**: _If \(d\) is a divisor of \(n\) such that \(\gcd(d,\frac{n}{d})=1\), then \(K_{d}\otimes K_{\frac{n}{d}}\simeq{\rm ICG}_{n}(\{d_{1}d_{2}|\ d_{1}\in D_{d},\ d_{2}\in D_{\frac{n}{d}}\})\)._
**Proof.** Since \(K_{d}\simeq{\rm ICG}_{d}(D_{d})\), it is sufficient to prove that \({\rm ICG}_{n}(D_{1})\otimes{\rm ICG}_{m}(D_{2})\simeq{\rm ICG}_{nm}(\{d_{1}d_{2}\mid d_{1}\in D_{1},\ d_{2}\in D_{2}\})\) for \(\gcd(n,m)=1\); it then also follows that \(K_{d}\otimes K_{\frac{n}{d}}\) is not a unitary Cayley graph. Indeed, for \(0\leq x\leq nm-1\) there exist unique \(0\leq a\leq n-1\) and \(0\leq b\leq m-1\) such that \(x\equiv am+bn\pmod{mn}\), and we can establish a bijection from \({\rm ICG}_{n}(D_{1})\otimes{\rm ICG}_{m}(D_{2})\) onto \({\rm ICG}_{nm}(\{d_{1}d_{2}\mid d_{1}\in D_{1},\ d_{2}\in D_{2}\})\) such that \(f(a,b)=x\). Moreover, for adjacent vertices \((a_{1},b_{1})\) and \((a_{2},b_{2})\) from \({\rm ICG}_{n}(D_{1})\otimes{\rm ICG}_{m}(D_{2})\), we have that \(\gcd(a_{1}-a_{2},n)=d_{1}\in D_{1}\) and \(\gcd(b_{1}-b_{2},m)=d_{2}\in D_{2}\). We prove that \(f(a_{1},b_{1})\) and \(f(a_{2},b_{2})\) are adjacent in \({\rm ICG}_{nm}(\{d_{1}d_{2}\mid d_{1}\in D_{1},\ d_{2}\in D_{2}\})\), i.e., we show that \(\gcd((a_{1}-a_{2})m+(b_{1}-b_{2})n,nm)=d_{1}d_{2}\). Indeed, let \(d=\gcd((a_{1}-a_{2})m+(b_{1}-b_{2})n,nm)\), and without loss of generality we can assume that \(d\mid m\), as \(\gcd(m,n)=1\). Therefore, it holds that \(d\nmid n\) and \(d\mid b_{1}-b_{2}\). As \(\gcd(b_{1}-b_{2},m)=d_{2}\), we see that \(d\mid d_{2}\). On the other hand, as \(\gcd(b_{1}-b_{2},m)=d_{2}\) we get that \(d_{2}\mid(a_{1}-a_{2})m+(b_{1}-b_{2})n\), and similarly, as \(\gcd(a_{1}-a_{2},n)=d_{1}\), we get that \(d_{1}\mid(a_{1}-a_{2})m+(b_{1}-b_{2})n\). Finally, since \(\gcd(d_{1},d_{2})=1\), it holds that \(d_{1}d_{2}\mid d\) and therefore \(d_{1}d_{2}=d\). The mapping \(f\) is obviously a bijection, thus making it an isomorphism between the graphs. \(\blacksquare\)
The authors in [6] (Section 3) present an example of integral circulant graphs \({\rm ICG}_{n}(D)\), where \(n=p_{1}p_{2}p_{3}\) is a product of three primes and \(D=\{1,p_{i},p_{j}\}\) for \(1\leq i\neq j\leq 3\), demonstrating four distinct eigenvalues in their spectra. Clearly, the graph \({\rm ICG}_{p_{1}p_{2}p_{3}}(1,p_{i},p_{j})\) is simply a special case of the graph \(K_{d}\otimes K_{\frac{n}{d}}\), where \(n=p_{1}p_{2}p_{3}\), \(d=p_{k}\), \(1\leq i\neq k\neq j\leq 3\), and \(\gcd(d,\frac{n}{d})=1\).
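Theorem 4.3 can be verified on small instances; the sketch below (assuming numpy; \(n=30\) and \(d=5\), so \(\gcd(d,\frac{n}{d})=1\), \(D_{5}=\{1\}\), \(D_{6}=\{1,2,3\}\)) confirms that \(K_{5}\otimes K_{6}\) and \(\mathrm{ICG}_{30}(\{1,2,3\})\) have identical spectra.

```python
import numpy as np
from math import gcd

K5 = np.ones((5, 5)) - np.eye(5)
K6 = np.ones((6, 6)) - np.eye(6)
tensor = sorted(round(x, 6) for x in np.linalg.eigvalsh(np.kron(K5, K6)))

n, D = 30, {1, 2, 3}                       # D = {d1*d2 | d1 in D_5, d2 in D_6}
A = np.array([[1 if gcd((i - j) % n, n) in D else 0 for j in range(n)]
              for i in range(n)])          # adjacency matrix of ICG_30(D)
icg = sorted(round(x, 6) for x in np.linalg.eigvalsh(A))
assert tensor == icg
```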
It is well known that applying the complement operation to a regular graph yields a regular graph whose spectrum contains at most as many distinct eigenvalues as that of the original graph. However, the resulting graph need not be connected.
Let \(\lambda_{i}\), for \(0\leq i\leq n-1\), be the eigenvalues of \(K_{d}\otimes K_{\frac{n}{d}}\), for \(\gcd(d,n/d)=1\), where \(\lambda_{0}\) is the regularity of the graph. The eigenvalues of the complement of a regular graph are \(-1-\lambda_{i}\), \(1\leq i\leq n-1\), and \(n-1-\lambda_{0}\) (see, for example, Lemma 8.5.1 from [15]), whence we conclude that the complement of \(K_{d}\otimes K_{\frac{n}{d}}\) has the spectrum \(\{d+\frac{n}{d}-2^{(1)},d-2^{(\frac{n}{d}-1)},\frac{n}{d}-2^{(d-1)},-2^{((d-1)(\frac{n}{d}-1))}\}\). Since \(d+\frac{n}{d}-2\) is the largest eigenvalue and is simple, assuming without loss of generality that \(d>\frac{n}{d}\), the chain of inequalities \(d+\frac{n}{d}-2>d-2>\frac{n}{d}-2>-2\) holds. Consequently, we can deduce that \(\overline{K_{d}\otimes K_{\frac{n}{d}}}\) constitutes a connected graph featuring four distinct eigenvalues in its spectrum. By showing that \(K_{d}\otimes K_{\frac{n}{d}}\) is not a self-complementary graph, we establish the existence of a novel class of circulant graphs possessing four distinct eigenvalues in their spectra. If we suppose that it is a self-complementary graph, then either the eigenvalue \(d-2\) or the eigenvalue \(\frac{n}{d}-2\) of \(\overline{K_{d}\otimes K_{\frac{n}{d}}}\) is equal to the eigenvalue \(1\) of \(K_{d}\otimes K_{\frac{n}{d}}\), that is, \(d=3\) or \(\frac{n}{d}=3\). In both cases, the regularity of \(\overline{K_{d}\otimes K_{\frac{n}{d}}}\) is equal to \(\frac{n}{3}+1\), while a self-complementary graph on \(n\) vertices must be \(\frac{n-1}{2}\)-regular; equality would force \(n=9\), which contradicts \(\gcd(d,\frac{n}{d})=1\). Furthermore, based on the proof of Theorem 4.3, we can deduce that \(\overline{K_{d}\otimes K_{\frac{n}{d}}}\) is isomorphic to \({\rm ICG}_{n}(D_{n}\setminus\{d_{1}d_{2}\mid d_{1}\in D_{d},\ d_{2}\in D_{\frac{n}{d}}\})\). This finding leads us to the conclusion that these graphs do not meet the criteria to be classified as unitary Cayley graphs.
In the following theorem, we present a class of circulant graphs of composite order whose spectrum contains four distinct eigenvalues, similar to the class discussed in the preceding text. However, this class of graphs has irrational eigenvalues and can be derived from the class of strongly regular graphs found in Theorem 3.3 through certain graph operations. Also, for a positive integer \(k\) and a set \(A\) of positive integers, by \(k+A\) we will mean the set \(\{k+a\mid a\in A\}\).
**Theorem 4.4**: _Let \(G(p;S)\) be a strongly regular circulant graph with a prime order \(p\) and a set of symbols \(S\subseteq\{1,\ldots,p-1\}\). Then, if \(n\) is a composite number divisible by \(p\), the spectrum of the graph \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\) contains four distinct eigenvalues._
**Proof.** Since \(G(p;S)\) is a strongly regular circulant graph with a prime order \(p\), as stated in Theorem 3.3, it follows that \(S\) is the set of quadratic residues modulo \(p\) (the case of quadratic non-residues is analogous). Let \(\lambda_{0},\ldots,\lambda_{n-1}\) denote the eigenvalues of \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\), as given by equation (1). We can deduce that \(|ip+S|=|S|=\frac{p-1}{2}\) for every \(0\leq i\leq\frac{n}{p}-1\), thereby establishing \(\lambda_{0}=\frac{n}{p}\cdot\frac{p-1}{2}\). Suppose now that \(1\leq i\leq n-1\). According to (1), it holds that
\[\lambda_{i}=\sum_{j=0}^{\frac{n}{p}-1}\sum_{s\in S}\omega_{n}^{(s+jp)i}=\sum_{j =0}^{\frac{n}{p}-1}\sum_{s\in S}\omega_{n}^{si}\omega_{n}^{jpi}=(\sum_{j=0}^{ \frac{n}{p}-1}\omega_{n}^{jpi})(\sum_{s\in S}\omega_{n}^{si})=(\sum_{j=0}^{ \frac{n}{p}-1}\omega_{\frac{n}{p}}^{ji})(\sum_{s\in S}\omega_{n}^{si}).\]
If \(\omega_{\frac{n}{p}}^{i}\neq 1\), which is the case when \(\frac{n}{p}\nmid i\), then we can observe that \(\sum_{j=0}^{\frac{n}{p}-1}\omega_{\frac{n}{p}}^{ji}=\frac{\omega_{\frac{n}{p}}^{i\frac{n}{p}}-1}{\omega_{\frac{n}{p}}^{i}-1}=0\), and hence \(\lambda_{i}=0\). Consequently, we have demonstrated that \(0\) is an eigenvalue of the graph \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\), and \(\lambda_{i}=0\) whenever \(\frac{n}{p}\nmid i\) for \(1\leq i\leq n-1\). As a result, \(|\{1\leq i\leq n-1\mid\frac{n}{p}\nmid i\}|=(n-1)-|\{1\leq i\leq n-1\mid\frac{n}{p}\mid i\}|=(n-1)-(p-1)=n-p\), which represents the multiplicity of the eigenvalue \(0\).
If \(\omega_{\frac{n}{p}}^{i}=1\), which occurs when \(\frac{n}{p}\mid i\), then we have \(\sum_{j=0}^{\frac{n}{p}-1}\omega_{\frac{n}{p}}^{ji}=\frac{n}{p}\). Hence, we can establish that \(\sum_{j=0}^{\frac{n}{p}-1}\omega_{\frac{n}{p}}^{ji}\sum_{s\in S}\omega_{n}^{si}=\frac{n}{p}\sum_{s\in S}\omega_{p}^{si_{1}}\), where \(i=\frac{n}{p}i_{1}\). According to the discussion following Theorem 3.3, if \(i_{1}\) is a quadratic residue modulo \(p\), then \(\sum_{s\in S}\omega_{p}^{si_{1}}=\tau=\frac{\sqrt{p}-1}{2}\), while if \(i_{1}\) is a quadratic non-residue modulo \(p\), then \(\sum_{s\in S}\omega_{p}^{si_{1}}=\theta=\frac{-\sqrt{p}-1}{2}\). Therefore, if \(\frac{ip}{n}\) is a quadratic residue then \(\lambda_{i}=\frac{n}{p}\tau=\frac{n(\sqrt{p}-1)}{2p}\), while if \(\frac{ip}{n}\) is a quadratic non-residue then \(\lambda_{i}=\frac{n}{p}\theta=-\frac{n(\sqrt{p}+1)}{2p}\). Since we have previously established that \(m_{\tau}=m_{\theta}=\frac{p-1}{2}\), we can conclude that \(\lambda_{i}\) also has multiplicity \(\frac{p-1}{2}\), where \(i_{1}=\frac{ip}{n}\) is a quadratic residue (respectively non-residue) modulo \(p\).
\(\blacksquare\)
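A numerical illustration of Theorem 4.4 (a sketch assuming numpy; \(p=5\), \(n=15\) and \(S=\{1,4\}\), the set of quadratic residues modulo \(5\)): the computed spectrum consists of the four values \(\frac{n(p-1)}{2p}\), \(0\), \(\frac{n(\sqrt{p}-1)}{2p}\) and \(-\frac{n(\sqrt{p}+1)}{2p}\).

```python
import numpy as np

p, n = 5, 15
S = {(x * x) % p for x in range(1, p)}     # quadratic residues modulo p
symbols = {i * p + s for i in range(n // p) for s in S}
A = np.array([[1 if (i - j) % n in symbols else 0 for j in range(n)]
              for i in range(n)])
print(sorted({round(x, 6) for x in np.linalg.eigvalsh(A)}))
# four distinct values: 6, 0, 3*(5**0.5 - 1)/2 and -3*(5**0.5 + 1)/2
```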
Let us enumerate the vertices of the graph \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\) as \(0,1,\ldots,n-1\), and partition them into the classes \(C_{0},C_{1},\ldots,C_{\frac{n}{p}-1}\) such that \(ip,ip+1,\ldots,ip+(p-1)\in C_{i}\). Let \(u\) and \(v\) be two vertices such that \(u\in C_{i}\) and \(v\in C_{j}\), for some \(0\leq i,j\leq\frac{n}{p}-1\). This implies that \(u=ip+r_{1}\) and \(v=jp+r_{2}\), for some \(0\leq r_{1},r_{2}\leq p-1\), and without loss of generality, we assume that \(i\geq j\). Consequently, it holds that \(u-v\in(i-j)p+S\) if and only if \(r_{1}-r_{2}\in S\). In particular, if we set \(u,v\in C_{i}\) and establish the mapping \(f(u)=r_{1}\) and \(f(v)=r_{2}\) (\(f\) maps the elements of the class \(C_{i}\) to their residues), it can be seen that a subgraph induced by the class \(C_{i}\) is isomorphic to \(G(p;S)\) with respect to the mapping \(f\). Moreover, we can conclude that \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\) consists of \(\frac{n}{p}\) copies of \(G(p;S)\), and two vertices from distinct copies congruent modulo \(p\) have the same neighborhood. Furthermore, we can represent each vertex \(u\) in \(C_{i}\) using a tuple where the first position corresponds to the class it belongs to (in this case, \(i\)), and the second position represents its position within the class \(C_{i}\) determined by its residue \(r\) modulo \(p\). Hence, we can state that two vertices \((i,r_{1})\) and \((j,r_{2})\) are adjacent if and only if \(r_{1}-r_{2}\in S\). In other words, in the graph \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\), the vertices \((i,r_{1})\) and \((j,r_{2})\) are adjacent if and only if \(r_{1}\) and \(r_{2}\) are adjacent in \(G(p;S)\) for all \(0\leq i,j\leq\frac{n}{p}-1\). Considering the set \(\{i\mid 0\leq i\leq\frac{n}{p}-1\}\) as the vertices of the graph \(K_{\frac{n}{p}}^{*}\), where every two vertices are adjacent, we can conclude that \((i,r_{1})\) and \((j,r_{2})\) are adjacent in \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\) if and only if \(i\) and \(j\) are adjacent in \(K_{\frac{n}{p}}^{*}\), and \(r_{1}\) and \(r_{2}\) are adjacent in \(G(p;S)\). This can be expressed as the isomorphism \(G(n;\cup_{i=0}^{\frac{n}{p}-1}(ip+S))\simeq K_{\frac{n}{p}}^{*}\otimes G(p;S)\).
In this context, we aim to discover new classes of integral circulant graphs with composite order that possess four distinct eigenvalues in their spectra.
**Theorem 4.5**: _The integral circulant graph \(\mathrm{ICG}_{n}(D)\) possesses four distinct eigenvalues in its spectrum whenever \(n\) is a composite integer and \(D=\{d\in D_{n}\mid k\nmid d\}\cup kmD_{\frac{n}{km}}\), for some divisors \(n-1\geq k\geq 2\) of \(n\) and \(\frac{n}{k}-1\geq m\geq 2\) of \(\frac{n}{k}\)._
**Proof.** First, we compute the eigenvalues \(\mu_{0},\ldots,\mu_{rs-1}\), given by equation (4), of the graph \(\mathrm{ICG}_{rs}(rD_{s})\), for arbitrary positive integers \(r\geq 2\) and \(s\geq 1\). Given that \(\gcd(\{d\mid d\in rD_{s}\})=r\geq 2\), according to Lemma 2.1, it follows that \(\mathrm{ICG}_{rs}(rD_{s})\) is disconnected and consists of \(r\) connected components, each of which is isomorphic to the complete graph \(\mathrm{ICG}_{s}(D_{s})\). Furthermore, considering that the spectrum of \(\mathrm{ICG}_{s}(D_{s})\) is \(s-1,\underbrace{-1,\ldots,-1}_{s-1}\) and the spectrum of \(\mathrm{ICG}_{rs}(rD_{s})\) is composed of \(r\) copies of the spectrum of \(\mathrm{ICG}_{s}(D_{s})\), we deduce that \(\mu_{j}=s-1\), for \(s\mid j\), and \(\mu_{j}=-1\), for \(s\nmid j\).
Let \(\lambda_{0},\ldots,\lambda_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(D)\), \(\nu_{0},\ldots,\nu_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\nmid d\})\) and \(\eta_{0},\ldots,\eta_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(kmD_{\frac{n}{km}})\), obtained by (4). It is clear that \(\lambda_{i}=\nu_{i}+\eta_{i}\), for \(0\leq i\leq n-1\). Since \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\nmid d\})\) represents a regular graph, we see that its complement \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\mid d\})\simeq\mathrm{ICG}_{n}(kD_{\frac{ n}{k}})\) has spectrum \(n-1-\nu_{0},-1-\nu_{1},\ldots,-1-\nu_{n-1}\). According to the preceding discussion, we get that \(n-1-\nu_{0}=\frac{n}{k}-1\), \(-1-\nu_{i}=\frac{n}{k}-1\), for \(\frac{n}{k}\mid i\), and \(-1-\nu_{i}=-1\), for \(\frac{n}{k}\nmid i\), \(1\leq i\leq n-1\). In conclusion, this implies that \(\nu_{0}=n-\frac{n}{k}\), \(\nu_{i}=-\frac{n}{k}\), for \(\frac{n}{k}\mid i\), and \(\nu_{i}=0\), for \(\frac{n}{k}\nmid i\), \(1\leq i\leq n-1\). Similarly, we can compute the spectrum of \(\mathrm{ICG}_{n}(kmD_{\frac{n}{km}})\) as follows: \(\eta_{i}=\frac{n}{km}-1\), for \(\frac{n}{km}\mid i\), and \(\eta_{i}=-1\), for \(\frac{n}{km}\nmid i\), \(0\leq i\leq n-1\). Finally, as \(\lambda_{i}=\nu_{i}+\eta_{i}\), for \(0\leq i\leq n-1\), it can be seen that \(\lambda_{0}=n-\frac{n}{k}+\frac{n}{km}-1\), \(\lambda_{i}=-\frac{n}{k}+\frac{n}{km}-1\), for \(\frac{n}{k}\mid i\), \(\lambda_{i}=\frac{n}{km}-1\), for \(\frac{n}{km}\mid i\), \(\frac{n}{k}\nmid i\), and \(\lambda_{i}=-1\), for \(\frac{n}{km}\nmid i\), for \(1\leq i\leq n-1\). The multiplicities of the particular eigenvalues are given by the following formula
\[\{n-\frac{n}{k}+\frac{n}{km}-1^{(1)},\frac{n}{km}-1^{(km-k)},-\frac{n}{k}+ \frac{n}{km}-1^{(k-1)},-1^{(n-mk)}\}.\]
\(\blacksquare\)
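As a numerical illustration of Theorem 4.5 (a sketch assuming \(D_{s}\) denotes the set of divisors of \(s\) smaller than \(s\)), the following snippet builds \(\mathrm{ICG}_{n}(D)\) for \(n=12\), \(k=2\), \(m=2\), i.e. \(D=\{1,3,4\}\), and recovers the predicted eigenvalues with their multiplicities.

```python
import numpy as np
from math import gcd

def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

def icg_adjacency(n, D):
    """ICG_n(D): vertices 0..n-1, u ~ v iff gcd(u - v, n) lies in D."""
    return np.array([[1 if u != v and gcd(u - v, n) in D else 0
                      for v in range(n)] for u in range(n)])

n, k, m = 12, 2, 2                                # k | n and m | (n/k)
D = {d for d in proper_divisors(n) if d % k != 0} \
    | {k * m * d for d in proper_divisors(n // (k * m))}

eig = np.rint(np.linalg.eigvalsh(icg_adjacency(n, D))).astype(int)
vals, mult = np.unique(eig, return_counts=True)
print(dict(zip(vals, mult)))
# Theorem 4.5 predicts {8: 1, 2: km - k = 2, -4: k - 1 = 1, -1: n - mk = 8}
```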
It is worth noting that the class of graphs \(\mathrm{ICG}_{p^{\alpha}}(1,p^{\alpha-1})\), as described in [6] by Theorem 4.1, represents only a specific case within the broader class of graphs defined in Theorem 4.5. In the case of \(n=p^{\alpha}\), \(k=p\), and \(m=p^{\alpha-2}\), we have \(\{d\in D_{n}\mid k\nmid d\}=\{1\}\) and \(kmD_{\frac{n}{km}}=p^{\alpha-1}D_{p}=\{p^{\alpha-1}\}\). Consequently, \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\nmid d\}\cup kmD_{\frac{n}{km}})= \mathrm{ICG}_{p^{\alpha}}(1,p^{\alpha-1})\). Similarly, for \(n=p^{\alpha}\), \(m=p\) and \(k=p^{s}\) (where \(1<s\leq\alpha-2\)), we can deduce that \(\{d\in D_{n}\mid k\nmid d\}=\{1,p,\ldots,p^{s-1}\}\). We also have \(kmD_{\frac{n}{km}}=p^{s+1}D_{p^{\alpha-s-1}}=\{p^{s+1},p^{s+2},\ldots,p^{\alpha -1}\}\) and hence \(\mathrm{ICG}_{p^{\alpha}}(D_{p^{\alpha}}\setminus\{p^{s}\})\) is an example of a graph which exhibits four distinct eigenvalues in its spectrum, as derived in [6] through Theorem 4.3. Moreover, the statement of Theorem 4.5 from [6] directly follows from Theorem 4.5 for \(n=p^{\alpha}\), \(k=p^{\alpha-2}\) and \(m=p\). Furthermore, the statement of Theorem 5.1 from [6] directly follows from Theorem 4.5 for even composite \(n\), \(k=d\) and \(m=\frac{n}{2d}\), where \(d\) is a divisor of \(\frac{n}{2}\). Thus, we have demonstrated that all the classes of circulant graphs discussed in [6] are merely specific instances of the class of graphs obtained in the theorem mentioned above.
It is important to highlight that the graph from Theorem 4.5 can be expressed as the union of two graphs: the strongly regular graph \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\nmid d\})\) and \(km\) copies of the complete graph \(K_{\frac{n}{km}}\). Furthermore, referring to equation (11) and the subsequent discussion, we observe that the graph \(\mathrm{ICG}_{n}(\{d\in D_{n}\mid k\nmid d\})\) can be represented as the tensor product of specific graphs.
In the following theorem, we present another class of integral circulant graphs of composite order whose spectra contain four distinct eigenvalues. It is noteworthy that this particular class of graphs can be viewed as a union of two distinct graphs, both of which are subgraphs of strongly regular integral circulant graphs of specific orders.
**Theorem 4.6**: _Let \(n\) be a composite even number and \(k\) be an odd divisor of \(n\), such that \(n-1\geq k\geq 2\). Then the spectrum of the integral circulant graph \(\mathrm{ICG}_{n}(D)\), for \(D=\{d\in D_{n}\ |\ d\in 2\mathbb{N},\ k\nmid d\}\cup\{d\in D_{n}\ |\ d\in 2\mathbb{N}+1,\ k\mid d\}\), possesses four distinct eigenvalues._
**Proof.** Let \(\lambda_{0},\ldots,\lambda_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(D)\), \(\nu_{0},\ldots,\nu_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(\{d\in D_{n}\ |\ d\in 2\mathbb{N},\ k\nmid d\})\) and \(\eta_{0},\ldots,\eta_{n-1}\) be the spectrum of \(\mathrm{ICG}_{n}(\{d\in D_{n}\ |\ d\in 2\mathbb{N}+1,\ k\mid d\})\), obtained by (4). It is clear that \(\lambda_{i}=\nu_{i}+\eta_{i}\), for \(0\leq i\leq n-1\). Based on Lemma 2.1, it can be observed that the graph \(\mathrm{ICG}_{n}(\{d\in D_{n}\ |\ d\in 2\mathbb{N},\ k\nmid d\})\) is isomorphic to two copies of the graph \(\mathrm{ICG}_{n/2}(\{d\in D_{n/2}\ |\ k\nmid d\})\). Additionally, as deduced from the proof of Theorem 4.5, the spectrum of this graph consists of three distinct eigenvalues: \(\{\frac{n}{2}-\frac{n}{2k},-\frac{n}{2k},0\}\). Consequently, we have \(\nu_{0}=\nu_{\frac{n}{2}}=\frac{n}{2}-\frac{n}{2k}\), \(\nu_{i}=-\frac{n}{2k}\) for \(\frac{n}{2k}\ |\ i\) and \(\frac{n}{2}\nmid i\), and \(\nu_{i}=0\) otherwise.
In the same fashion, we can conclude that \(\mathrm{ICG}_{n}(\{d\in D_{n}\ |\ d\in 2\mathbb{N}+1,\ k\mid d\})\) is isomorphic to \(k\) copies of \(\mathrm{ICG}_{\frac{n}{k}}(\{d\in D_{\frac{n}{k}}\ |\ 2\nmid d\})\). The spectrum of \(\mathrm{ICG}_{\frac{n}{k}}(\{d\in D_{\frac{n}{k}}\ |\ 2\nmid d\})\) contains three distinct eigenvalues \(\{\frac{n}{k}-\frac{n}{2k},-\frac{n}{2k},0\}\). Moreover, it holds that \(\eta_{i}=\frac{n}{2k}\), for \(\frac{n}{k}\ |\ i\), \(\eta_{i}=-\frac{n}{2k}\) for \(\frac{n}{2k}\ |\ i\) and \(\frac{n}{k}\nmid i\), and \(\eta_{i}=0\), otherwise. Therefore, we see that \(\lambda_{0}=\frac{n}{2}-\frac{n}{2k}+\frac{n}{2k}=\frac{n}{2}\), \(\lambda_{\frac{n}{2}}=\frac{n}{2}-\frac{n}{2k}-\frac{n}{2k}=\frac{n}{2}-\frac{n}{k}\), \(\lambda_{i}=-\frac{n}{k}\), for \(\frac{n}{2k}\ |\ i\), \(\frac{n}{2}\nmid i\) and \(\frac{n}{k}\nmid i\), and \(\lambda_{i}=0\) otherwise.
Observing the complement of the graph \(\mathrm{ICG}_{n}(D)\), for \(D=\{d\in D_{n}\ |\ d\in 2\mathbb{N},\ k\nmid d\}\cup\{d\in D_{n}\ |\ d\in 2\mathbb{N}+1,\ k\mid d\}\), we find that its spectrum also consists of four distinct eigenvalues: \(\mu_{0}=n-1-\frac{n}{2}=\frac{n}{2}-1\), \(\mu_{\frac{n}{2}}=-1-\frac{n}{2}+\frac{n}{k}\), \(\mu_{i}=\frac{n}{k}-1\), for \(\frac{n}{2k}\ |\ i\), \(\frac{n}{2}\nmid i\) and \(\frac{n}{k}\nmid i\), and \(\mu_{i}=-1\) otherwise.
\(\blacksquare\)
Thus far in this section, we have identified classes of graphs of composite order, both circulant and non-circulant, whose spectra exhibit four distinct eigenvalues. As a result, we are now prepared to characterize, in the subsequent statement, all circulant graphs of prime order that possess four distinct eigenvalues.
**Theorem 4.7**: _For a prime number \(p\), a circulant graph \(G(p;S)\) possesses four distinct eigenvalues in its spectrum if and only if \(S\) is the set of all cubic residues modulo \(p\) or the set of all cubic non-residues modulo \(p\), and \(p\in 3\mathbb{N}+1\)._
**Proof.** Suppose that \(G(p;S)\) exhibits four distinct eigenvalues in its spectrum. Since \(\lambda_{0}=|S|\), it follows that the sequence of eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{p-1}\), given by (1), must contain exactly three distinct values. We retain the same notation as in the proof of Theorem 3.3. Since \(S_{1}=S\), the number of distinct eigenvalues in the spectrum of \(G(p;S)\) is equal to four if and only if \(\{S_{i}\mid 1\leq i\leq p-1\}=\{S,T,R\}\) for some \(T,R\subseteq\{1,\ldots,p-1\}\). Similarly as in the proof of Theorem 3.3, we conclude that \(|S_{i}|=|S|=|T|=|R|\), for \(1\leq i\leq p-1\). Furthermore, for a given \(s\in S\), we can conclude that \(\{r_{i,s}\mid 1\leq i\leq p-1\}=\{1,\ldots,p-1\}\), which implies that \(S\cup T\cup R=\{1,\ldots,p-1\}\).
We will now show that the sets \(S\), \(T\) and \(R\) are pairwise disjoint. Suppose there exists an element \(c\) that belongs to the intersection of \(S\), \(T\), and \(R\). Following an approach similar to the proof of Theorem 3.3, we can establish that \(S=T=R=\{1,\ldots,p-1\}\). However, this leads to a contradiction. Now, without loss of generality, we can assume that there exists some \(c\in S\cap T\) with \(c\not\in R\). This means that \(c\in S_{i}\) for every \(i\), \(1\leq i\leq p-1\), such that \(S_{i}\in\{S,T\}\). Therefore, for each such \(i\), there exists \(s\in S\) such that \(c=r_{i,s}\), and consequently, \(s\equiv_{p}c\cdot i^{-1}\). This implies
that \(\{c\cdot i^{-1}\ |\ 1\leq i\leq p-1,\ S_{i}\in\{S,T\}\}\subseteq S\). Let \(1\leq j\leq p-1\) be an arbitrary index such that \(S_{j}=R\). If \(c\cdot j^{-1}\in S\), then \(j\cdot c\cdot j^{-1}\in S_{j}\), and hence \(c\in R\), leading to a contradiction. Thus, we conclude that \(\{c\cdot i^{-1}\mid 1\leq i\leq p-1,\ S_{i}\in\{S,T\}\}=S\). Let \(i\) and \(j\) be indices such that \(S_{i}=S\) and \(S_{j}=T\). Since \(S_{i}\neq S_{j}\), there exist elements \(ca^{-1}\) and \(cb^{-1}\) from the set \(\{c\cdot i^{-1}\ |\ 1\leq i\leq p-1,\ S_{i}\in\{S,T\}\}\) such that \(cia^{-1}\not\equiv_{p}cjb^{-1}\). We can distinguish three cases: \(S_{a}=S\) and \(S_{b}=S\); \(S_{a}=S\) and \(S_{b}=T\); and \(S_{a}=T\) and \(S_{b}=T\). Suppose that \(S_{a}=S\) and \(S_{b}=S\). From \(cia^{-1}\not\equiv_{p}cjb^{-1}\), we have that \(cai^{-1}\not\equiv_{p}cbj^{-1}\), which means that \(S_{a}\neq S_{b}\), as \(cai^{-1}\in S_{a}\) and \(cbj^{-1}\in S_{b}\). This leads us to a contradiction. In a similar fashion, the same conclusion can be deduced for the case \(S_{a}=T\) and \(S_{b}=T\). Finally, suppose that \(S_{a}=S\) and \(S_{b}=T\). From \(cia^{-1}\not\equiv_{p}cjb^{-1}\), we have that \(cba^{-1}\not\equiv_{p}cji^{-1}\). This implies that \(S_{b}\neq S_{j}\), as \(cba^{-1}\) belongs to \(S_{b}\) and \(cji^{-1}\) belongs to \(S_{j}\). However, this contradicts our assumption that \(S_{b}=S_{j}=T\).
Since \(|S|=|T|=|R|\), \(S\cup T\cup R=\{1,\ldots,p-1\}\) and \(S\), \(T\) and \(R\) are pairwise disjoint, we obtain that \(|S|=|T|=|R|=\frac{p-1}{3}\). Based on Lemma 3.2, the equation \(\lambda_{i}=\lambda_{j}\) implies \(\prod_{s\in S}r_{i,s}=\prod_{s\in S}r_{j,s}\), which further leads to \(\prod_{s\in S}is\equiv_{p}\prod_{s\in S}js\) and subsequently \(i^{\frac{p-1}{3}}\equiv_{p}j^{\frac{p-1}{3}}\). By examining the values of \(i^{\frac{p-1}{3}}\) modulo \(p\), which can take the values \(1\), \(x\), or \(x^{2}\), for some \(x\) satisfying \(x^{2}+x+1\equiv_{p}0\), we can deduce that if \(\lambda_{i}=\lambda_{j}\), then the values of \(i\) and \(j\) are both cubic residues modulo \(p\) (as stated in Theorem 2.3), or they satisfy the condition \(i^{\frac{p-1}{3}}\equiv_{p}j^{\frac{p-1}{3}}\equiv_{p}x\) or \(i^{\frac{p-1}{3}}\equiv_{p}j^{\frac{p-1}{3}}\equiv_{p}x^{2}\). If one of the last two conditions is satisfied, then it is easy to see that \(i\equiv_{p}xi_{1}\) or \(i\equiv_{p}x^{2}i_{1}\), respectively, for some cubic residue \(i_{1}\) modulo \(p\). Since \(S_{1}=S\), it follows that \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}\subseteq\{i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\). Similarly, we conclude that \(\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}\subseteq\{x\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\) and \(\{i\mid S_{i}=R,\ 1\leq i\leq p-1\}\subseteq\{x^{2}\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\). Given that \(|\{i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}|=|\{x\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}|=|\{x^{2}\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}|=\frac{p-1}{3}\) and \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}\cup\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}\cup\{i\mid S_{i}=R,\ 1\leq i\leq p-1\}=\{1,\ldots,p-1\}\), we have that \(\{i\mid S_{i}=S,\ 1\leq i\leq p-1\}=\{i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\), \(\{i\mid S_{i}=T,\ 1\leq i\leq p-1\}=\{x\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\) and \(\{i\mid S_{i}=R,\ 1\leq i\leq p-1\}=\{x^{2}\cdot i\mid\left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\).
Suppose there exists \(s\in S\) such that \(\left[\frac{s}{p}\right]_{3}=1\). For every \(1\leq i\leq p-1\) such that \(\left[\frac{i}{p}\right]_{3}=1\), we have that \(S_{i}=S\) and \(r_{i,s}\in S_{i}\), thereby implying that \(r_{i,s}\in S\). In other words, we have that \(\{r_{i,s}\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\subseteq S\). Since \(\left[\frac{is}{p}\right]_{3}=\left[\frac{i}{p}\right]_{3}\left[\frac{s}{p}\right]_{3}=1\), we conclude that \(\{r_{i,s}\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\subseteq\{i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\). Moreover, from the fact that \(i\neq j\) implies \(r_{i,s}\neq r_{j,s}\), for \(1\leq i,j\leq p-1\), we further get that \(\{r_{i,s}\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}=\{i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\). Finally, from the preceding discussion it can be concluded that \(\{i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\subseteq S\), and since \(|\{i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}|=|S|=\frac{p-1}{3}\), it holds that \(\{i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}=S\). If we assume that there exists \(s\in S\) such that \(s\equiv_{p}x\cdot s_{1}\) or \(s\equiv_{p}x^{2}\cdot s_{1}\), where \(\left[\frac{s_{1}}{p}\right]_{3}=1\), it can be proven in a similar fashion that \(S=\{x\cdot i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\) or \(S=\{x^{2}\cdot i\ |\ \left[\frac{i}{p}\right]_{3}=1,\ 1\leq i\leq p-1\}\). For all the obtained cases concerning the values of \(S\), it can be concluded that the graphs \(G(p;S)\) are mutually isomorphic.
\(\blacksquare\)
We can proceed with determining the spectrum of the graph \(G(p;S)\), where \(p\) is a prime number of the form \(3k+1\) and \(S\) represents the set of cubic residues modulo \(p\). In the previous theorem, we have established that the eigenvalues \(\lambda_{1},\ldots,\lambda_{p-1}\) of \(G(p;S)\) include three distinct values, denoted as \(\theta\), \(\tau\), and \(\eta\). These values can be calculated by considering the following sums: \(\theta=\sum_{s\in S}\omega_{p}^{s}\), \(\tau=\sum_{s\in S}\omega_{p}^{xs}\), and \(\eta=\sum_{s\in S}\omega_{p}^{x^{2}s}\), where \(x\) represents a cubic non-residue
modulo \(p\). Given that \(p\in 3\mathbb{N}+1\), according to a well-known theorem of Fermat, there exist unique integers \(a\) and \(b\), up to sign, such that \(4p=a^{2}+27b^{2}\) (for more details see [8]). The sums \(\theta\), \(\tau\), and \(\eta\) are closely related to the so-called cubic Gauss sums and represent the roots of the polynomial \(t^{3}+t^{2}-\frac{p-1}{3}t-\frac{ap+3p-1}{27}\) (this result can be found in [18, p. 460-461]).
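As a concrete instance, \(p=7\) yields the cubic residues \(S=\{1,6\}\), so that \(G(7;S)\simeq C_{7}\); the following numerical sketch (with the sign of \(a\) fixed by this example) confirms that the three non-trivial eigenvalues are the roots of the stated cubic.

```python
import numpy as np

p = 7                                            # p = 3k + 1
S = {pow(x, 3, p) for x in range(1, p)}          # cubic residues: {1, 6}
A = np.zeros((p, p))
for u in range(p):
    for s in S:
        A[u, (u + s) % p] = A[(u + s) % p, u] = 1

print(np.unique(np.round(np.linalg.eigvalsh(A), 6)))   # four distinct values, largest = |S| = 2

# 4p = a^2 + 27 b^2 gives a = 1 (up to sign) for p = 7; the non-trivial
# eigenvalues are the roots of t^3 + t^2 - ((p-1)/3) t - (a p + 3 p - 1)/27.
a = 1
print(np.sort(np.roots([1.0, 1.0, -(p - 1) / 3.0, -(a * p + 3 * p - 1) / 27.0]).real))
```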
### Construction of regular graphs with four distinct eigenvalues using line operator
In this section, we find novel classes of graphs whose spectra possess four distinct eigenvalues by applying the line graph operator \(L\) to the class of unitary Cayley graphs. First, we characterize all unitary Cayley graphs whose spectra exhibit four distinct eigenvalues.
**Theorem 4.8**: _The unitary Cayley graph \(X_{n}\) has four distinct eigenvalues if and only if \(n\) is the product of two distinct primes._
**Proof.** Suppose that \(X_{n}\) has exactly four distinct eigenvalues and that there exists a prime \(p_{i}\) in the prime factorization of \(n=p_{1}^{\alpha_{1}}\cdots p_{k}^{\alpha_{k}}\) such that \(\alpha_{i}\geq 2\). If \(k=1\), according to Theorem 3.4, \(X_{n}\) is strongly regular, which is a contradiction, so we assume that \(k\geq 2\).
Let \(S=\{p_{1}^{\beta_{1}}\cdots p_{k}^{\beta_{k}}<n\mid\beta_{i}\in\{\alpha_{i}-1, \alpha_{i}\}\}\) and \(m=p_{1}p_{2}\cdots p_{k}\). Since \(k\geq 2\), we obtain \(|S|\geq 3\). For \(j\in S\), where \(j=p_{1}^{\beta_{1}}\cdots p_{k}^{\beta_{k}}\) we conclude that \(t_{n,j}=\frac{n}{\gcd(n,j)}=p_{1}^{\alpha_{1}-\beta_{1}}\cdots p_{k}^{\alpha_ {k}-\beta_{k}}\), \(\varphi(t_{n,j})=\prod_{\beta_{i}=\alpha_{i}-1}(p_{i}-1)\) and
\[\lambda_{j}=c(j,n) = \mu(t_{n,j})\prod_{\beta_{i}=\alpha_{i}-1}p_{i}^{\alpha_{i}-1} \prod_{\beta_{i}=\alpha_{i}}p_{i}^{\alpha_{i}-1}(p_{i}-1)=\mu(t_{n,j})\frac{n} {m}\prod_{\beta_{i}=\alpha_{i}}(p_{i}-1)\neq 0.\]
By the definition of \(j\in S\), there exists a \(k-\)tuple \((\beta_{1},\ldots,\beta_{k})\) such that \(j=p_{1}^{\beta_{1}}\cdots p_{k}^{\beta_{k}}\), so it can be written in the following form \(j=j(\beta_{1},\ldots,\beta_{k})\), for a given \(n\). Also, by \(j_{i}\), \(1\leq i\leq k\), we denote the index \(j\in S\) such that \(j_{i}=j(\alpha_{1}-1,\alpha_{2}-1,\ldots,\alpha_{i}-1,\alpha_{i+1},\alpha_{i+ 2},\ldots,\alpha_{k})\). It is easy to see that for \(1\leq i_{1}<i_{2}\leq k\), it holds that \(j_{i_{1}}>j_{i_{2}}\) and \(|\lambda_{j_{i_{1}}}|>|\lambda_{j_{i_{2}}}|\). Therefore, if \(k\geq 3\) we have at least three distinct values among the eigenvalues \(\lambda_{j_{1}},\lambda_{j_{2}},\ldots,\lambda_{j_{k}}\) and together with the regularity \(\lambda_{0}\) and \(\lambda_{1}=\mu(n)=0\), we conclude that \(X_{n}\) has at least five eigenvalues, which is a contradiction. If \(k=2\) then it is obvious that \(|S|=3\). These eigenvalues are equal to \(\frac{n}{m},-\frac{n}{m}(p_{1}-1),-\frac{n}{m}(p_{2}-1)\), so they are mutually distinct. Similarly, together with the regularity \(\lambda_{0}\) and \(\lambda_{1}=\mu(n)=0\), we conclude that \(X_{n}\) has at least five eigenvalues, which is a contradiction.
Now, suppose that \(n\) is a square-free number. If \(n\) is prime, then \(X_{n}\) has exactly two distinct eigenvalues, which is a contradiction. So, we assume that \(n\) has the following prime factorization \(n=p_{1}\cdots p_{k}\), for \(k\geq 2\). Now, after a calculation we obtain that \(t_{n,p_{i}}=\frac{n}{\gcd(n,p_{i})}=\frac{n}{p_{i}}\) and \(\lambda_{p_{i}}=(-1)^{k-1}(p_{i}-1)\), for \(1\leq i\leq k\). Furthermore, for \(1\leq i,j\leq k\) and \(i\neq j\), it is clear that \(\lambda_{p_{i}}\neq\lambda_{p_{j}}\). If \(k\geq 3\), we have at least three distinct values among the eigenvalues \(\lambda_{p_{1}},\lambda_{p_{2}},\ldots,\lambda_{p_{k}}\) and together with the regularity \(\lambda_{0}\) and \(\lambda_{1}=\mu(n)=(-1)^{k}\), we conclude that \(X_{n}\) has at least five eigenvalues, which is a contradiction.
For \(k=2\), \(n=p_{1}p_{2}\) and \(\gcd(j,n)=1\), we see that \(t_{n,j}=p_{1}p_{2}\) and \(\lambda_{j}=1\). If \(\gcd(j,n)=p_{1}\) then \(t_{n,j}=p_{2}\) and \(\lambda_{j}=-(p_{1}-1)\). Similarly, if \(\gcd(j,n)=p_{2}\) then \(\lambda_{j}=-(p_{2}-1)\). Thus, we conclude that together with the regularity \(\lambda_{0}\), \(X_{n}\) has exactly four distinct eigenvalues.
\(\blacksquare\)
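Theorem 4.8 is easily probed numerically; the following sketch (test orders chosen by us) counts the distinct eigenvalues of \(X_{n}\) for several values of \(n\).

```python
import numpy as np
from math import gcd

def unitary_cayley_eigenvalues(n):
    """Distinct (integer) eigenvalues of the unitary Cayley graph X_n."""
    A = np.array([[1 if gcd(u - v, n) == 1 else 0 for v in range(n)]
                  for u in range(n)])
    return np.unique(np.rint(np.linalg.eigvalsh(A)).astype(int))

for n in (15, 35, 12, 30, 16):
    vals = unitary_cayley_eigenvalues(n)
    print(n, len(vals), vals)
# 15 = 3*5 and 35 = 5*7 give exactly four distinct eigenvalues;
# 12, 30 and 16 do not.
```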
**Theorem 4.9**: _Let \(X_{n}\) be a unitary Cayley graph of order \(n\). Then the line graph of \(X_{n}\) has exactly four distinct eigenvalues if and only if either \(n=2p\) or \(n=p^{\alpha}\), for some prime number \(p\geq 3\) and \(\alpha\geq 2\)._
**Proof.** Suppose that \(L(X_{n})\) has exactly four distinct eigenvalues. According to Theorem 3.5, it holds that \(X_{n}\) has either three or four distinct eigenvalues. Now, suppose that \(X_{n}\) has four distinct eigenvalues. According to Theorem 4.8, \(n\) is equal to \(p_{1}p_{2}\) for some primes \(p_{1}\) and \(p_{2}\), and the spectrum of \(X_{n}\) consists of the distinct values \(\{(p_{1}-1)(p_{2}-1),-(p_{1}-1),-(p_{2}-1),1\}\). By Theorem 3.5, every eigenvalue \(\lambda_{i}\) of \(L(X_{n})\) then satisfies \(\lambda_{i}\in\{2(p_{1}-1)(p_{2}-1)-2,(p_{1}-1)(p_{2}-2)-2,(p_{1}-2)(p_{2}-1)-2,(p_{1}-1)(p_{2}-1)-1,-2\}\). As \(p_{1}<p_{2}\), \(L(X_{n})\) can have four distinct eigenvalues if and only if \((p_{1}-2)(p_{2}-1)-2=-2\), that is, \(p_{1}=2\).
If \(X_{n}\) is strongly regular, then \(n=p^{\alpha}\), for some prime \(p\) and \(\alpha\geq 2\), and the eigenvalues of \(X_{n}\) are \(\{p^{\alpha-1}(p-1),0,-p^{\alpha-1}\}\) (according to Theorem 3.6). From Theorem 3.5, we get that any eigenvalue of \(L(X_{n})\) takes one of the following values: \(\{2p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-2)-2,-2\}\). Therefore, \(L(X_{n})\) has exactly four distinct eigenvalues if and only if \(p^{\alpha-1}(p-2)-2\neq-2\), i.e. \(p\neq 2\).
\(\blacksquare\)
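Both cases of Theorem 4.9 can be verified numerically through the incidence matrix \(N\) of \(X_{n}\), using the standard identity \(A(L(G))=N^{T}N-2I\); the sketch below (test orders chosen by us) recovers exactly four distinct eigenvalues for \(n=2p\) with \(p=5\) and for \(n=3^{2}\).

```python
import numpy as np
from math import gcd

def unitary_cayley(n):
    return np.array([[1 if gcd(u - v, n) == 1 else 0 for v in range(n)]
                     for u in range(n)])

def line_graph_eigenvalues(A):
    """Distinct eigenvalues of L(G) from the incidence matrix N of G."""
    n = len(A)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]
    N = np.zeros((n, len(edges)))
    for e, (u, v) in enumerate(edges):
        N[u, e] = N[v, e] = 1
    M = N.T @ N - 2 * np.eye(len(edges))        # adjacency matrix of L(G)
    return np.unique(np.rint(np.linalg.eigvalsh(M)).astype(int))

print(line_graph_eigenvalues(unitary_cayley(10)))   # n = 2p, p = 5: [-2  1  3  6]
print(line_graph_eigenvalues(unitary_cayley(9)))    # n = p^a = 3^2: [-2  1  4 10]
```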
We will show that the newly found classes of graphs with four eigenvalues (determined in Theorem 4.9) are not circulants. Specifically, we establish that \(L(X_{2p})\) and \(L(X_{p^{\alpha}})\) are not circulants, with the exception of \(L(X_{6})\), where \(p\geq 3\) and \(\alpha\geq 2\).
First, we show that \(L(X_{2p})\), \(p\geq 3\), is not circulant. According to Theorem 3.7, it is sufficient to prove that \(X_{2p}\), \(p\geq 3\), is neither the cycle \(C_{2p}\) (with the exception of \(C_{6}\)) nor a complete bipartite graph \(K_{a,b}\) with \(\gcd(a,b)=1\). If \(X_{2p}\simeq C_{2p}\) for some \(p\geq 3\), then these graphs have an equal number of edges, so the equality \(\frac{2p\cdot\varphi(2p)}{2}=2p\) holds, which is satisfied only for \(p=3\). In the second case, if \(X_{2p}\simeq K_{a,b}\), from the equality of the numbers of edges we obtain that \(p(p-1)=ab\), whence without loss of generality we conclude that \(p\mid a\). But from the equality of the orders of the graphs it follows that \(a+b=2p\), and this is true only for \(a=b=p\). This is a contradiction, since \(a\) and \(b\) must be relatively prime.
Now, suppose that \(L(X_{p^{\alpha}})\), \(p\geq 3\) and \(\alpha\geq 2\), is circulant. If \(X_{p^{\alpha}}\simeq C_{p^{\alpha}}\) for some \(p\geq 3\) and \(\alpha\geq 2\), then these graphs have an equal number of edges, so the equality \(\frac{p^{\alpha}\varphi(p^{\alpha})}{2}=p^{\alpha}\) holds, which is never satisfied, because the left-hand side is always greater than the right-hand side for \(p\geq 3\) and \(\alpha\geq 2\). In the second case, if \(X_{p^{\alpha}}\simeq K_{a,b}\), from the equality of the numbers of edges we obtain that \(\frac{p^{2\alpha-1}(p-1)}{2}=ab\), whence without loss of generality we conclude that \(p^{2\alpha-1}\mid a\), since \(\gcd(a,b)=1\). From the equality of the orders of the graphs it follows that \(a+b=p^{\alpha}\), which is a contradiction, since \(a+b\geq p^{2\alpha-1}>p^{\alpha}\).
By the following theorem we find new classes of regular graphs with four different eigenvalues.
**Theorem 4.10**: _Let \(X_{n}\) be a unitary Cayley graph of order \(n\). Then \(L^{2}(X_{n})\) has exactly four distinct eigenvalues if and only if \(n\) is either a prime greater than \(3\), a power of two greater than \(4\), or equal to \(6\)._
**Proof.** Suppose that \(L^{2}(X_{n})\) has exactly four distinct eigenvalues. This means \(L(X_{n})\) has either three or four distinct eigenvalues.
If \(L(X_{n})\) is strongly regular, according to the proof of Theorem 3.6 we conclude that the spectrum of \(L(X_{n})\) is \(\{2(p-1)-2,p-4,-2\}\) if \(n\) is a prime greater than \(3\), and \(\{2^{\alpha}-2,2^{\alpha-1}-2,-2\}\) if \(n\) is a power of two greater than \(2\). Furthermore, from Theorem 3.5 we see that the eigenvalues of \(L^{2}(X_{n})\) are \(\{4p-10,3p-10,2p-8,-2\}\) if \(n\) is a prime greater than \(3\), and \(\{2^{\alpha+1}-6,3\cdot 2^{\alpha-1}-6,2^{\alpha}-6,-2\}\) if \(n\) is a power of two greater than \(2\). Therefore, \(L^{2}(X_{n})\) has four distinct eigenvalues if \(n\) is either a prime greater than \(3\) or a power of two greater than \(4\).
If \(L(X_{n})\) has four distinct eigenvalues, then we distinguish two cases depending on the value of \(n\). For \(n=2p\), where \(p\geq 3\) is prime, the distinct eigenvalues of \(L(X_{n})\) are \(\{2p-4,p-4,p-2,-2\}\), according to Theorem 4.9. Thus, every eigenvalue \(\lambda_{i}\) of \(L^{2}(X_{n})\) satisfies \(\lambda_{i}\in\{2(2p-4)-2,3p-10,3p-8,2p-8,-2\}\), whence we conclude that \(L^{2}(X_{n})\) has four distinct values only if \(p=3\), that is, for \(n=6\). If \(n=p^{\alpha}\), for some prime \(p\geq 3\) and \(\alpha\geq 2\), then the eigenvalues of \(L(X_{n})\) are \(\{2p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-2)-2,-2\}\), according to Theorem 4.9. Therefore, every eigenvalue \(\lambda_{i}\) of \(L^{2}(X_{n})\) satisfies \(\lambda_{i}\in\{4p^{\alpha-1}(p-1)-6,3p^{\alpha-1}(p-1)-6,p^{\alpha-1}(3p-4)-6,2p^{\alpha-1}(p-1)-6,-2\}\). Any two values from this set are mutually distinct, since \(p\geq 3\), whence we conclude that \(L^{2}(X_{n})\) does not have four distinct eigenvalues.
\(\blacksquare\)
**Theorem 4.11**: _Let \(X_{n}\) be a unitary Cayley graph of order \(n\). Then \(L^{3}(X_{n})\) has exactly four distinct eigenvalues if and only if \(n=6\)._
**Proof.** Suppose that \(L^{3}(X_{n})\) has four distinct eigenvalues. According to Theorem 3.5, the graph \(L^{2}(X_{n})\) is strongly regular or its spectrum has four distinct eigenvalues.
If \(L^{2}(X_{n})\) is strongly regular, using Theorem 3.8 we have that \(X_{n}\simeq X_{4}\simeq C_{4}\) and \(L^{k}(X_{4})\simeq C_{4}\) (for \(k\geq 1\)), which is strongly regular; hence \(L^{3}(X_{4})\) has only three distinct eigenvalues, a contradiction. Using Theorem 4.10, if \(L^{2}(X_{n})\) has four distinct eigenvalues, then either \(n\) is a prime greater than \(3\), or \(n\) is a power of \(2\) greater than \(4\), or \(n=6\).
For prime \(p>3\), according to the proof of Theorem 4.10, the spectrum of \(L^{2}(X_{n})\) is equal to \(\{4p-10,3p-10,2p-8,-2\}\). Moreover, every eigenvalue \(\lambda_{i}\) of \(L^{3}(X_{n})\) satisfies \(\lambda_{i}\in\{8p-22,7p-22,6p-20,4p-14,-2\}\). Any two values from this set are mutually distinct, since \(p>3\), whence we conclude that \(L^{3}(X_{p})\) has five distinct eigenvalues.
For \(n=2^{\alpha}\) and \(\alpha\geq 3\), according to the proof of Theorem 4.10, the spectrum of \(L^{2}(X_{n})\) is equal to \(\{2^{\alpha+1}-6,3\cdot 2^{\alpha-1}-6,2^{\alpha}-6,-2\}\). Furthermore, every eigenvalue \(\lambda_{i}\) of \(L^{3}(X_{n})\) satisfies \(\lambda_{i}\in\{2^{\alpha+2}-14,7\cdot 2^{\alpha-1}-14,3\cdot 2^{\alpha}-14,2^{\alpha+1}-10,-2\}\). Any two values from this set are mutually distinct, since \(\alpha\geq 3\), whence we conclude that \(L^{3}(X_{n})\) has five distinct eigenvalues.
For \(n=6\), according to the proofs of Theorem 4.10 and Theorem 3.5, we see that both spectra of the graphs \(L^{2}(X_{6})\) and \(L^{3}(X_{6})\) are equal to \(\{2,1,-1,-2\}\). \(\blacksquare\)
It can easily be seen that the graph \(X_{6}\simeq C_{6}\) is the only one that can be found in the intersection of the classes mentioned in Theorems 4.9 and 4.10. Indeed, by Theorem 4.9 we found two classes of graphs that have four distinct values in their spectra: \(L(X_{2p})\), for prime \(p\geq 3\), with the set of eigenvalues \(\{2p-4,p-4,p-2,-2\}\), and \(L(X_{p^{\alpha}})\), for prime \(p\geq 3\) and \(\alpha\geq 2\), with the set of eigenvalues \(\{2p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-1)-2,p^{\alpha-1}(p-2)-2,-2\}\). On the other hand, by Theorem 4.10 we also found two classes: \(L^{2}(X_{p})\), for prime \(p>3\), with the set of eigenvalues \(\{4p-10,3p-10,2p-8,-2\}\), and \(L^{2}(X_{2^{\alpha}})\), for \(\alpha\geq 3\), with the set of eigenvalues \(\{2^{\alpha+1}-6,3\cdot 2^{\alpha-1}-6,2^{\alpha}-6,-2\}\). If there are graphs \(G_{1}\) and \(G_{2}\) that belong to the classes obtained by Theorem 4.9 and Theorem 4.10, respectively, such that \(G_{1}\simeq G_{2}\), then they must be cospectral. First, we conclude that \(G_{1}\) has two odd and two even, or three even and one odd, distinct values in its spectrum. On the other hand, \(G_{2}\) has four even, or three even and one odd, distinct values in its spectrum. By comparing the parity of the distinct values in the spectra of \(G_{1}\) and \(G_{2}\), we conclude that they must have three even and one odd distinct values in their spectra. This means that there exist primes \(p\geq 3\), \(q>3\) and an integer \(\alpha\geq 2\) such that \(G_{1}\simeq L(X_{p^{\alpha}})\) and \(G_{2}\simeq L^{2}(X_{q})\). In this case, the regularities and the only odd eigenvalues of the graphs \(G_{1}\) and \(G_{2}\) must be equal, so the following equalities hold: \(2p^{\alpha-1}(p-1)-2=4q-10\) and \(p^{\alpha-1}(p-2)-2=3q-10\). By subtracting the second equation from the first, we obtain \(p^{\alpha}=q\), which is never satisfied, and we arrive at a contradiction.
We can show that the classes of graphs \(L^{2}(X_{p})\) and \(L^{2}(X_{2^{\alpha}})\) (for prime \(p>3\) and integer \(\alpha\geq 3\)), obtained in Theorem 4.10, are not circulants. According to Theorem 3.7, we easily see that the graphs \(L(X_{p})\) and \(L(X_{2^{\alpha}})\) can be neither cycles nor complete bipartite graphs. Indeed, the eigenvalues of \(L(X_{p})\) and \(L(X_{2^{\alpha}})\) are \(\{2p-4,p-4,-2\}\) and \(\{2^{\alpha}-2,2^{\alpha-1}-2,-2\}\), respectively, and all of them are integers not equal to \(0\). On the other hand, a complete bipartite graph of order \(\frac{p(p-1)}{2}\) has \(0\) as an eigenvalue, and the cycle \(C_{\frac{p(p-1)}{2}}\) has the irrational eigenvalue \(2\cos(\frac{4\pi}{p(p-1)})\), for \(p>3\).
## 5 Concluding remarks
In this paper, we establish novel classes of strongly regular graphs and graphs whose spectra comprise four distinct eigenvalues. These classes encompass both circulant and non-circulant connected graphs. We achieve this by employing specific constructions based on graph operations such as the line operator, the tensor product, and the complement, starting from graphs that possess two or three distinct eigenvalues in their spectra. We further characterize the class of circulant graphs of prime order, including both strongly regular graphs and graphs with spectra exhibiting four distinct eigenvalues. These findings represent an advancement in the study of characterizing strongly regular circulant graphs, which was initiated in [7] and further extended in [3]. It has been proven that circulant graphs with integral spectra that are strongly regular must have composite order. Moreover, it has been noted that the task of classifying the class of integral circulant graphs with four distinct eigenvalues is considerably more challenging than characterizing strongly regular integral circulant graphs, as it likely involves the examination of a significantly larger number of cases. Some of the proofs require a thorough case analysis, specifically when dealing with the characterization of circulant graphs possessing three or four eigenvalues.
However, the problem of finding a characterization of circulant graphs with four distinct eigenvalues is still interesting if the discussion is restricted to some special classes of circulant graphs of prescribed order. Therefore, we believe that it is worthwhile to carry out further investigation in the class of circulant graphs which are incidence graphs of some symmetric block designs (these graphs have four distinct eigenvalues, as elaborated in [15]). Moreover, we think that an effective approach for characterizing such graphs lies in utilizing the statement that a graph is the incidence graph of a symmetric block design if and only if it is a distance-regular graph with diameter three [15]. The classification of all circulant graphs with a prescribed diameter of three represents the most challenging aspect of the proof. To illustrate this assertion, we can refer to the proof of Theorem 14 in [2], which encompasses numerous cases considering very specific classes of integral circulant graphs. Moreover, finding graphs with maximal diameter in the class of integral circulant graphs with a prescribed order is a more demanding problem, partially addressed in [4]. However, it is possible to characterize certain subclasses of circulant graphs which are incidence graphs of symmetric block designs. Specifically, it has been proven that the unitary Cayley graph \(X_{n}\) is the incidence graph of a symmetric block design if and only if \(n=2p\), for some \(p>2\) (the parameters \((p,p-1,p-2)\) determine the symmetric block design).
**Acknowledgment.**
This research was supported by the research project of the Ministry of Education, Science and Technological Development of the Republic of Serbia (number 451-03-47/2023-01/200124). |
2302.09321 | Implicit Solvent Approach Based on Generalised Born and Transferable
Graph Neural Networks for Molecular Dynamics Simulations | Molecular dynamics (MD) simulations enable the study of the motion of small
and large (bio)molecules and the estimation of their conformational ensembles.
The description of the environment (solvent) has thereby a large impact.
Implicit solvent representations are efficient but in many cases not accurate
enough (especially for polar solvents such as water). More accurate but also
computationally more expensive is the explicit treatment of the solvent
molecules. Recently, machine learning (ML) has been proposed to bridge the gap
and simulate in an implicit manner explicit solvation effects. However, the
current approaches rely on prior knowledge of the entire conformational space,
limiting their application in practice. Here, we introduce a graph neural
network (GNN) based implicit solvent that is capable of describing explicit
solvent effects for peptides with different composition than contained in the
training set. | Paul Katzberger, Sereina Riniker | 2023-02-18T12:47:23Z | http://arxiv.org/abs/2302.09321v1 | Implicit Solvent Approach Based on Generalised Born and Transferable Graph Neural Networks for Molecular Dynamics Simulations
###### Abstract
Molecular dynamics (MD) simulations enable the study of the motion of small and large (bio)molecules and the estimation of their conformational ensembles. The description of the environment (solvent) has thereby a large impact. Implicit solvent representations are efficient but in many cases not accurate enough (especially for polar solvents such as water). More accurate but also computationally more expensive is the explicit treatment of the solvent molecules. Recently, machine learning (ML) has been proposed to bridge the gap and simulate in an implicit manner explicit solvation effects. However, the current approaches rely on prior knowledge of the entire conformational space, limiting their application in practice. Here, we introduce a graph neural network (GNN) based implicit solvent that is capable of describing explicit solvent effects for peptides with different composition than contained in the training set.
## 1 Introduction
Molecular dynamics (MD) simulations employ Newton's equation of motion to study the dynamics of (bio)molecular systems [1]. In recent years, MD has not only become a pivotal tool for the investigation of biomolecular processes such as membrane permeation of drug molecules or protein folding [2] but has also accelerated and supported drug discovery [3]. The conformational ensembles of molecules are strongly influenced by the surrounding medium. While intramolecular hydrogen bonds (H-bonds) are favoured in vacuum and apolar solvents (e.g., chloroform), they are generally disfavoured in polar solvents (e.g., water) [4]. Such effects of the (local) environment can be incorporated by explicitly simulating solvent molecules. This approach provides good accuracy since it includes both short-range and long-range interactions, both of which are needed in order to describe an accurate conformational ensemble [5]. However, explicit-solvent simulations come at the cost of a substantially increased number of degrees of freedom (DOF) in the system, which results in substantially higher computational costs as well as slower effective sampling [6]. In addition, the potential energy of a single solute conformation is no longer an instantaneous property in explicit-solvent simulations, as an infinite number of solvent configurations (i.e., arrangements of the solvent molecules around the solute) exist for a single solute conformation. Thus, the prediction of the potential energy of a single conformer requires integrating out the contributions of individual solvent configurations [5].
To simultaneously retain the instantaneous nature of the potential energy and to reduce the number of DOF in the system, implicit-solvent methods have been developed [7]. These approaches aim at modelling the solute-solvent interactions in an implicit manner by predicting the mean solvation energy and forces for a given solute conformation [6]. The most common implicit-solvent approach replaces the solvent by a continuum electrostatics description. Examples are Poisson-Boltzmann (PB) [8], generalised Born (GB) [9], or fast analytical continuum treatments of solvation (FACTS) [10]. Note that GB and FACTS models are approximations of the PB model. However, current implicit-solvent models do not accurately reproduce the short-range effects of solvent molecules, and thus often do not reproduce the secondary
structures of peptides correctly [11]. Only very recently, the use of machine learning (ML) approaches have started to be explored for implicit-solvent models. Chen _et al._[12] used graph neural networks (GNN) to reproduce the potential energies and forces of two small peptides. This method is, however, not yet practical because the full conformational ensemble needs to be generated first via explicit-solvent simulations, before the GNN model can be trained to reproduce it. The GB models remain therefore the most commonly used implicit solvent approach to date.
The GB equation was first introduced by Still _et al._[9] and defines the solvation free energy \(\Delta G\) in terms of the atomic partial charges \(q_{i}\) and \(q_{j}\), the distance \(r_{ij}\), and the effective Born radii \(R_{i}\) and \(R_{j}\) of two atoms \(i\) and \(j\).
\[\Delta G=-\frac{1}{2}\left(\frac{1}{\epsilon_{in}}-\frac{1}{\epsilon_{out}}\right)\sum_{i,j}\frac{q_{i}q_{j}}{\sqrt{r_{ij}^{2}+R_{i}R_{j}\exp\left(\frac{-r_{ij}^{2}}{4R_{i}R_{j}}\right)}} \tag{1}\]
The Born radii are further calculated using the Coulomb integral \(I_{i}\) and the intrinsic radius \(\rho_{i}\),
\[R_{i}=(\rho_{i}^{-1}-I_{i})^{-1}. \tag{2}\]
The Coulomb integral can be derived from the Coulomb field approximation (CFA) and the intrinsic radius,
\[I_{i}=\frac{1}{4\pi}\int_{\Omega,r>\rho_{i}}\frac{1}{r^{4}}d^{3}r. \tag{3}\]
Typically, the integral is solved analytically by using a pairwise de-screening approximation [13]. While the functional form of different GB models is the same, the manner in which this integral is calculated distinguishes them: GB-HCT [13], GB-OBC [14], or GB-Neck [15]. In addition, ML has also been proposed to directly approximate reference Born radii calculated by PB calculations [16, 17].
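As an illustration of Eq. (1), the following NumPy sketch (ours, not from the original work) evaluates the GB solvation energy for given charges, pairwise distances, and effective Born radii; the Coulomb constant prefactor is omitted, so consistent units are left to the caller.

```python
import numpy as np

def gb_still_energy(q, r, R, eps_in=1.0, eps_out=78.5):
    """Generalised Born solvation energy of Eq. (1) (Still et al.).

    q : (N,) partial charges; r : (N, N) pairwise distances;
    R : (N,) effective Born radii.
    """
    RR = np.outer(R, R)
    f = np.sqrt(r**2 + RR * np.exp(-(r**2) / (4.0 * RR)))
    # the i == j terms reduce to q_i^2 / R_i, i.e. the Born self-energies
    return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * np.sum(np.outer(q, q) / f)
```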
The calculation of the effective Born radii in standard GB models can be thought of as a one-pass message-passing algorithm, where information is sent from each node to all other nodes within a cutoff. GNNs are therefore a natural choice to develop GB-based models further. When multiple passes are used, GNNs can aggregate information, intrinsically encode the geometric environment, and thereby go beyond the pairwise de-screening approximation. In general, it is mainly the local environment, dominated by short-range interactions, which is expected to benefit from this description, as the GB models can describe long-range interactions well. Therefore, the robustness and quality of a GNN-based implicit solvent could be enhanced by using a \(\Delta\)-learning [18] approach rather than predicting the solvation forces directly with the GNN. This means that rather than replacing the GB model entirely, a ML correction could be added to a base model (i.e., GB-Neck). In related fields such as ML for QM/MM simulations of condensed-phase systems, a similar approach has been demonstrated to lead to stable simulations [19]. In addition, the \(\Delta\)-learning scheme could allow smaller cutoffs for the GNN, as the long-range interactions are already well described by the GB model, reducing the computational cost.
Here, we use this idea to develop a ML-based implicit solvent approach, which can be used to simulate molecules without the need for a complete conformational ensemble from explicit-solvent simulations for training. The training set for our ML approach consists of subsets of conformers extracted from explicit-solvent simulations (here with the TIP5P water model [20]), for which the mean solvation forces are calculated by keeping the solute position constrained and averaging over a multitude of solvent configurations. Note that this procedure to generate the solvation forces differs from the one proposed by Chen _et al._[12], as here the averaging is performed over multiple solvent configurations. The resulting mean solvation forces are used in a next step to train the GNN.
To assess the performance of our approach, test systems were chosen that are (a) interesting for the application of implicit solvents, (b) challenging for current implicit solvents, (c) have fast kinetics, and (d) can be analysed directly without dimensionality reduction or Markov state modelling. The aforementioned criteria are met by a class of small peptides, which are able to form a salt bridge. The importance of salt bridges for the stability of proteins [21] makes them an interesting target for implicit solvent models, while previous studies by Nguyen _et al._[22] have shown that GB-based implicit solvents could not accurately describe them. Although the parameters of the GB-Neck2 model [22] have been manually adjusted such that the height of the energy barrier matched a TIP3P solvent [23] simulation,
this implicit solvent failed to reproduce other key characteristics of the system, such as the position of the free-energy minimum of the salt bridge or effects attributed to a single explicit water molecule bridging the opened salt bridge.
In this study, we investigate similar small peptides featuring a salt bridge between lysine (K, LYS) and glutamic acid (E, GLU) connected via two alanine (A, ALA) and one variable residue, forming the peptides KAXAE, with X being valine (V, VAL), leucine (L, LEU), isoleucine (I, ILE), phenylalanine (F, PHE), serine (S, SER), threonine (T, THR), tyrosine (Y, TYR), and proline (P, PRO). In addition, we also test the approach on the same peptide KAAE as in Ref. [22], together with the longer variants KAAAE and KAAAAE. The performance is compared to the state-of-the-art implicit solvent GB-Neck2 as well as explicit-solvent simulations with TIP3P and TIP5P. To explore the generalizability and transferability characteristics of the GNN, the model is challenged to simulate peptides outside of the training set.
## 2 Methods
### Molecular Dynamics Simulations
Starting coordinates and topologies were generated using the AmberTools21 [24] software package. The amino acids and capping groups were parametrised using the AMBER force field ff99SB-ILDN [25]. All simulations were performed using OpenMM (version 7.7.0) [26]. For the explicit-solvent simulations, the peptides were solvated in a box of water (TIP3P or TIP5P) with a padding of 1 nm. For all systems, an energy minimisation using the L-BFGS algorithm [27] was performed with a tolerance of 10 kJ mol\({}^{-1}\) nm\({}^{-1}\). All bonds involving hydrogens were constrained using the SETTLE [28] and CCMA [29] algorithms for water and peptide bonds, respectively. For all simulations, Langevin dynamics were used with the LFMiddle discretization scheme [30]. For the explicit-solvent simulations, a cutoff of 1 nm for the long-range electrostatic interactions was used together with the particle mesh Ewald (PME) correction [31]. The simulation temperature was set to 300 K and a time step of 2 fs was applied. The simulations with explicit solvent or GB-Neck2 were carried out for 1000 ns. Simulations with the GNN to describe the solvent were performed using the OpenMM-Torch package (version 0.6, url: [https://github.com/openmm/openmm-torch](https://github.com/openmm/openmm-torch)) by introducing the GNN as an additional force to the vacuum force field. These simulations were carried out for 30 ns. Note that explicit-solvent simulations with TIP3P were only performed for the peptides KAPAE, KASAE, KATAE, and KAYAE.
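A minimal OpenMM sketch of such an explicit-solvent setup is given below; the input file names and the friction coefficient (1 ps\({}^{-1}\)) are placeholders and are not taken from the original protocol.

```python
import openmm as mm
import openmm.app as app
from openmm import unit

prmtop = app.AmberPrmtopFile('peptide_tip5p.prmtop')   # placeholder file names
inpcrd = app.AmberInpcrdFile('peptide_tip5p.inpcrd')
system = prmtop.createSystem(nonbondedMethod=app.PME,
                             nonbondedCutoff=1.0 * unit.nanometer,
                             constraints=app.HBonds)
# Langevin dynamics with the LFMiddle discretization scheme
integrator = mm.LangevinMiddleIntegrator(300 * unit.kelvin,
                                         1.0 / unit.picosecond,
                                         0.002 * unit.picoseconds)
simulation = app.Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
simulation.minimizeEnergy()       # L-BFGS minimisation
simulation.step(500_000)          # 1 ns at a 2 fs time step
```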
### Generation of the Training Set
From the explicit-solvent simulations with TIP5P, a conformer was extracted every 200 ps for the peptides with X being apolar (i.e., KAVAE, KALAE, KAIAE, KAFAE, and KAPAE) and every 100 ps for the peptides with X being polar (i.e., KASAE, KATAE, and KAYAE). To calculate the mean solvation forces for each conformer, the solute atoms were positionally constrained and an explicit-solvent simulation was performed for 200 ps. The solvent-solute forces were evaluated every 200 fs by calculating the forces for each solute atom within the explicit simulation and subtracting the forces of the same solute atom in vacuum.
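Conceptually, the resulting force labels are simple averages of force differences; a minimal NumPy sketch (with assumed array shapes) is:

```python
import numpy as np

def mean_solvation_forces(forces_in_solvent, forces_in_vacuum):
    """Mean solvation force labels for one restrained solute conformer.

    forces_in_solvent : (n_snapshots, n_atoms, 3) solute forces evaluated in
        the explicit-solvent simulation (here, one snapshot every 200 fs
        over 200 ps, i.e. 1000 snapshots);
    forces_in_vacuum  : (n_atoms, 3) forces of the same conformer in vacuum.
    """
    return np.mean(forces_in_solvent - forces_in_vacuum, axis=0)
```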
### Graph Neural Networks
Two GNN architectures were explored, sharing a three-pass neural network as core architecture. Both GNNs were applied in the subsequent simulations via a \(\Delta\)-learning scheme [18]. In the first case (abbreviated as GNN+), the energies of the base model (i.e., GB-Neck2) and the GNN were summed up following a traditional \(\Delta\)-learning approach (the forces were then obtained via standard differentiation). In the second case (abbreviated as GNN*), the functional form of the base model (GB-Neck2) was adjusted such that the Born radii are scaled by the GNN within a predefined range according to Eq. 4,
\[R_{i}^{\prime}=R_{i}\cdot\left(p+S(\phi(R,q,r_{a},r,R_{\text{cutoff}}))\cdot(1-p)\cdot 2\right) \tag{4}\]
where \(R_{i}\) is the Born radius calculated based on the Neck integral, \(p\) the scaling parameter to adjust the strength of the scaling of the Born radii (i.e., \(1=\text{no scaling applied}\); \(0=\text{maximum scaling applied}\)),
\(S\) the sigmoid function, and \(\phi\) the function approximated by the GNN based on the Born radii \(R\), the charges \(q\), the atomic radii \(r_{a}\) of all atoms, the distances \(r\) between all atoms, and a cutoff radius \(R_{\text{cutoff}}\).
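A minimal PyTorch sketch of this rescaling (with \(\phi\) represented by its raw per-atom outputs) could look as follows.

```python
import torch

def scale_born_radii(R, phi_out, p=0.1):
    """GNN* rescaling of the GB-Neck2 Born radii according to Eq. (4).

    R       : (N,) Born radii from the Neck integral
    phi_out : (N,) raw per-atom GNN outputs
    p       : scaling parameter controlling the allowed range
    """
    return R * (p + torch.sigmoid(phi_out) * (1.0 - p) * 2.0)
```

The sigmoid confines the multiplicative factor to the interval \([p,2-p]\), symmetric around \(1\), so that \(p=1\) recovers the unmodified GB-Neck2 radii.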
The GNN* and GNN+ networks share key architectural elements, as both employ three passes through interaction networks that only differ in the shape of the input and output, followed by SiLU activations [32]. One interaction network pass is characterised by a concatenation of the node features of the sending and receiving node together with the distance encoded by a radial Bessel [33] function of length 20, followed by a multi-layer perceptron (MLP) with two layers and SiLU activation functions. As an aggregation method, the sum of all messages was taken, and the node-wise computation employed again a two-layer MLP with SiLU activation functions. All hidden layers have a size of 128. As an embedding for the atoms, the GNN+ model used the partial charges and atomic radii from the force field and GB-Neck2, respectively, while the GNN* additionally incorporates the Born radius calculated from the Neck integral. A schematic representation of the GNN architectures is shown in Figure 1.
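A sketch of one such interaction pass in PyTorch is given below; the exact normalisation of the Bessel basis and the wiring of the node features between passes are our assumptions, not the original implementation.

```python
import torch
import torch.nn as nn

class BesselRBF(nn.Module):
    """Radial Bessel encoding of length 20 (a common form, assumed here)."""
    def __init__(self, num_rbf=20, cutoff=0.6):
        super().__init__()
        self.register_buffer("freq", torch.arange(1, num_rbf + 1) * torch.pi)
        self.cutoff = cutoff

    def forward(self, d):                       # d: (n_edges,)
        x = d.unsqueeze(-1) / self.cutoff
        return torch.sin(self.freq * x) / x     # (n_edges, num_rbf)

class InteractionPass(nn.Module):
    """One of the three passes: concat(sender, receiver, RBF(distance)),
    a two-layer MLP with SiLU, sum aggregation, then a node-wise MLP."""
    def __init__(self, node_dim, hidden=128, num_rbf=20):
        super().__init__()
        self.rbf = BesselRBF(num_rbf)
        self.message = nn.Sequential(
            nn.Linear(2 * node_dim + num_rbf, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU())
        self.update = nn.Sequential(
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU())

    def forward(self, h, edge_index, d):
        src, dst = edge_index                   # long tensors of shape (n_edges,)
        m = self.message(torch.cat([h[src], h[dst], self.rbf(d)], dim=-1))
        agg = torch.zeros(h.size(0), m.size(-1), dtype=m.dtype, device=m.device)
        agg.index_add_(0, dst, m)               # sum over incoming messages
        return self.update(agg)
```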
The GNNs were trained on randomly selected 80 % of the conformations, using the Adam optimiser [34] for 100 epochs with a batch size of 32 and an exponentially decaying learning rate starting at 0.001 and decaying by two orders of magnitude to 0.00001. The mean squared error (MSE) was chosen as the loss function and the samples randomly shuffled after each epoch.
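A minimal PyTorch training loop matching this protocol could look as follows; the batch field names are assumed and not taken from the original code.

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=100):
    """Adam, MSE loss on the mean solvation forces, exponential decay of
    the learning rate from 1e-3 to 1e-5 over the given number of epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    gamma = (1e-5 / 1e-3) ** (1.0 / epochs)     # two orders of magnitude
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)
    for _ in range(epochs):
        for batch in loader:                    # reshuffled every epoch
            opt.zero_grad()
            loss = F.mse_loss(model(batch.inputs), batch.forces)
            loss.backward()
            opt.step()
        sched.step()
```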
### Training-Test Splits
To assess the generalisability and transferability of the GNN, we composed different training and test splits (Table 1). Splits 1-4 include all KAXAE peptides for training but one. In split 5, all KAXAE peptides with X = polar are used for training, while the ones with X = apolar are simulated prospectively. Split 6 is the other way around (i.e., training on X = apolar, testing on X = polar). Finally, in split 7, all
Figure 1: Schematic representation of the GNN architectures. Node-wise computations are shown in blue, while message operations are shown in orange. (**A**): Schematic representation of the computation through the GNN. (**B**): Message computations start by concatenating the atom features of the sending and receiving node with the distance encoded by a Bessel function (RBF), followed by a two-layer MLP with SiLU activation functions. (**C**): Node-wise computations of aggregation by summation followed by a two-layer MLP with SiLU activation functions.
peptides with five residues (except X = A) were used for training, while prospective simulations were performed for KAAE, KAAAE, and KAAAAE.
### Data Analysis
We used MDTraj (version 1.9.7) [35] for the analysis of the trajectories. To estimate the statistical uncertainty of the different approaches, the explicit-solvent simulations were divided into five 200 ns blocks and analysed separately. In addition, the training and simulation of the GNN was repeated three times with different random seeds. All free-energy profiles were calculated with a Jacobian correction factor of \(4\pi r^{2}\)[36]. A key feature that the ML-based implicit solvent should reproduce is the correct representation of the salt bridge between LYS N\({}_{\zeta}\) and GLU C\({}_{\delta}\). We have identified three main characteristics concerning the salt bridge for all test systems (Figure 2): (i) a double well in the free-energy minimum at 0.33 nm and 0.37 nm with different weights, corresponding to two different closed salt-bridge geometries, (ii) the height of the energy barrier for the opening of the salt bridge at 0.43 nm, and (iii) a dip in the free-energy profile at 0.55 nm, corresponding to conformations with one water molecule between LYS N\({}_{\zeta}\) and GLU C\({}_{\delta}\) in a hydrogen-bond network. In addition to the salt bridge, the backbone dihedral angles \(\phi\) and \(\psi\) of the central amino acid are monitored. For polar central residues, the distances between the oxygen of the hydroxy group of the polar side chain and LYS N\({}_{\zeta}\) or GLU C\({}_{\delta}\) are monitored as well (Figure 2).
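A minimal NumPy sketch of such a Jacobian-corrected free-energy profile (bin count chosen for illustration) is:

```python
import numpy as np

def free_energy_profile(distances, bins=100, kT=2.494):   # kJ/mol at 300 K
    """Distance free-energy profile with the 4*pi*r^2 Jacobian correction."""
    hist, edges = np.histogram(distances, bins=bins, density=True)
    r = 0.5 * (edges[:-1] + edges[1:])                     # bin centres
    with np.errstate(divide="ignore"):
        F = -kT * np.log(hist / (4.0 * np.pi * r**2))
    return r, F - np.nanmin(F)
```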
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c} \hline Split & V & L & I & F & P & S & T & Y & ” & A & AA \\ \hline \hline
1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & & & \\ \hline
2 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & & & \\ \hline
3 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & & & \\ \hline
4 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & & & \\ \hline
5 & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & & & \\ \hline
6 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & & & \\ \hline
7 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \end{tabular}
\end{table}
Table 1: Training and test splits of peptides KAXAE with X being V, L, I, F, P, S, T, Y, ”, A, AA (note that KAXAE with X = ” corresponds to KAAE). Peptides used for training are marked with \(\checkmark\), while peptides simulated prospectively (testing) are marked with \(\times\).
Figure 2: Structural characteristics of KASAE as example (light blue) with highlighted salt bridge, the distance SER \(O_{\gamma}\) – LYS N\({}_{\zeta}\) and SER \(O_{\gamma}\) – GLU C\({}_{\delta}\), and backbone torsional angles \(\phi\) and \(\psi\) of the central residue.
## 3 Results and Discussion
### Comparison of GNN Architectures
We explored two GNN architectures with \(\Delta\)-learning schemes for the ML-based implicit solvent: GNN\(+\) and GNN*. To compare the architectures and evaluate their hyperparameters, we calculated the solvation forces of simulation snapshots in a retrospective manner. We considered three peptides (KAVAE, KALAE, and KASAE) for training (using a random 20 % subset as validation set during the GNN training process), and two peptides KATAE and KAFAE as external test set. The peptides KAIAE, KAPAE and KAYAE were not included in this benchmark in order to not bias the choice of model architecture for the subsequent simulation studies. The investigated hyperparameters were the cutoff radius \(R_{\text{cutoff}}\) for which the fully connected graph is constructed, as well as the scaling parameter \(p\) of the GNN*, which regulates how much the Born radii of the GB-Neck2 model are allowed to change. The results of the benchmarking are summarised in Figure 3.
Interestingly, the GNN* model was found to perform significantly better than the GNN+ model in predicting the forces of the external test set over the entire range of tested cutoff radii, reaching an RMSE of \(16.2(3)\,\mathrm{kJ\,mol^{-1}\,nm^{-1}}\). In addition, smaller deviations between the different random seeds were observed with the GNN* (indicated by lower standard deviations), which is a desirable feature. The effect of the scaling parameter on the GNN* models is more subtle. Values of 0.1, 0.2, and 0.4 gave essentially the same results, while the error increases slightly for a scaling parameter of 0.8. The impact of the cutoff radius \(R_{\text{cutoff}}\) is also small, although \(R_{\text{cutoff}}\) = 0.4 nm is likely too short. As longer radii (i.e., 0.7 nm) did not improve the performance significantly but increase the computational cost, we decided to focus in the following on the GNN* architecture with cutoff radii of 0.5 nm and 0.6 nm together with a scaling parameter of 0.1 and 0.2.
### Prospective Molecular Dynamics Simulations
To investigate the ability of the ML-based implicit solvent to simulate novel peptides, we composed different training and test splits (Table 1). First, we assessed the simulation performance of the GNN* models with radii of \(0.5\,\mathrm{nm}\) or 0.6 nm and scaling parameters of \(0.1\) or 0.2 on the training/test splits
Figure 3: Comparison of the root-mean-square error (RMSE) of the forces predicted by the GNN* and GNN\(+\) models with \(\Delta\)-learning for the external test set. The GNN* models with scaling parameter \(p\) of 0.1, 0.2, 0.4, or 0.8 are shown as colored dots (orange, purple, light blue, and navy, respectively), while the GNN\(+\) is shown as black squares. Statistical uncertainty is denoted by error bars.
1, 2, and 3 (i.e., training on all peptides except KASAE, KATAE, or KAYAE, respectively, and prospective simulation of the left-out peptide). The GNN* model with a radius of 0.6 nm and a scaling parameter of 0.1 yielded the most stable simulation results, as indicated by the smallest deviations between the different random seeds for the closed salt-bridge conformations (see Figures S1-S4 in the Supporting Information). The observation that models with similar performance on a retrospective test set show rather different behaviour in prospective simulations is in line with findings by Fu _et al._[37] and Stocker _et al._[38]. Based on these results, the following analyses were performed using only the GNN* with a radius of 0.6 nm and a scaling parameter of 0.1.
#### 3.2.1 KASAE, KATAE, or KAYAE as Test Peptide
The training/test splits 1, 2, and 3 (Table 1) are particularly interesting as they challenge the model the most. Among the three, the simulations of KASAE (split 1) and KATAE (split 2) are expected to be easier for the model, as there is a similar residue in the training set (THR versus SER). Free-energy profiles of the salt bridge, the \(O_{\gamma}\) - LYS \(\mathrm{N_{\zeta}}\) distance, and the \(O_{\gamma}\) - GLU \(\mathrm{C_{\delta}}\) distance are shown in Figure 4.
The GNN* implicit solvent was able to correctly reproduce all desired properties of the salt bridge in TIP5P explicit water, featuring the correct double well in the free-energy profile at short distances, the correct energy barrier for the opening of the salt bridge, and a local minimum at 0.55 nm (Figure 4A,D). The GB-Neck2 model, on the other hand, failed to describe any of these features. Note that it has been shown by Nguyen _et al._[22] that the GB-Neck2 model can be tweaked to reproduce smaller barrier heights for salt-bridge opening, but the other features have not been reported for that model. Larger deviations can be observed for the SER/THR O\({}_{\gamma}\) - LYS \(\mathrm{N_{\zeta}}\) distance (Figure 4B,E). Again, the GB-Neck2 model is not able to capture the key characteristics of the TIP5P free-energy surface, while the GNN* reproduces the minima correctly and also shows good agreement with the local minimum of the direct hydrogen bond between O\({}_{\gamma}\) and \(\mathrm{N_{\zeta}}\) at 0.29 nm. Interestingly, the TIP3P model shows quite different characteristics compared to TIP5P. The minimum of the direct hydrogen bond is lower than the second minimum at 0.43 nm. The latter minimum contains conformations where an explicit water molecule forms a hydrogen-bond network between the salt-bridge partners. Analysing similar conformations in the TIP5P solvent simulation, we could identify examples that benefit from the specific geometry of the TIP5P water representation (Figure 5). We hypothesise that the 'triangle' between the SER O\({}_{\gamma}\), the LYS \(\mathrm{N_{\zeta}}\), and a carbonyl of the peptide backbone provides a tridentate 'binding' site for the explicit TIP5P water molecules, thus stabilising these conformations, which is not possible with the simplified TIP3P model.
A more challenging case is KAYAE (split 3), as none of the amino acids in the training set is very similar to TYR. Thus, the model needs to learn about the solvation of the TYR side chain from different amino
Figure 4: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KASAE (split 1) are shown in the top row, results for KATAE (split 2) in the bottom row. **(A, D)**: Free-energy profile of the salt bridge. **(B, E)**: Distance \(O_{\gamma}\) - LYS \(N_{\zeta}\). **(C, F)**: Distance \(O_{\gamma}\) - GLU \(C_{\delta}\). The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for GB-Neck2 for clarity).
Figure 5: Example conformation of KASAE (light blue) with a SER O\({}_{\gamma}\) - LYS N\({}_{\zeta}\) distance of 0.43 nm. The TIP5P water molecule interacting with the SER O\({}_{\gamma}\), the LYS N\({}_{\zeta}\), and a carbonyl of the backbone is shown with its off-site charges displayed in dark red.
acids, i.e., the learning task becomes to generalise from PHE and SER/THR to a combination of the two. If the model achieves a good accuracy for split 3, it demonstrates its transferability to peptides with increasing structural differences to the training set. The free-energy profile of the KAYAE salt bridge, and the distances TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\) and the TYR \(O_{\eta}\) - GLU \(\mathrm{C_{\delta}}\) are shown in Figure 6.
Again, the GNN* model reproduces the salt-bridge free-energy profile of TIP5P very well (Figure 6A). Interestingly, the TIP3P solvent shows a different behaviour than TIP5P in this case at short distances. The double well of the salt bridge is significantly different and the barrier height (at 0.43 nm) is lower for TIP3P. For the distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\), TIP5P and GNN* show the same behaviour, while the energy barrier with TIP3P is approximately 3 kJ mol\({}^{-1}\) lower (Figure 6B). For the distance TYR \(O_{\eta}\) - GLU \(\mathrm{C_{\delta}}\), the minimum with TIP3P and GNN* is the direct H-bond distance at 0.37 nm, while the minimum with TIP5P is at a distance of 0.51 nm, where one explicit water molecule interacts in an H-bond network between the salt-bridge partners (Figure 6C).
While the free-energy profiles of the salt bridge were similar for TIP5P and TIP3P for KASAE and KATAE, they differed for KAYAE. Therefore, we investigated this observation further. The difference in barrier height for opening the salt bridge (Figure 6A) could be due to different preferences for the distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\). In Figure 7, the free-energy profile of the salt bridge for KASAE, KATAE, and KAYAE was calculated once for conformations with the distance between \(O\) of the hydroxy group and LYS \(\mathrm{N_{\zeta}}<0.6\) nm and once for those with this distance \(>0.6\) nm. Intriguingly, the energy barrier to open the salt bridge of KAYAE is higher when TYR \(O_{\eta}\) interacts with the LYS \(\mathrm{N_{\zeta}}\) (either via a direct hydrogen bond or mediated by a water molecule) in both TIP5P and TIP3P water. For KASAE and KATAE, this is not the case. From these findings, it follows that because TIP5P and GNN* favour conformations of KAYAE with a distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}<0.6\) nm more than TIP3P (Figure 6B), the energy barrier of the salt bridge is higher than in TIP3P.
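The conditioned profiles in Figure 7 follow the standard Boltzmann inversion \(F(r)=-k_{\mathrm{B}}T\ln p(r)\) of distance histograms. Below is a minimal sketch of this kind of analysis, assuming per-frame distances have already been extracted from the trajectories; all function and variable names are hypothetical.

```python
import numpy as np

KB = 0.0083145  # Boltzmann constant in kJ mol^-1 K^-1

def free_energy_profile(distances, temperature=300.0, bins=60):
    """Boltzmann inversion F(r) = -k_B T ln p(r) of sampled distances (nm),
    shifted so that the global minimum lies at zero."""
    hist, edges = np.histogram(distances, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0  # avoid log(0) in empty bins
    f = -KB * temperature * np.log(hist[mask])
    return centers[mask], f - f.min()

def conditioned_profiles(salt_bridge_d, hydroxy_lys_d, cutoff=0.60):
    """Split frames by the hydroxy-O -- LYS N_zeta distance (as in Figure 7)
    and compute a salt-bridge free-energy profile for each subset."""
    salt_bridge_d = np.asarray(salt_bridge_d)
    hydroxy_lys_d = np.asarray(hydroxy_lys_d)
    near = salt_bridge_d[hydroxy_lys_d < cutoff]
    far = salt_bridge_d[hydroxy_lys_d >= cutoff]
    return free_energy_profile(near), free_energy_profile(far)
```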
Figure 6: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAYAE (split 3) are shown. (**A**): Free-energy profile of the salt bridge. (**B**): Distance TYR \(O_{\eta}\) - LYS \(N_{\zeta}\). (**C**): Distance TYR \(O_{\eta}\) - GLU \(C_{\delta}\). The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for GB-Neck2 for clarity).
#### 3.2.2 Proline as Special Case: KAPAE
An interesting case is PRO as central residue (split 4 in Table 1). Proline has a different Ramachandran plot than the other amino acids and may therefore be challenging for the GNN* approach to describe correctly. As can be seen in Figure 8A, the free-energy profile of the salt bridge is reproduced well, although deviations from TIP5P occur at long distances. Interestingly, TIP3P and TIP5P also disagree with each other at these long distances. While TIP3P yields lower free energies for the open state compared to TIP5P, the GNN* predicts even higher free energies. As PRO is in the middle of the peptide, the sampling of its backbone dihedral angles will influence the long salt-bridge distances. The Ramachandran plots for PRO are shown in Figure 8B-D for TIP3P, GNN*, and TIP5P. The main difference is in the population of the polyproline II state, which is overestimated with TIP3P and underestimated with GNN* compared to TIP5P (see also Figure S5 in the Supporting Information). It is therefore important to include PRO in the training set to ensure proper sampling of its backbone conformations.
Figure 7: Free-energy profile of the salt-bridge distance for KASAE (top), KATAE (middle), and KAYAE (bottom) in the explicit-solvent simulations with TIP5P (left (**A**, **C**, **E**) navy blue) and TIP3P (right (**B**, **D**, **F**) light blue). The dotted line corresponds to conformations with the distance between \(O\) of the hydroxy group and LYS \(N_{\zeta}<0.60\,\mathrm{nm}\) and the solid line to conformations with the distance between \(O\) of the hydroxy group and LYS \(N_{\zeta}>0.60\,\mathrm{nm}\).
#### 3.2.3 Apolar Versus Polar Residues
The results above demonstrated that the GNN* model is able to reproduce key characteristics of explicit-water simulations of peptides different from the training set. Next, we investigated how different training set compositions influenced the simulation performance of the model. We therefore created two training sets (splits 5 and 6 in Table 1) by dividing all peptides into two mutually exclusive subsets: (1) central residue has a polar side chain (i.e., KASAE, KATAE, and KAYAE), and (2) central residue has an apolar side chain (i.e., KAVAE, KAIAE, KALAE, KAFAE, and KAPAE). In split 5, training was carried out with the first group and prospective simulations were performed for the second group. As can be seen in Figure 9 for KAIAE, the generalisation from the peptides with a polar central residue to those with an apolar one is good. For all tested peptides, the GNN* model is able to reproduce the free-energy profile of the salt bridge, the double-well minima, the height of the energy barrier for opening, and the first dip of the reference TIP5P simulation. The corresponding results for KAVAE, KALAE, KAFAE, and KAPAE are shown in Figures S6-S9 in the Supporting Information.
To probe the conformational sampling of the central residue in more detail, we compared its Ramachandran plot for the different solvent descriptions. With the exception of the L\({}_{\alpha}\) state, the Ramachandran plots with GNN* and TIP5P agree well. Note that the transition into the L\({}_{\alpha}\) state is a rare event. In the TIP5P reference simulations of KAIAE, this state is sampled in only one of the five 200 ns blocks (see Figures S10-S14 in the Supporting Information). The differences in the population of
Figure 8: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAPAE (split 4) are shown. (**A**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**): Ramachandran plot of PRO with GB-Neck2. (**C**): Ramachandran plot of PRO with GNN*. (**D**): Ramachandran plot of PRO with TIP5P. The polyproline II state is highlighted in the Ramachandran plots by a dashed black line.
the L\({}_{\alpha}\) state between GNN* and TIP5P may therefore stem from finite sampling effects.
The inverse, i.e., training on peptides with an apolar central residue and testing on peptides with a polar central residue (split 6), is more challenging. The GNN* model was still superior to the GB-Neck2 implicit solvent in reproducing key characteristics of the TIP5P reference simulations; however, deviations were observed for the interactions of the polar central residue with the salt-bridge partners (i.e., the distance between \(O\) of the hydroxy group and LYS N\({}_{\zeta}\)/GLU C\({}_{\delta}\)) (see Figures S15-S17 in the Supporting Information). These results indicate that the extent to which the GNN* model can generalise from the training set is limited to similar functional groups. For instance, if no hydroxy group is present in the training set, the ability of the model to represent its interactions is limited. On the other hand, the TYR case demonstrates that the model is able to generalise from a hydroxy group in SER/THR to one in a different local environment. Taken together, these findings suggest that it is important for a generally applicable GNN* implicit-solvent approach to include all functional groups in the training set, but that the model does not have to have seen the complete molecule for good performance.
#### 3.2.4 Varying the Length of the Peptide
Finally, we investigated whether the GNN* model is able to generalise to larger or smaller peptides by removing the middle amino acid or instead inserting an extra ALA residue, i.e., peptides KAAE and KAAAAE (split 7 in Table 1). In addition, we included KAAAE (same size) in the test set for comparison. The resulting free-energy profiles and Ramachandran plots for KAAE, KAAAE, and KAAAAE are shown
Figure 9: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAIAE (split 5) are shown. (**A**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**): Ramachandran plot of ILE with GB-Neck2. (**C**): Ramachandran plot of ILE with GNN*. (**D**): Ramachandran plot of ILE with TIP5P. The L\({}_{\alpha}\) state is highlighted in the Ramachandran plots by a dashed black line.
in Figure 10. For all three peptide lengths, the GNN* is able to reproduce the free-energy profile of the salt bridge of the TIP5P reference simulation for short distances (i.e., \(<1\,\mathrm{nm}\)), including the double well, the height of the energy barrier, and the first dip. For KAAE and KAAAE, the long-range behaviour also matches the TIP5P simulation to a high degree. Only for KAAAAE, a deviation between GNN* and TIP5P is observed at longer distances (i.e., \(>1\,\mathrm{nm}\)), which could highlight a potential weak point of the GNN. While generalisation to shorter peptides works well, longer peptides require either inclusion in the training set or the introduction of a long-range correction in order to describe the elongated conformations accurately. As discussed above, differences in the population of the \(\mathrm{L}_{\alpha}\) state are likely finite-sampling effects.
#### 3.2.5 Timings
One major advantage of standard implicit solvent models is that they are much faster to compute than simulations with explicit solvent molecules. When employing GNNs for this task, the computational costs are currently still too high. Using a desktop PC with an Intel(R) Xeon(R) W-1270P CPU with a clock rate of 3.80GHz and a NVIDIA(R) Quadro(R) P2200 GPU, approximately 46 ns d\({}^{-1}\) could be obtained for the peptide KASAE with our proof-of-concept implementation of the GNN implicit solvent, whereas approximately 200 ns d\({}^{-1}\) were reached with explicit TIP5P simulations. Similar observations were made in Ref. [39] for classical force-field terms. The slower speed of GNNs represents a major challenge for their application as a replacement for explicit-solvent simulations. However, this is primarily a technical issue and not a fundamental
Figure 10: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAAE (top), KAAAE (middle), and KAAAAE (bottom) are shown (split 7). (**A**, **D**, **G**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**, **E**, **H**): Combined Ramachandran plot of all ALA residues with TIP5P. (**C**, **F**, **I**): Combined Ramachandran plot of all ALA residues with GNN*. The \(\mathrm{L}_{\alpha}\) state is highlighted by a dashed black line.
limitation. While the TIP5P explicit simulation is highly optimised, our GNN implementation is not yet. Currently, the GNN is evaluated on the GPU while the classical forces are evaluated on the CPU, leading to high communication costs and low utilisation of the GPU. Recently, two approaches have been reported to dramatically increase the speed of NN potentials. The first option is the optimisation of the operations of the GNN to better suit the application in MD simulations [40]. The second option involves batching of multiple simulations that run on one GPU in parallel [12]. Both approaches have been shown to bring the speed of NN potentials on par with their classical counterparts. In this work, we focused on providing a conceptual proof that developing an ML-based transferable implicit solvent is possible. Improving the computational performance of the implementation is part of future work to develop a practically usable GNN implicit solvent.
## 4 Conclusion
In this work, we have developed a GNN-based implicit solvent that can be trained on a set of peptides and used to prospectively simulate different ones. The GNN* model is based on the GB-Neck2 implicit solvent with a \(\Delta\)-learning scheme. To validate our approach, we have chosen a traditionally hard problem for implicit-solvent models where the local effects of explicit water molecules play a key role: the free-energy profile of a salt bridge. Here, the salt bridge is formed by peptides with the composition KAXAE, where X can be varied. We could demonstrate that the GNN* implicit solvent was able to reproduce the key characteristics of the reference explicit-solvent simulations with TIP5P, matching or surpassing the accuracy of explicit-solvent simulations with the simpler TIP3P water model. With different training/test splits, we assessed the ability of the GNN* model to generalise to unseen amino acids and varying peptide length. Overall, we found that the model has a high transferability as long as all functional groups are represented in the training set. For instance, if an aliphatic hydroxy group (SER or THR) is in the training set, it is sufficient for the model to correctly describe the aromatic hydroxy group of TYR. These findings are encouraging as they suggest that the training set for a globally applicable ML-based implicit solvent model may not need to be extremely large but "only" contain all necessary functional groups. The results of this work present an important step towards the development of such a model, capable of replacing explicit-solvent simulations to an increasing degree.
## Data and Software Availability
The code used to generate the results of this study is available at [https://github.com/rinikerlab/GNNImplicitSolvent](https://github.com/rinikerlab/GNNImplicitSolvent). Topologies, starting structures, examples for the performed analysis, and the training data are provided at the ETH Research Collection ([https://doi.org/10.3929/ethz-b-000599309](https://doi.org/10.3929/ethz-b-000599309)). Complete trajectories are available from the corresponding author upon reasonable request.
## Acknowledgments
The authors gratefully acknowledge financial support by ETH Zurich (Research Grant no. ETH-50 21-1). The authors thank Moritz Thurlemann for helpful discussions.
|
2303.00992 | Sequential minimum optimization algorithm with small sample size
estimators | Sequential minimum optimization is a machine-learning global search training
algorithm. It is applicable when the functional dependence of the cost function
on a tunable parameter given the other parameters can be cheaply determined.
This assumption is satisfied by quantum circuits built of known gates. We apply
it to photonics circuits where the additional challenge appears: low frequency
of coincidence events lowers the speed of the algorithm. We propose to modify
the algorithm such that small sample size estimators are enough to successfully
run the machine learning task. We demonstrate the effectiveness of the modified
algorithm applying it to a quantum optics classifier with data reuploading. | Wojciech Roga, Takafumi Ono, Masahiro Takeoka | 2023-03-02T06:02:46Z | http://arxiv.org/abs/2303.00992v1 | # Sequential minimum optimization algorithm with small sample size estimators
###### Abstract
Sequential minimum optimization is a machine-learning global search training algorithm. It is applicable when the functional dependence of the cost function on a tunable parameter given the other parameters can be cheaply determined. This assumption is satisfied by quantum circuits built of known gates. We apply it to photonics circuits where the additional challenge appears: low frequency of coincidence events lowers the speed of the algorithm. We propose to modify the algorithm such that small sample size estimators are enough to successfully run the machine learning task. We demonstrate the effectiveness of the modified algorithm applying it to a quantum optics classifier with data reuploading.
## I Motivation
There are a variety of proposals for how to use quantum devices in machine learning [1; 2; 3; 4; 5; 6]. The research is motivated by the expectation that quantum features can bring benefits for the performance. Indeed, many algorithms use linear algebra modules, which were shown to be solvable faster under some conditions when run on a quantum device [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Also, the Grover search algorithm, if applicable, can speed up an algorithm [21]. Or, sampling from a programmable quantum device can be faster than calculating the expectation values of certain observables [22; 23; 24; 25; 26; 27; 28; 29]. Moreover, access to entangled states allows one to produce statistical dependencies different from those of purely classical states on a fixed processor, as entangled states occupy a significant volume of all quantum states of a given system [30; 31].
Among the supervised quantum machine learning algorithms [32; 33; 34; 35; 36; 37; 38] we find hybrid quantum-classical ones with classically programmable circuits. Their parameters are trained by classical algorithms such as gradient descent, and the cost function is constructed based on the expectation values of observables that are measured. The speed at which the training algorithms find the global minimum of the cost function is part of the efficiency of the total algorithm.
There are local search algorithms, such as gradient descent, in which the search starts from a chosen starting point and gradually explores the cost function landscape further from that point. Global search algorithms, in contrast, recognise and explore some global features of the landscape.
One of the global search algorithms proposed recently for quantum circuits is the so-called sequential minimum optimization (SMO) [39]. It is based on the idea that, from a small number of measurements, one can infer the functional dependence of the cost function on one parameter, which typically parameterizes a known quantum gate or an optical element like a phase shifter [40]. Knowing this, we can update this parameter in one step to the value that corresponds to the minimum of the cost function along this variable. This procedure can be applied sequentially to all parameters of the circuit, converging in a few rounds to the minimum of the cost function landscape, which is possibly far from the starting point. This algorithm, unlike local search algorithms, is not affected by barren plateaus and does not rely on additional parameters such as the size of the step. In [39], where the authors introduced this procedure for qubit quantum circuits, the convergence was also discussed.
In [41], sequential minimum optimization was applied to an integrated-photonics-based quantum classifier in an experiment that showed the computational abilities of a two-mode quantum circuit. This paper followed [31], in which the authors first showed that a sufficiently long single-qubit quantum circuit satisfies the universal approximation theorem [42], known from the theory of single-layer neural networks, and is able to classify data points divided by an arbitrary border function. The authors performed simulations of several problems using gradient descent training.
However, in the two-photon photonic circuit experiment performed in [41], sequential minimum optimization was preferred, as measuring expectation values from photon coincidences takes longer. Coincidence counts in this experiment allowed for reducing decoherence errors. However, this type of experiment is strongly affected by losses, which lead to long times for gathering enough data in postselection.
The experimental run-time needed to train the circuit with the SMO algorithm is proportional to

\[t\propto\#\text{rounds}\times\#\text{circuit parameters}\times\#\text{training points}\times\#\text{function coefficients}\times\#\text{estimator measurements},\]
where the last factor is the number of measurements we need to estimate the expectation value of the observable measured in the experiment and used in the cost function. The number of function coefficients is the number of values, for a given training point, needed to specify the functional dependence of the cost function on one parameter of the circuit in the SMO algorithm. The number of rounds is how many times the algorithm updates all parameters. The number of circuit parameters and the number of training points are self-explanatory.
The number of rounds can be kept small due to the efficiency of the sequential minimum optimization algorithm [39]. In the experiment reported in [41], it was equal to 10. The number of function coefficients, as we will explain later when discussing the details of the algorithm, is \(O(n)\), where \(n\) is the number of photons in the circuit. The number of detector clicks used to estimate the expectation value of an observable can, in principle, be arbitrarily large and, in experiments with coincidence measurements, can dominate the total time of the training process.
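To illustrate the scaling, the following minimal sketch evaluates the run-time product above. The numbers are loosely modelled on the simulation discussed later (10 rounds, 6 parameters, 300 training points, 2 photons), and the per-shot time is a hypothetical placeholder.

```python
def smo_runtime(rounds, circuit_parameters, training_points,
                photons, estimator_measurements, time_per_shot=1.0):
    """Run-time proportional to the product in the text; the number of
    function coefficients is 2n + 1 for n photons (cf. Eq. (2))."""
    function_coefficients = 2 * photons + 1
    return (rounds * circuit_parameters * training_points
            * function_coefficients * estimator_measurements * time_per_shot)

full = smo_runtime(10, 6, 300, photons=2, estimator_measurements=200)
small = smo_runtime(10, 6, 300, photons=2, estimator_measurements=2)
print(full / small)  # -> 100.0, the reduction factor quoted below
```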
The goal of this paper is to modify the sequential minimum optimization algorithm such that the expectation value of the observable is replaced by an estimator of this value based on a small sample size. If the number of coincidence events used to estimate a given expectation value of the measured observable was around 200, our modification can reduce the experiment run-time by as much as a factor of 100. With this reduction, weak sources generating, for instance, entangled states can still be used efficiently in machine learning experiments with many training data points.
This approach is inspired by [43], where the authors discuss in detail the problem of replacing expectation values by unbiased estimators for gradient-descent-type training. In that setting, estimates of the first and sometimes second derivatives of the cost function at given points are needed. The authors show that the training based on the estimators works well and that the computational time related to the measurements can be significantly reduced. The driving idea of the method is the observation that, since the cost function is a sum of expectation values of independent variables, it can be replaced by the sum of unbiased estimators of these variables, which is itself an unbiased estimator of the cost function. The error from using estimators based on a finite number of measurements depends on the number of measurements and the number of training points. Therefore, if the number of training points is sufficiently large, the number of measurements used for the estimators of the expectation value can be kept small.
In this manuscript, we discuss in detail the idea of [43] adapted to sequential minimum optimization. As [43] covers gradient-descent-type training, our adaptation adds something novel to the development of the SMO method.
The main result is formulated as Theorem 1; it is proven and supported by the simulation of a two-mode photonic classifier with a NOON state as the input. Although the main message of our paper does not require the NOON state, and could be demonstrated with other states of a fixed photon number, we decided to use the NOON state as a tribute to Jonathan Dowling [44], our friend and collaborator. There are other advantages to using entangled states in this kind of experiment, but it is not our goal to discuss them here.
The paper is organized as follows. First, we give details of sequential minimum optimization for a photonic circuit. This method follows the analogous algorithm derived earlier in [39] for qubit circuits; however, our notation and conventions are slightly different. Then, we present the main result of our paper in the form of a theorem; we give a detailed proof and discuss the convergence of the estimator of the cost function to the real cost function when the number of training points increases. In the following section, we present a numerical simulation of a photonic classifier. We summarize our work with conclusions and an outlook.
## II Sequential minimum optimization in a photonic circuit
We consider a general passive optical circuit. Without loss of generality, we can compose the circuit of phase-shifters and 50-50 beam-splitters, applying, for example, the so-called Reck configuration [45; 46]. Therefore, we can assume that the parameters of the circuit are the phases of the phase-shifters. However, the method we discuss below can be easily adapted to circuits with any passive optical elements, single- or multi-mode.
We assume that an \(n\)-photon state \(|\Psi_{in}\rangle\) is put as the input and an \(n\)-photon state \(|\Psi_{out}\rangle\) is measured. They
Figure 1: General passive photonic circuit with the input state \(|\Psi_{in}\rangle\) of a fixed number of photons. We distinguish a phase shifter depending on a parameter \(\varphi\). We want to know the dependence of the probability of \(|\Psi_{out}\rangle\) at the end of the circuit on \(\varphi\). Here, unitary transformations \(U_{1}\) and \(U_{2}\) can be arbitrary photon number preserving transformations.
can be arbitrary. We assume that a given data point is encoded, possibly repeatedly (data reuploading), in some of the phases.
We consider the cost function to be a function of the probability of measuring an \(n\)-photon state \(|\Psi_{out}\rangle\), the trainable phases, and the data. For a training point indexed by \(i\), let us write the probability of measuring \(|\Psi_{out}\rangle\), distinguishing one of the trainable variables \(\varphi\) as in figure 1,
\[p^{i}(\varphi)=|\langle\Psi_{out}|U_{2}^{\dagger}U_{\varphi}U_{1}|\Psi_{in} \rangle|^{2}. \tag{1}\]
Here, \(U_{1}\) and \(U_{2}\) are unitaries before and after a phase-shifter \(U_{\varphi}\) parameterized by \(\varphi\). The idea is to update each parameter \(\varphi\) to the value where the cost function expressed as the function of this parameter achieves its minimum. To do that we need to derive an explicit formula for \(p^{i}(\varphi)\) for each \(\varphi\) in each step of the training. Therefore, assume we are interested in the functional dependence of \(p^{i}\) on phase \(\varphi\) of a phase-shifter in mode \(m\). Let us expand \(U_{1}|\Psi_{in}\rangle\) and \(U_{2}|\Psi_{out}\rangle\) in the Fock basis writing mode \(m\) as the first one
\[U_{1}|\Psi_{in}\rangle =\alpha_{0}|0\rangle_{m}|\xi_{0}\rangle+\alpha_{1}|1\rangle_{m}|\xi_{1}\rangle+...+\alpha_{n}|n\rangle_{m}|\xi_{n}\rangle,\] \[U_{2}|\Psi_{out}\rangle =\beta_{0}|0\rangle_{m}|\xi_{0}^{\prime}\rangle+\beta_{1}|1\rangle_{m}|\xi_{1}^{\prime}\rangle+...+\beta_{n}|n\rangle_{m}|\xi_{n}^{\prime}\rangle.\]
Acting by the phase-shifter on \(U_{1}|\Psi_{in}\rangle\) we get
\[U_{\varphi}U_{1}|\Psi_{in}\rangle =\alpha_{0}|0\rangle_{m}|\xi_{0}\rangle+\alpha_{1}e^{i\varphi}|1\rangle_{m}|\xi_{1}\rangle\] \[+\alpha_{2}e^{2i\varphi}|2\rangle_{m}|\xi_{2}\rangle+...+\alpha_{n}e^{ni\varphi}|n\rangle_{m}|\xi_{n}\rangle.\]
Hence, it easily follows from (1) that
\[p^{i}(\varphi) =A_{0}+A_{1}\cos(\varphi)+A_{2}\sin(\varphi)\] \[+A_{3}\cos(2\varphi)+A_{4}\sin(2\varphi)+..., \tag{2}\]
If we knew \(A_{0},...,A_{2n}\), we would have \(p^{i}(\varphi)\). The values \(A_{j}\) can be obtained by solving the set of \(2n+1\) equations given by (2) measured for \(2n+1\) different \(\varphi\). For simplicity, let us assume \(n=2\). Then, we can figure out the values of \(A_{j}\) from measuring the probabilities denoted here by \(r_{j}=p^{i}(\varphi_{j})\) for 5 different values of the phase (the generalization of this procedure is straightforward)
\[\begin{pmatrix}1&\cos\varphi_{0}&\sin\varphi_{0}&\cos 2\varphi_{0}&\sin 2 \varphi_{0}\\ 1&\cos\varphi_{1}&\sin\varphi_{1}&\cos 2\varphi_{1}&\sin 2\varphi_{1}\\ 1&\cos\varphi_{2}&\sin\varphi_{2}&\cos 2\varphi_{2}&\sin 2\varphi_{2}\\ 1&\cos\varphi_{3}&\sin\varphi_{3}&\cos 2\varphi_{3}&\sin 2\varphi_{3}\\ 1&\cos\varphi_{4}&\sin\varphi_{4}&\cos 2\varphi_{4}&\sin 2\varphi_{4}\end{pmatrix} \begin{pmatrix}A_{0}\\ A_{1}\\ A_{2}\\ A_{3}\\ A_{4}\end{pmatrix}=\begin{pmatrix}r_{0}\\ r_{1}\\ r_{2}\\ r_{3}\\ r_{4}\end{pmatrix}. \tag{3}\]
Values \(r_{k}\) are the probabilities measured by the same circuit with 5 chosen phases \(\varphi_{k}\). In particular, if one choses
\[\varphi_{0}=0,\quad\varphi_{1}=\frac{2\pi}{5},\quad\varphi_{2}=-\frac{2\pi}{5 },\quad\varphi_{3}=\frac{4\pi}{5},\quad\varphi_{4}=-\frac{4\pi}{5},\]
equation (3) takes the form of the easily invertible Fourier transform
\[F\begin{pmatrix}(A_{3}-iA_{4})/2\\ (A_{1}-iA_{2})/2\\ A_{0}\\ (A_{1}+iA_{2})/2\\ (A_{3}+iA_{4})/2\end{pmatrix}=\begin{pmatrix}r_{3}\\ r_{1}\\ r_{0}\\ r_{2}\\ r_{4}\end{pmatrix}, \tag{4}\]
where the Fourier transform matrix is
\[F=\begin{pmatrix}\omega^{4}&\omega^{2}&\omega^{0}&\omega^{-2}&\omega^{-4}\\ \omega^{2}&\omega^{1}&\omega^{0}&\omega^{-1}&\omega^{-2}\\ \omega^{0}&\omega^{0}&\omega^{0}&\omega^{0}&\omega^{0}\\ \omega^{-2}&\omega^{-1}&\omega^{0}&\omega^{1}&\omega^{2}\\ \omega^{-4}&\omega^{-2}&\omega^{0}&\omega^{2}&\omega^{4}\end{pmatrix}, \tag{5}\]
and \(\omega=e^{i\frac{2\pi}{5}}\). Hence
\[\begin{pmatrix}A_{3}\\ A_{1}\\ A_{0}\\ A_{2}\\ A_{4}\end{pmatrix}=\frac{2}{5}\begin{pmatrix}\cos\frac{8\pi}{5}&\cos\frac{4\pi}{5}&1&\cos\frac{4\pi}{5}&\cos\frac{8\pi}{5}\\ \cos\frac{4\pi}{5}&\cos\frac{2\pi}{5}&1&\cos\frac{2\pi}{5}&\cos\frac{4\pi}{5}\\ 0.5&0.5&0.5&0.5&0.5\\ \sin\frac{4\pi}{5}&\sin\frac{2\pi}{5}&0&-\sin\frac{2\pi}{5}&-\sin\frac{4\pi}{5}\\ \sin\frac{8\pi}{5}&\sin\frac{4\pi}{5}&0&-\sin\frac{4\pi}{5}&-\sin\frac{8\pi}{5}\end{pmatrix}\begin{pmatrix}r_{3}\\ r_{1}\\ r_{0}\\ r_{2}\\ r_{4}\end{pmatrix}. \tag{6}\]
This shows that to obtain the functional dependence of the probability \(p(\varphi)\) in (2) on each adjustable circuit parameter \(\varphi\), we need to measure \(2n+1\) probabilities \(r_{j}\), where \(n\) is the number of photons, with the same circuit but with different values of the parameter under investigation.
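In practice, the coefficient extraction amounts to solving the small linear system (3). A minimal sketch is given below; it uses the equally spaced phase grid \(\varphi_{k}=2\pi k/(2n+1)\), which coincides modulo \(2\pi\) with the phase set chosen above.

```python
import numpy as np

def coefficients_from_probabilities(r, n_photons=2):
    """Recover A_0, ..., A_{2n} of Eq. (2) from 2n+1 probabilities r
    measured at the phases phi_k = 2*pi*k/(2n+1) (cf. Eqs. (3)-(6))."""
    m = 2 * n_photons + 1
    phis = 2 * np.pi * np.arange(m) / m
    cols = [np.ones(m)]
    for k in range(1, n_photons + 1):
        cols += [np.cos(k * phis), np.sin(k * phis)]
    design = np.column_stack(cols)  # the matrix of Eq. (3)
    return np.linalg.solve(design, np.asarray(r, dtype=float))
```

Any \(2n+1\) phases giving an invertible design matrix would do; the equally spaced choice makes the inverse the discrete Fourier transform of Eqs. (4)-(6).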
Notice that with the effort of finding \(A_{j}\) for each training point, we can find an explicit formula for the cost function and its minimum for each adjustable parameter. Hence, we can update the parameter with the value in which the cost function achieves the minimum, which is the key idea of SMO. This procedure is applied in the analysis of the following section.
## III Sequential minimum optimization with small sample size estimators
For each training point enumerated by index \(i\) and for each phase shifter characterized by its \(\varphi\), we measure \(2n+1\) probabilities \(r_{j}\) for \(2n+1\) phases, as in the previous section. From the probabilities we calculate the parameters \(A_{k}^{i}\) that allow us to define the function \(p^{i}(\varphi)\) as in (2), which enters the cost function
\[C(\varphi)=\frac{1}{N}\sum_{i=1}^{N}(p^{i}(\varphi)-y^{i})^{2}, \tag{7}\]
which is still an easy function of \(\varphi\) that can be efficiently minimized numerically. The phase of the phase shifter is updated as follows
\[\varphi\rightarrow\text{argmin }C(\varphi), \tag{8}\]
and we move to the next phase shifter. We repeat everything until a stopping criterion is met.
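A minimal sketch of one such update, assuming the coefficient vectors \(A^{i}=(A_{0}^{i},\ldots,A_{2n}^{i})\) have already been reconstructed for every training point:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smo_update(A_list, y):
    """One SMO step for a single phase shifter (Eqs. (7)-(8)):
    minimize the explicit trigonometric cost over phi and return
    the minimizer, which becomes the new value of the phase."""
    def p(A, phi):  # Eq. (2)
        n = (len(A) - 1) // 2
        return A[0] + sum(A[2 * k - 1] * np.cos(k * phi)
                          + A[2 * k] * np.sin(k * phi)
                          for k in range(1, n + 1))
    def cost(phi):  # Eq. (7)
        return np.mean([(p(A, phi) - yi) ** 2 for A, yi in zip(A_list, y)])
    res = minimize_scalar(cost, bounds=(-np.pi, np.pi), method="bounded")
    return res.x
```

Sweeping this update over all phase shifters, and repeating for several rounds, reproduces the training loop described above.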
However, in an experiment with photonic circuits, to estimate a single real probability \(r_{j}\) we use its estimator \(\tilde{r}_{j}=N_{success}/N_{all}\), which is a random variable with a (scaled) binomial distribution. The related mean square error scales with the sample size \(N_{all}\) as \(O(1/N_{all})\). To keep the error small, we typically collect data from a few hundred coincidence events. The sample size depends on the intensity of the source and the time we want to spend on the total experiment. For example, in [41], estimating a single probability \(\tilde{r}_{j}\) took around a second with a weak coherent state source that produced around 3000 two-photon coincidences in this interval. In experiments with multiple photons or with sources of entangled photons, the time for gathering similar statistics would be significantly increased and running the experiment would be inefficient.
However, in what follows, we show that we can successfully run our experiment with much smaller sample size estimators, i.e., when \(N_{all}\) is small. Indeed, some tasks can be solved satisfactorily with a sample size as small as \(N_{all}=1\), in which case \(\tilde{r}_{j}\) is binary.
Therefore, for each training point indexed by \(i\), using the estimators \(\tilde{r}_{j}^{i}\), we can calculate the estimators of \(A_{k}^{i}\), denoted by \(\tilde{A}_{k}^{i}\), as in Eq. (6) and define the estimator of \(p^{i}(\varphi)\), analogous to (2), as follows
\[\tilde{p}^{i}(\varphi)=\tilde{A}_{0}^{i}+\tilde{A}_{1}^{i}\cos(\varphi)+\tilde{A}_{2}^{i}\sin(\varphi)+... \tag{9}\]
We prove the following theorem, which shows that, for a large number of training points \(N\), a function built from these estimators converges to the cost function built from their expectation values.
**Theorem 1**: _Let \(N\) be the number of training points and_
\[\tilde{C}(\varphi)=\frac{1}{N}\sum_{i}^{N}\Big{(}\tilde{p}^{i}(\varphi)( \tilde{p}^{i})^{\prime}(\varphi)-2\tilde{p}^{i}(\varphi)y^{i}+(y^{i})^{2} \Big{)}, \tag{10}\]
_where \(\tilde{p}^{i}(\varphi)\) and \((\tilde{p}^{i})^{\prime}(\varphi)\) are functions (9) generated from separate independent measurements for the same training point. For an arbitrary unbiased estimator of the probabilities of experimental outcomes and for sufficiently large \(N\)_
\[\mathrm{argmin}\ \tilde{C}(\varphi)\rightarrow\mathrm{argmin}\ C(\varphi) \tag{11}\]
**Proof.** Let us expand \(\tilde{C}(\varphi)\)
\[\tilde{C}(\varphi) =\frac{1}{N}\sum_{i}^{N}\Big{[}\Big{(}\tilde{A}_{0}^{i}+\tilde{A }_{1}^{i}\cos(\varphi)+\tilde{A}_{2}^{i}\sin(\varphi)+...\Big{)}\] \[\qquad\qquad*\Big{(}(\tilde{A}_{0}^{i})^{\prime}+(\tilde{A}_{1}^ {i})^{\prime}\cos(\varphi)+(\tilde{A}_{2}^{i})^{\prime}\sin(\varphi)+...\Big{)}\] \[-2\Big{(}\tilde{A}_{0}^{i}+\tilde{A}_{1}^{i}\cos(\varphi)+ \tilde{A}_{2}^{i}\sin(\varphi)+...\Big{)}y^{i}+(y^{i})^{2}\Big{]}\] \[=\frac{1}{N}\sum_{i}^{N}\tilde{A}_{0}^{i}(\tilde{A}_{0}^{i})^{ \prime}+\frac{1}{N}\sum_{i}^{N}\tilde{A}_{0}^{i}(\tilde{A}_{1}^{i})^{\prime} \cos(\varphi)+...\] \[-2\frac{1}{N}\sum_{i}^{N}\tilde{A}_{0}^{i}y_{i}-2\frac{1}{N}\sum _{i}^{N}\tilde{A}_{1}^{i}y^{i}\cos(\varphi)+...\] \[+\frac{1}{N}\sum_{i}^{N}(y^{i})^{2}.\]
Let us look first at the term linear in \(\tilde{A}_{0}^{i}\). We express it by the estimators of the probabilities \(\tilde{r}_{k}^{i}\)
\[X=\frac{1}{N}\sum_{i}^{N}\tilde{A}_{0}^{i}y_{i}=\frac{1}{N}\sum_{i}^{N}(w_{0}^ {0}\tilde{r}_{0}^{i}+w_{1}^{0}\tilde{r}_{1}^{i}+...)y^{i}, \tag{12}\]
where the \(w\) are appropriate constants from equation (6). Notice that, as the \(\tilde{r}_{j}^{i}\) are random variables with expectation values \(r_{j}^{i}\), \(X\) is also a random variable, with expectation value
\[\frac{1}{N}\sum_{i}^{N}(w_{0}^{0}r_{0}^{i}+w_{1}^{0}r_{1}^{i}+...)y^{i} \tag{13}\]
Here, \(r_{j}^{i}\) are the values of the real probabilities from the experiment. The standard deviation of \(X\) scales with \(N\) as \(\frac{\sigma^{i}}{\sqrt{N}}\), where \((\sigma^{i})^{2}\) is the variance of \(\tilde{r}_{j}^{i}\). The expectation value of \(X\) is thus the same as the term computed from the perfect probabilities \(r_{j}^{i}\), with the standard deviation vanishing with \(N\). The same reasoning holds for the other linear terms of \(\tilde{C}(\varphi)\). This implies that the linear terms of \(\tilde{C}(\varphi)\) tend to the analogous terms of \(C(\varphi)\).
Let us consider now the quadratic term
\[Y =\frac{1}{N}\sum_{i}^{N}\tilde{A}_{0}^{i}(\tilde{A}_{0}^{i})^{\prime}\] \[=\frac{1}{N}\sum_{i}^{N}(w_{0}^{0}\tilde{r}_{0}^{i}+w_{1}^{0} \tilde{r}_{1}^{i}+...)(w_{0}^{0}(\tilde{r}_{0}^{i})^{\prime}+w_{1}^{0}(\tilde{ r}_{1}^{i})^{\prime}+...)\] \[=\frac{1}{N}\sum_{i}^{N}w_{0}^{0}w_{0}^{0}\tilde{r}_{0}^{i}( \tilde{r}_{0}^{i})^{\prime}+... \tag{14}\]
Since \(\tilde{r}_{0}^{i}\) and \((\tilde{r}_{0}^{i})^{\prime}\) are independent random variables, each term \(w_{0}^{0}w_{0}^{0}\tilde{r}_{0}^{i}(\tilde{r}_{0}^{i})^{\prime}\) has expectation value \((w_{0}^{0})^{2}(r_{0}^{i})^{2}\), and similarly for the other terms of \(Y\). Therefore, the expectation value of \(Y\) is \(\frac{1}{N}\sum_{i}^{N}(w_{0}^{0})^{2}(r_{0}^{i})^{2}+...\), which is the same as the analogous
term in \(C(\varphi)\). By the same argument as previously, the variance of \(Y\) vanishes with \(N\). The same reasoning can be applied to the other quadratic terms.
In conclusion, the coefficients in front of the trigonometric functions in \(\tilde{C}(\varphi)\) tend to the corresponding coefficients of \(C(\varphi)\) when \(N\) is sufficiently large. QED.
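The role of the two independent measurement sets in Eq. (10) can be checked numerically: a single finite-sample estimate is unbiased for \(r\) but not for \(r^{2}\), while the product of two independent estimates is. A minimal sketch (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def r_tilde(r_true, n_all):
    """Finite-sample estimator N_success / N_all; binary when n_all = 1."""
    return rng.binomial(n_all, r_true) / n_all

r, n_all, trials = 0.3, 1, 200_000
biased = np.mean([r_tilde(r, n_all) ** 2 for _ in range(trials)])
unbiased = np.mean([r_tilde(r, n_all) * r_tilde(r, n_all)  # two draws
                    for _ in range(trials)])
print(biased, unbiased, r ** 2)  # only the product of independent
                                 # estimates converges to r^2 = 0.09
```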
## IV Simulation of photonic classifier with data reuploading
As a demonstration of this method, let us simulate a fragment of a simple optical universal classifier. The idea of a universal quantum classifier with a single qubit was developed in [31]. In this scenario, two-dimensional data points \(x\) to be classified are encoded as parameters of a single-qubit unitary transformation \(U(\varphi,x)\) of the qubit initialized in the ground state \(|0\rangle\). This transformation also contains the free parameters \(\varphi\), which are tuned during the learning process. Similarly, more unitary gates can be introduced sequentially with different free parameters but the same data \(x\), as in figure 2. This provides the nonlinearity of the expectation values of the output states in the variable encoding the data. After the circuit, the qubit is measured, and the classification is based on the results. The free parameters \(\varphi\) are adjusted during the training of the system.
The authors of [31] provide arguments that, in this way, one can approximate any function of the input points arbitrarily well, achieving a universal classifier. The arguments are based on the analogy to the universality of neural networks.
In [41], the idea of the universal quantum classifier was implemented on an optical circuit of integrated photonics, using the advantages of this platform. The original scheme from [31] was adapted for bosonic systems. The classification based on the data reuploading scheme was performed experimentally on an integrated photonic quantum optical circuit. Instead of the qubit, a two-mode state of two photons was used. In this way, losses were eliminated from the system, as the coincidences of two photons were measured.
We simulate the optical classifier shown in figure 3 to classify two-dimensional points into two regions, as shown in figure 4\(a)\). The input state is taken as a NOON state
\[|\Psi_{in}\rangle=\frac{1}{\sqrt{2}}(|20\rangle+|02\rangle) \tag{15}\]
(although we would obtain similar results with other input states). The circuit consists of phase-shifters and fixed beam-splitters. The phases are functions of the data and of the free parameters adjustable during the training. Specifically, each phase is \(\varphi_{j}=\phi_{j_{1}}+\phi_{j_{2}}x_{k}\), where \(x_{k}\) is either \(x_{1}\), the horizontal coordinate of a 2D data point, or \(x_{2}\), the vertical coordinate. Thus, the circuit from figure 3 contains 6 trainable parameters \(\phi\). Notice that the same data are re-uploaded repeatedly. During the training, we measure at the end of the circuit the estimator of the probability of the state \(|11\rangle\). We train the circuit by minimizing the cost function defined in (10).
After the training, we generate new testing points. To test the classifier, we calculate the exact probability of \(|11\rangle\) and classify a given point as 0 if the probability is smaller than a threshold; otherwise, the class is denoted as 1. We count the results which are: true positive (TP, classifier indicates 1 for class 1), true negative (TN, classifier indicates 0 for class 0), false positive (FP, classifier indicates 1 for class 0), and false negative (FN, classifier indicates 0 for class 1). The quality of performance is characterised by the average success probability, which is an average of the true positive and true negative ratios:
\[P=\frac{1}{2}TPR+\frac{1}{2}TNR, \tag{16}\]
where
\[TPR =\frac{TP}{TP+FN},\] \[TNR =\frac{TN}{TN+FP}. \tag{17}\]
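A minimal sketch of this evaluation step (the threshold value is an illustrative choice):

```python
import numpy as np

def average_success_probability(y_true, p11, threshold=0.5):
    """Eqs. (16)-(17): classify as 1 when P(|11>) exceeds the threshold,
    then average the true-positive and true-negative rates."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(p11) > threshold).astype(int)
    tpr = np.mean(y_pred[y_true == 1] == 1)
    tnr = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * tpr + 0.5 * tnr
```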
We repeat the simulation for different estimators of the probability, given by the sample sizes \(N_{all}=200,50,10,5,2,1\). The number of training points is taken as \(N=300\). The training points and the starting point are the same in the experiments with different \(N_{all}\). The results are shown in figures 4 and 5.
The numerical tests were performed with 300 training data points, as shown in figure 4\(a)\), with 10 rounds of updating all 6 parameters of the circuit. We observe that using an estimator of the probability based on 200 measurements gives the same efficiency as the algorithm with the exact probabilities, figure 4\(b)\). Moreover, we observe that the efficiency changes only slightly when \(N_{all}\) is reduced down to \(N_{all}=1\), where the estimator of a single probability is a
Figure 3: A two-mode unitary transformation built of phase-shifters and the Mach-Zehnder interferometer.
Figure 2: Layers of the single qubit classifier with data reuploading [31].
binary variable. For all estimators, the features of the sets are clearly visible. Figure 5 shows the plot of the probability of success, averaged over different series of training data, versus the number of measurements \(N_{all}\) used by the estimators. The averages were calculated by repeating all experiments for 10 different training data sets.
Based on our numerical example, we conclude that an estimator based on 1 measurement is still sufficiently good to recognize points from different classes with a high probability of success. Using these estimators allows us to reduce the run-time of an experiment based on 200 measurements per probability by a factor of 100.
## V Discussion
In the context of machine learning with quantum optical circuits, we study a training algorithm, the so-called sequential minimum optimization [39; 41], which gives an update rule for the parameters in the optimization of the cost function. This algorithm is a global-search alternative to the local search of gradient descent. In the original algorithm, the cost function is constructed from measured estimators of the expectation values of variables. In some types of experiments, in which the interesting events are rare, building estimators that approximate the expectation values well may take a prohibitively long time. Therefore, following the research on gradient descent algorithms with small sample size estimators [43], we develop the theory of the sequential minimum optimization algorithm with small sample size estimators. Our theory allows weak sources of light to be used to run still-efficient supervised machine learning tasks.
Figure 4: a) Training points in two classes. b)-h) Classifier trained with, respectively: exact probabilities, estimators from \(N_{all}=200\), estimators from \(N_{all}=50\), estimators from \(N_{all}=10\), estimators from \(N_{all}=5\), estimators from \(N_{all}=2\), estimators from \(N_{all}=1\). In the upper right corner of each figure, the average success probability is written.
Figure 5: Average of the probability of success as a function of the number of measurements used by the estimator. The average is over different sets of training data. The task and the structure of the circuit are fixed. The mark ”inf” denotes the real value of the probability. The error bars show the standard deviation.
Our theory was demonstrated in a photonic circuit experiment but is not restricted to it. It can be used as well on other platforms where the events of interest are rare.
_Acknowledgements_ The authors are grateful to Jonathan P. Dowling for his insightful suggestions and encouragement (and plenty of humor!) regarding this work. This work was initially started when the authors were working at the National Institute of Information and Communications Technology (NICT). Jon was a frequent visitor to NICT and initiated many projects and collaborations, of which this work is a part. He will be immensely missed. This work was supported by JST PRESTO Grant No. JPMJPR1864, The Murata Science Foundation, JST CREST Grant No. JPMJCR1772, and JST COI-NEXT Grant No. JPMJPF221.
|
2307.12861 | Unconventional superconductivity protected from disorder on the kagome
lattice | Motivated by the recent discovery of superconductivity in the kagome
$A$V$_3$Sb$_5$ ($A$: K, Rb, Cs) metals, we perform a theoretical study of the
symmetry-allowed superconducting orders on the two-dimensional kagome lattice
with focus on their response to disorder. We uncover a qualitative difference
between the robustness of intraband spin-singlet (even-parity) and spin-triplet
(odd-parity) unconventional superconductivity to atomic-scale nonmagnetic
disorder. Due to the particular sublattice character of the electronic states
on the kagome lattice, disorder in spin-singlet superconducting phases is only
weakly pair-breaking despite the fact that the gap structure features sign
changes. By contrast, spin-triplet condensates remain fragile to disorder on
the kagome lattice. We demonstrate these effects in terms of the absence of
impurity bound states and an associated weak disorder-induced $T_c$-suppression
for spin-singlet order. We also discuss the consequences for quasi-particle
interference and their inherent tendency for momentum-space anisotropy due to
sublattice effects on the kagome lattice. For unconventional kagome
superconductors, our results imply that any allowed spin-singlet order,
including for example $d+id$-wave superconductivity, exhibits a
disorder-response qualitatively similar to standard conventional $s$-wave
superconductors. | Sofie Castro Holbæk, Morten H. Christensen, Andreas Kreisel, Brian M. Andersen | 2023-07-24T14:59:27Z | http://arxiv.org/abs/2307.12861v2 | # Unconventional superconductivity protected from disorder on the kagome lattice
###### Abstract
Motivated by the recent discovery of superconductivity in the kagome \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\): K, Rb, Cs) metals, we perform a theoretical study of the symmetry-allowed superconducting orders on the two-dimensional kagome lattice with focus on their response to disorder. We uncover a qualitative difference between the robustness of intraband spin-singlet (even-parity) and spin-triplet (odd-parity) unconventional superconductivity to atomic-scale nonmagnetic disorder. Due to the particular sublattice character of the electronic states on the kagome lattice, disorder in spin-singlet superconducting phases becomes non-pair-breaking despite the fact that the gap structure features sign changes. By contrast, spin-triplet condensates remain fragile to disorder on the kagome lattice. We demonstrate these effects in terms of the absence of impurity bound states and an associated weak disorder-induced \(T_{c}\)-suppression for spin-singlet order. We also discuss the consequences for quasi-particle interference and their inherent tendency for momentum-space anisotropy due to sublattice effects on the kagome lattice. For unconventional kagome superconductors, our results imply that any allowed spin-singlet order, including for example \(d+id\)-wave superconductivity, exhibits a disorder-response qualitatively similar to standard conventional \(s\)-wave superconductors.
## I Introduction
The irreducible representations of a given point group dictate the allowed homogeneous superconducting order parameters of materials [1]. However, additional complexity of e.g. sublattice degrees of freedom, multiple active orbitals at the Fermi level, associated Hund's interactions, or strong spin-orbit coupling adds significant richness to the problem [2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Whatever superconducting gap structure eventually gets singled out in a given material depends on the particular pairing mechanism operative in that system. Determining experimentally which order parameter is present in specific materials can be a tremendous challenge due to the low temperature and often minute energy scales of the superconducting problem. In this respect, disorder can play an important role because it can act as a phase-sensitive probe. Disorder effects can be studied in terms of e.g. atomically-resolved impurity bound states detectable by local probes as well as the overall disorder-averaged superconducting response as seen by e.g. thermodynamic probes or transport measurements. If nonmagnetic disorder is able to generate in-gap bound states or if it severely affects superconductivity, it is typically a strong indicator of an unconventional superconducting condensate [12].
The discovery of superconductivity in vanadium-based kagome metals \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\): K, Rb, Cs) has reinvigorated the discussion of conventional versus unconventional pairing in novel quantum materials [13]. The kagome lattice is particularly intriguing since its basic electronic structure features both flat bands, van Hove singularities, and Dirac points, as seen in Fig. 1. In addition, the Fermi surface distribution of sublattice weights of the eigenstates of the kagome tight-binding bands, also illustrated in Fig. 1, can play important roles for determining e.g. the leading instabilities arising from electronic interactions [14; 15]. This may indeed be of relevance in the \(A\)V\({}_{3}\)Sb\({}_{5}\) compounds where superconductivity appears in proximity to a charge-density wave (CDW) phase, which has been studied intensely both theoretically [16; 17; 18; 19; 20; 21; 22] and experimentally [23; 24; 25; 26; 27]. For the superconducting phase, some theoretical studies have explored Cooper pairing arising from purely electronic fluctuations [14; 16; 17; 19; 28; 29; 30; 31; 32; 33; 34]. Other works have pointed to the important role of phonons for the generation of superconductivity, either on their own or in conjunction with electronic correlations [35; 36; 37; 38; 39; 40].
Experimentally, the \(A\)V\({}_{3}\)Sb\({}_{5}\) kagome metals enter their superconducting phase at \(T_{c}\sim 1-3\)K [41; 13; 42]. The critical temperature \(T_{c}\), however, may be significantly enhanced by uniaxial strain or hydrostatic pressure, e.g., for CsV\({}_{3}\)Sb\({}_{5}\), \(T_{c}\sim 8\)K at \(\sim 2\) GPa [43; 44; 45; 46]. The detailed nature and origin of electronic pairing in \(A\)V\({}_{3}\)Sb\({}_{5}\) remains controversial at present, with evidence for both standard nodeless non-sign-changing gaps and nodal unconventional superconducting order. For example, several STM measurements have reported 'V'-shaped STM conductance spectra [47; 26; 48], and thermal conductivity data has been interpreted in favor of nodal superconductivity [49]. Similarly, muon spin spectroscopy experiments on RbV\({}_{3}\)Sb\({}_{5}\) and KV\({}_{3}\)Sb\({}_{5}\) samples report nodal gaps at ambient pressure [50], and a pressure-tuned transition to nodeless order with additional evidence for spontaneous time-reversal symmetry breaking (TRSB) setting in at \(T_{c}\) for high pressures \(\sim 2\) GPa [50; 51]. On the other hand, a Knight shift suppression and the existence of a Hebel-Slichter coherence peak in the spin-lattice relaxation observed by nuclear magnetic resonance measurements point to \(s\)-wave spin-singlet superconductivity [52]. Penetration depth measurements and specific heat data on CsV\({}_{3}\)Sb\({}_{5}\) have also been analysed in terms of an anisotropic non-sign-changing gap with a finite small minimum gap [41; 45; 53; 54; 55]. Recent laser ARPES measurements find isotropic (momentum-independent) spectroscopic gaps [56]. Nodeless superconductivity also appears consistent with multiband features |
2302.06012 | Computation with Large Advice | In this paper, we consider a new direction of computation, which we call
computation with large advice. We mainly consider constant space computation
with large advice in Turing machines, and prove the following facts: (i) The
class of decision problems solvable by a constant space Turing machine with
polynomial-size advice includes nonuniform-{\sf NC}$^1$, (ii) The class of
decision problems solvable by a constant space Turing machine with
quasipolynomial-size advice equals nonuniform-{\sf polyL}. The facts mean
constant space computation with large advice has unexpected computational
power. On the other hand, we mention bounded time computation with large
advice, and attempt to propose a concept of ``algorithms with large advice''.
In the proposal, advice is precomputed data for a problem and a fixed instance
size, and we expect efficient algorithms by large or huge advice. | Hiroki Morizumi | 2023-02-12T21:58:09Z | http://arxiv.org/abs/2302.06012v2 | # Computation with Large Advice
###### Abstract
In this paper, we consider a new direction of computation, which we call computation with large advice. We mainly consider constant space computation with large advice in Turing machines, and prove the following facts: (i) The class of decision problems solvable by a constant space Turing machine with polynomial-size advice includes nonuniform-\(\mathsf{NC}^{1}\), (ii) The class of decision problems solvable by a constant space Turing machine with quasipolynomial-size advice equals nonuniform-polyL. The facts mean constant space computation with large advice has unexpected computational power. On the other hand, we mention bounded time computation with large advice, and attempt to propose a concept of "algorithms with large advice". In the proposal, advice is precomputed data for a problem and a fixed instance size, and we expect efficient algorithms by large or huge advice.
## 1 Introduction
In this paper, we prove the following theorems.
**Theorem 1**.: _The class of decision problems solvable by a constant space Turing machine with polynomial-size (sequential access) advice includes nonuniform-\(\mathsf{NC}^{1}\)._
**Theorem 2**.: _The class of decision problems solvable by a constant space Turing machine with quasipolynomial-size (sequential access) advice equals nonuniform-polyL._
The two theorems imply an unexpected power of large advice in computation. We consider "computation with large advice" in this paper.
### Advice
Advice of Turing machines is an extra input which depends only on the size of the main input. In this paper, advice is given on a read-only, sequential-access tape. Advice is closely related to the nonuniformity of complexity classes. For example, the nonuniform variant of \(\mathsf{P}\) is \(\mathsf{P/poly}\), which is the class of decision problems solvable by a polynomial time Turing machine with polynomial size advice. The nonuniform variant of \(\mathsf{L}\) is \(\mathsf{L/poly}\), which is the class of decision problems solvable by a logarithmic space Turing machine with polynomial size advice.
In \(\mathsf{P/poly}\) and \(\mathsf{L/poly}\), advice gives no computational power beyond nonuniformity. In this paper, we consider large advice which gives extra power beyond nonuniformity.
### Implications of Theorem 2
Nonuniform-polyL (\(=\mathsf{polyL/quasipoly}\)) includes various complexity classes, for example the following.
* \(\mathsf{L}\) with quasipolynomial-size (sequential access) advice
* NL with quasipolynomial-size (sequential access) advice
The classes above include the class of decision problems solvable by a constant space Turing machine with quasipolynomial-size (sequential access) advice. Furthermore, nonuniform-polyL equals
* nonuniform-NC of quasipolynomial size (i.e., the class of decision problems solvable by circuits with quasipolynomial size, depth \(\log^{O(1)}n\), and fan-in 2)
Thus, by Theorem 2, all complexity classes above are equivalent.
### The Contents of The Paper
The main results of this paper are Theorem 1 and Theorem 2. In Section 3, we describe our ideas and contribution. In Section 4 and Section 5, we prove Theorem 1 and Theorem 2, respectively.
Theorem 1 and Theorem 2 raise questions about, and interest in, the power of large advice in computation. In Section 6, we discuss further studies of large advice. In particular, we mention the power of large advice for algorithms. Although the contents of Section 6 are conceptual, the discussion strengthens the motivation of this paper.
**Note.** In Theorem 1 and Theorem 2, we assume that advice is sequential access, and the theorems strongly depend on sequential access. On the other hand, we also discuss random access advice for future works in Section 6.
## 2 Preliminaries
### Definitions
#### 2.1.1 Branching programs and Boolean circuits
A _branching program_ is a directed acyclic graph. The nodes of out-degree 2 are called _inner nodes_ and labeled by a variable. The nodes of out-degree 0 are called _sinks_ and labeled by 0 or 1. For each inner node, one of the outgoing edges is labeled by 0 and the other one is labeled by 1. There is a single specific node called the _start node_. An assignment to the variables determines a computation path from the start node to a sink node. The value of the sink node is the output of the branching program. If the nodes are arranged into a sequence of levels with edges going only from one level to the next, then the _width_ is the size of the largest level.
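To make the definition concrete, the following minimal sketch evaluates a branching program by following its computation path; the encoding of nodes as tuples is our own illustrative choice.

```python
def evaluate_bp(nodes, start, assignment):
    """Follow the computation path from the start node to a sink.
    nodes maps a node id to either ('sink', 0 or 1) or
    ('inner', variable, successor_on_0, successor_on_1)."""
    node = nodes[start]
    while node[0] != 'sink':
        _, var, next0, next1 = node
        node = nodes[next1 if assignment[var] else next0]
    return node[1]

# A small program computing x0 AND x1 (labels are hypothetical).
prog = {
    'a': ('inner', 'x0', 'zero', 'b'),
    'b': ('inner', 'x1', 'zero', 'one'),
    'zero': ('sink', 0),
    'one': ('sink', 1),
}
print(evaluate_bp(prog, 'a', {'x0': 1, 'x1': 1}))  # -> 1
```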
_Circuits_ are formally defined as directed acyclic graphs. The nodes of in-degree 0 are called _inputs_, and each one of them is labeled by a variable or by a constant 0 or 1. The other nodes are called _gates_, and each one of them is labeled by a Boolean function. The _fan-in_ of a node is the in-degree of the node, and the _fan-out_ of a node is the out-degree of the node. In this paper, the gates are AND gates of fan-in two, OR gates of fan-in two, and NOT gates. There is a single specific node called _output_. The _size_ of a circuit is the number of gates in the circuit. The _depth_ of a circuit is the length of the longest path in the circuit.
#### 2.1.2 Complexity classes
L is the class of decision problems solvable by a logarithmic space Turing machine. NL is the nondeterministic variant of L. polyL is the class of decision problems solvable by a polylogarithmic space Turing machine. polyL/quasipoly is the class of decision problems solvable by a polylogarithmic space Turing machine with quasipolynomial size advice.
Let \(n\) be the number of inputs in circuits. NC\({}^{1}\) is the class of decision problems solvable by a uniform family of circuits with polynomial size, depth \(O(\log n)\), and fan-in 2. NC is the class of decision problems solvable by a uniform family of circuits with polynomial size, depth \(\log^{O(1)}n\), and fan-in 2.
#### 2.1.3 The sorting problem
The sorting problem is defined as follows in this paper.
**Input:**\(n\) numbers in the set \(\{0,1,\ldots,k\}\).
**Output:** A permutation \(\langle a_{1},a_{2},\ldots,a_{n}\rangle\) of the input such that \(a_{1}\leq a_{2}\leq\cdots\leq a_{n}\).
Note that the \(n\) numbers are restricted to the set \(\{0,1,\ldots,k\}\).
### A Conversion Lemma
We use the following lemma in the proof of Theorem 2 (in Section 5).
**Lemma 3**.: _Any quasipolynomial-size branching program can be converted to a circuit of polylogarithmic depth._
Lemma 3 is proved by the following lemma.
**Lemma 4**.: _If any branching program of size \(s\) can be converted to a circuit of depth \(d\), then any branching program of size \(2s\) can be converted to a circuit of depth \(d+\lceil\log s^{2}\rceil+2\)._
Proof.: Let \(G\) be a branching program of size \(2s\). Let \(G_{1}\) and \(G_{2}\) be the first \(s\) nodes and the last \(s\) nodes, respectively, in an arbitrary topologically sorted order. Let \(E_{1}\) be the set of edges between \(G_{1}\) and \(G_{2}\). The number of edges in \(E_{1}\) is at most \(s^{2}\). All paths from the start node to a sink node contain one edge in \(E_{1}\). For each edge \((u,v)\in E_{1}\), we check the existence of a path from the start node to \(u\) inside \(G_{1}\) and of a path from \(v\) to the \(1\) sink node inside \(G_{2}\); each check is a branching program of size at most \(s\) and is therefore, by assumption, computed by a circuit of depth \(d\). One AND gate per edge combines the two checks, and an OR tree of depth \(\lceil\log s^{2}\rceil\) over the at most \(s^{2}\) edges produces the output, for a total depth of at most \(d+\lceil\log s^{2}\rceil+2\).
Proof of Lemma 3.: We apply Lemma 4 recursively, doubling the program size at each step; the resulting depth is polylogarithmic in the program size.
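Unrolling the recursion makes the resulting depth bound explicit; the following calculation is our reading of the argument, assuming a constant-depth base case. Writing \(d(\cdot)\) for the achievable depth and taking \(s=2^{j}\) in Lemma 4, we get \(d(2^{j+1})\leq d(2^{j})+2j+2\), and hence
\[d(2^{k})\ \leq\ d(1)+\sum_{j=0}^{k-1}(2j+2)\ =\ d(1)+k^{2}+k\ =\ O(k^{2}).\]
For a quasipolynomial size bound \(2^{k}\) with \(k=\log^{c}n\), the resulting depth is \(O(\log^{2c}n)\), i.e., polylogarithmic.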
## 3 The Key Ideas and Our Contribution
Our results are closely related to Barrington's theorem, a well-known result in computational complexity theory. In the 1980s, Barrington proved the following theorem.
**Theorem 5** ([1]).: _The class of decision problems solvable by a nonuniform family of polynomial-size \(5\)-width branching programs equals nonuniform-\(\mathsf{NC}^{1}\)._
The theorem is a surprising result, since a \(5\)-width branching program has only at most five possible states at each stage of the computation. It has thus been known that computation with constant working space can have unexpected computational power. In this paper, we recast the computation in Barrington's theorem in terms of Turing machines with large advice, which is our main contribution and gives new insight into computation.
## 4 Proof of Theorem 1
We first prove a relation between constant space Turing machines with polynomial-size advice and polynomial-size \(5\)-width branching programs.
**Lemma 6**.: _Constant space Turing machines with polynomial-size advice simulate polynomial-size \(5\)-width branching programs._
Proof.: We place an encoding of the branching program on the advice tape. The execution of the branching program is then simulated with sequential access to the encoding. (The Turing machine does not store a position on the advice tape; the advice head moves in one direction only.) The encoding of the variables queried by the branching program requires care: the encoding of variable \(x_{i}\) consists of one starting symbol followed by \(n\) symbols, the \(i\)-th of which is marked, so that the input head can be moved in lockstep with the advice head instead of storing an \(O(\log n)\)-bit index.
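The following sketch mirrors this simulation. A width-2 program is used only to keep the example small, and the queried variable is stored as an explicit index for readability; in the actual encoding it is marked in unary, as described above, so that no index needs to be stored. All names and the data layout are our own illustrative choices.

```python
# Sketch of the simulation in the proof of Lemma 6: the advice is streamed
# once, level by level, and the machine's only memory is the current node
# of the current level (a constant amount of state for constant width).

def simulate(levels, x):
    """levels: iterable of lists with one (var, succ0, succ1) entry per node."""
    state = 0                          # start node of the first level
    for level in levels:               # a single sequential pass over the advice
        var, succ0, succ1 = level[state]
        state = succ1 if x[var] else succ0
    return state                       # convention here: final node 1 = accept

# Width-2 toy program of length 2 computing x[0] XOR x[1]:
levels = [[(0, 0, 1), (0, 0, 1)],      # level 1 queries x[0]
          [(1, 0, 1), (1, 1, 0)]]      # level 2 queries x[1]
assert simulate(levels, [1, 0]) == 1 and simulate(levels, [1, 1]) == 0
```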
We prove Theorem 1.
Proof of Theorem 1.: By Theorem 5 and Lemma 6, the theorem holds.
## 5 Proof of Theorem 2
The proof outline of Theorem 2 is similar to that of Theorem 1. To prove Theorem 2, we prove a quasipolynomial-size variant of Barrington's theorem.
### The Quasipolynomial-Size Variant of Barrington's Theorem
The following theorem is the quasipolynomial-size variant of Barrington's theorem. (Theorem 7 may be of independent interest as a variant of Barrington's theorem, although the theorem is not the main aim of the paper.)
**Theorem 7**.: _The class of decision problems solvable by a nonuniform family of quasipolynomial-size \(5\)-width branching programs equals nonuniform-polyL._
To prove Barrington's theorem, Barrington proved the following lemma. The lemma is also useful for the proof of Theorem 7. (We have added some modifications to fit our setting.)
**Lemma 8** (Theorem 1 of [1] (with some modifications)).: _Any circuit with depth \(d\) and fan-in 2 can be converted to a \(5\)-width branching program with length at most \(4^{d}\)._
We prove Theorem 7.
Proof of Theorem 7.: By Lemma 3 and Lemma 8, any quasipolynomial-size branching program can be converted to a quasipolynomial-size \(5\)-width branching program. Since the class of decision problems solvable by a nonuniform family of quasipolynomial-size branching programs equals nonuniform-polyL, the theorem holds.
### Proof of Theorem 2
The argument in this subsection closely parallels Section 4.
**Lemma 9**.: _Constant space Turing machines with quasipolynomial-size advice simulate quasipolynomial-size \(5\)-width branching programs._
Proof.: The lemma can be proved in the same way as Lemma 6.
We prove Theorem 2.
Proof of Theorem 2.: Nonuniform-polyL obviously includes the class of decision problems solvable by a constant space Turing machine with quasipolynomial-size advice. By Theorem 7 and Lemma 9, the theorem holds.
## 6 Towards Further Studies of Large Advice
The main results of the paper (Theorem 1 and Theorem 2) raise questions and interest in the power of large advice in computation. In this section, we discuss further studies of computation with large advice.
### Bounded Time Computation and Large Advice
The power of large advice in bounded time computation depends on the mode of access to the advice. If the advice is sequentially accessed, advice beyond polynomial size is obviously useless for polynomial time computation, since superpolynomial advice cannot even be read in polynomial time. However, if the advice is randomly accessed, then large advice is useful. For example, advice of exponential size can store the answer for every input. Even if \(\mathsf{P}\neq\mathsf{NP}\), the problems in \(\mathsf{NP}\) could be solved efficiently if \(\mathsf{NP}\) were included in \(\mathsf{P}\) with advice of reasonable size and the advice could be prepared. The observation above suggests the idea of algorithms with large advice, which is discussed in the next subsection.
### Algorithms with Large Advice
In this subsection, we propose a concept of "algorithms with large advice". In our proposal, advice is precomputed data for a problem and a fixed instance size, and we expect large or huge advice to yield efficient algorithms. (We call the data advice, since the data correspond to the advice of Turing machines.) In the following example, huge advice accelerates the merge sort algorithm for the sorting problem from \(O(n\log n)\) time to \(O(n\log\log n)\) time.
**Theorem 10**.: _A modified merge sort algorithm runs in \(O(n\log\log n)\) time, if \(n\) is fixed and \(\tilde{O}(k^{\frac{n}{\log n}})\) advice (i.e., precomputed data of \(\tilde{O}(k^{\frac{n}{\log n}})\) space) is given._
Proof.: We execute the first \(\log\log n\) recursion levels of the usual merge sort, which leaves blocks of \(\frac{n}{\log n}\) numbers each, and sort every block by a single lookup in the precomputed data. The precomputed data contains, for every possible sequence of \(\frac{n}{\log n}\) numbers from \(\{0,1,\ldots,k\}\), its sorted version, which amounts to \(\tilde{O}(k^{\frac{n}{\log n}})\) space. The \(\log\log n\) merge levels take \(O(n\log\log n)\) time, and the table lookups take \(O(n)\) time in total.
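A toy rendering of this algorithm, with the advice modelled as a lookup table; building the table inline, instead of receiving it as precomputed data, and all names are our own illustrative choices.

```python
from itertools import product

def sort_with_advice(a, table, block):
    """Merge sort that stops recursing at blocks of size <= `block` and
    sorts those by a single lookup in `table` (the role of the advice)."""
    if len(a) <= block:
        return list(table[tuple(a)])
    mid = len(a) // 2
    left = sort_with_advice(a[:mid], table, block)
    right = sort_with_advice(a[mid:], table, block)
    out, i, j = [], 0, 0                 # standard linear-time merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# "Advice": the sorted version of every sequence of at most `block` numbers
# from {0, ..., k}; its size grows like (k+1)^block, huge for block = n/log n.
k, block = 2, 3
table = {p: tuple(sorted(p))
         for m in range(block + 1)
         for p in product(range(k + 1), repeat=m)}

assert sort_with_advice([2, 0, 1, 1, 2, 0], table, block) == [0, 0, 1, 1, 2, 2]
```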
The sorting problem is an example for the framework of algorithms with large advice. We are interested in algorithms for computationally hard problems such as \(\mathsf{NP}\)-hard problems, e.g., the SAT problem.
**Remark 1**.: In one of the situations we envision, advice (i.e., precomputed data) is stored somewhere on the internet as public data. The modern, well-developed internet offers a good environment for algorithms with large advice.
**Remark 2**.: In this paper, we described a preliminary and minimal framework for algorithms with large advice. For actual execution, we also need an efficient algorithm to compute the advice (i.e., the precomputed data), although this algorithm runs only once and the obtained advice can be stored.
|
2306.02783 | Unraveling Femtosecond Spin and Charge Dynamics with EUV T-MOKE
Spectroscopy | The magneto-optical Kerr effect (MOKE) in the extreme ultraviolet (EUV)
regime has helped to elucidate some of the key processes that lead to the
manipulation of magnetism on ultrafast timescales. However, as we show in this
paper, the recently introduced spectrally-resolved analysis of such data can
lead to surprising experimental observations, which might cause
misinterpretations. Therefore, an extended analysis of the EUV magneto-optics
is necessary. Via experimental determination of the dielectric tensor, we find
here that the non-equilibrium excitation in an ultrafast magnetization
experiment can cause a rotation of the off-diagonal element of the dielectric
tensor in the complex plane. In direct consequence, the commonly analyzed
magneto-optic asymmetry may show time-dependent behaviour that is not directly
connected to the magnetic properties of the sample. We showcase such critical
observations for the case of ultrafast magnetization dynamics in Ni, and give
guidelines for the future analysis of spectrally-resolved magneto-optical data
and its comparison with theory. | Henrike Probst, Christina Möller, Maren Schumacher, Thomas Brede, John Kay Dewhurst, Marcel Reutzel, Daniel Steil, Sangeeta Sharma, G. S. Matthijs Jansen, Stefan Mathias | 2023-06-05T11:22:26Z | http://arxiv.org/abs/2306.02783v2 | # Unraveling Femtosecond Spin and Charge Dynamics with EUV T-MOKE Spectroscopy
###### Abstract
The magneto-optical Kerr effect (MOKE) in the extreme ultraviolet (EUV) regime has helped to elucidate some of the key processes that lead to the manipulation of magnetism on ultrafast timescales. However, as we show in this paper, the recently introduced spectrally-resolved analysis of such data can lead to surprising experimental observations, which might cause misinterpretations. Therefore, an extended analysis of the EUV magneto-optics is necessary. Via experimental determination of the dielectric tensor, we find here that the non-equilibrium excitation in an ultrafast magnetization experiment can cause a rotation of the off-diagonal element of the dielectric tensor in the complex plane. In direct consequence, the commonly analyzed magneto-optic asymmetry may show time-dependent behaviour that is not directly connected to the magnetic properties of the sample. We showcase such critical observations for the case of ultrafast magnetization dynamics in Ni, and give guidelines for the future analysis of spectrally-resolved magneto-optical data and its comparison with theory.
## I Introduction
With the advent and success of laser-based femtosecond element-specific M-edge magneto-optical Kerr spectroscopy in 2009 [1], a very successful series of experiments using this technique has helped to verify and elucidate a number of key findings in ultrafast magnetism [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. As more powerful laser systems have become available over the years, this experimental technique is becoming available in a growing number of laboratories and has developed into an important workhorse for studying ultrafast magnetization dynamics [14; 15; 16; 17; 18]. However, the introduction of a new experimental capability also requires an ongoing check of the validity of the collected data, and a crosscheck of the results against complementary experimental techniques. Such validation has been done, for example, for the case of photo-induced ultrafast spin currents and the heavily discussed relative delay of the Fe and Ni demagnetization in a Permalloy sample [2; 14; 17; 19; 20; 21].
Currently, spectrally-resolved analysis of femtosecond transverse magneto-optical Kerr effect (T-MOKE) data in the extreme-ultraviolet (EUV) is used more and more frequently, for example for the verification of the so-called optical intersite spin-transfer (OISTR) effect [7; 8; 9; 22; 23]. However, it has already been recognized that the interpretation of such data calls for a more detailed analysis of the collected T-MOKE asymmetry data [24; 25]. Similarly, the comparison of magneto-optical spectroscopy data with state-of-the-art time-dependent density functional theory (TDDFT) and similar theoretical methods requires careful analysis [26; 27].
The urgency for a sophisticated analysis of spectrally-resolved T-MOKE data can be best introduced with an exemplary measurement of ultrafast magnetization dynamics in the prototypical 3d ferromagnet Ni. Fig. 1 shows transient T-MOKE asymmetry data, measured with our setup [17], and analyzed at 63.6 eV (\(\pm 2\%\)) EUV photon energy for slightly varying incidence angles. Very disturbingly, the transient dynamics of the T-MOKE asymmetry show completely distinct behaviour. In particular, for the very same ultrafast pump-probe experiment, all heavily discussed experimental signatures, i.e. an increase, a delay, and a decrease in the T-MOKE asymmetry, can be identified, which have previously been used to identify the OISTR effect [7; 8; 9], the influence of exchange scattering [2], and the typical demagnetization process [1]. With only the information of this particular measurement available, all previous interpretations of T-MOKE data would be in question. In the following, we will show how such critical T-MOKE data needs to be analyzed to obtain reliable access to the true spin dynamics.
The overall topic of the paper is illustrated in Fig. 2.
Figure 1: Increase, delay, or decrease of the spin dynamics? Very different dynamics in the T-MOKE asymmetry are observed at 63.6 eV (\(\pm 2\%\)) photon energy for slightly different angles-of-incidence (dark blue: 44\({}^{\circ}\), blue: 44.8\({}^{\circ}\), light blue: 45.9\({}^{\circ}\)). The measurement clearly illustrates that a direct interpretation of T-MOKE asymmetry with ultrafast magnetization dynamics can be highly problematic. Note that the solid lines serve as a guide to the eye.
Usually, the main quantity of interest is the global magnetization (top right) or the energy-resolved magnetic moment, given by the difference in majority and minority spins in the density of states (bottom right). In EUV T-MOKE experiments, the aim is to probe the time-dependent magnetization and magnetic moment; however, neither the magnetization (orange arrow) nor the energy-resolved magnetic moment (red arrow) is directly measured. Rather, as an optical technique, T-MOKE probes the dielectric tensor and is specifically sensitive to the off-axis dielectric tensor element \(\epsilon_{xy}\)[28]. Since the dielectric tensor (including \(\epsilon_{xy}\)) can be calculated from TDDFT calculations of the spin-resolved density of states [26; 27; 29; 30; 31], a time-resolved extraction of \(\epsilon_{xy}\) from T-MOKE data would allow for a quantitative comparison of experimental T-MOKE data and theoretical calculations.
In our work, we therefore develop a robust and easy-to-implement method to analyze transient dynamics of the dielectric tensor. With the help of this analysis, we show that the non-equilibrium excitation by the optical pulse can lead to a rotation of the off-diagonal element of the dielectric tensor in the complex plane. This rotational behaviour can lead to the observed increase, delay, and decrease of the T-MOKE asymmetry as seen in Fig. 1, and these differing signals simply depend on the measurement geometry used in the T-MOKE experiment, i.e. the angle of incidence of the EUV light. As TDDFT is also able to provide transient dynamics of the dielectric tensor, a direct comparison of experiment and theory becomes possible via such a dielectric tensor analysis of the data. In Ni, via comparison with theory, we can show that the observed dynamics at early times after the pump excitation (<50 fs) is dominantly driven by spin-conserving transitions in the minority channel. Besides a comparison of the same quantity, our approach also ensures that spectral broadening, multiple edges, and overlapping edges from multiple elements in multi-component materials are properly taken into account.
## II Magneto-optical spectroscopy
### Magneto-optical asymmetry in T-MOKE
Previous magneto-optical studies have attempted to make a direct connection between the magneto-optical signal and the element-specific magnetization properties of the sample after an ultrafast excitation. However, as shown above and discussed in Fig. 2, the relation between the measurement signal in a T-MOKE experiment and the magnetization is more complex, even in the equilibrium case. At the microscopic scale, the interaction of light with a material is given by the complex-valued dielectric tensor \(\epsilon\), which can be derived by counting the number of allowed optical transitions within the spin-resolved band structure [32; 33; 34; 27; 35]. For a magnetic material, the imbalance between the different spin channels leads to off-diagonal terms in the dielectric tensor, which couple light fields of orthogonal polarization. For a typical (cubic) magnetic material that is magnetized along the \(z\)-axis, the dielectric tensor is commonly expressed as
\[\epsilon=\begin{pmatrix}\epsilon_{xx}&\epsilon_{xy}&0\\ -\epsilon_{xy}&\epsilon_{xx}&0\\ 0&0&\epsilon_{xx}\end{pmatrix}. \tag{1}\]
Here, \(\epsilon_{xx}\) can be directly related to the non-magnetic refractive index, mostly written as
\[\sqrt{\epsilon_{xx}}=n=1-\delta+i\beta, \tag{2}\]
while the off-axis dielectric tensor element \(\epsilon_{xy}\) describes the magneto-optical response of the material.
Through the Fresnel equations, it is possible to express the signal in various magneto-optical techniques in terms of the dielectric tensor components. For example, for XMCD, it is known that a signal can be extracted that is proportional to \(\text{Re}(\epsilon_{xy})\)[26; 27]. For T-MOKE, the reflectivity of a single vacuum/magnetic material interface can to good accuracy be expressed as [36]
\[R_{\uparrow/\downarrow}=|R_{0}|^{2}+|R_{m}\epsilon_{xy}|^{2}\pm\text{Re}\{2R _{0}^{*}R_{m}\epsilon_{xy}\} \tag{3}\]
where \(R_{0}=\frac{n\cos\theta_{i}-\cos\theta_{t}}{n\cos\theta_{i}+\cos\theta_{t}}\) and \(R_{m}=\frac{\sin\theta_{i}}{n^{2}(\cos\theta_{i}+\cos\theta_{t})}\) are the (complex-valued) non-magnetic and magnetic contributions to the reflectivity, respectively. \(\text{Re}\{...\}\) indicates the real part, and the incidence angle \(\theta_{i}\) and refraction angle \(\theta_{t}\) are related by Snell's law. The \(\pm\) sign in Eq. 3 is directly linked to the magnetization direction, where switching the magnetization direction also switches the sign.
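As a numerical illustration of Eqs. (3) and (4) for the single vacuum/magnet interface, the following minimal sketch evaluates the asymmetry; the material parameters at the end are placeholders for illustration only, not measured values.

```python
import numpy as np

def tmoke_asymmetry(n, eps_xy, theta_i):
    """T-MOKE asymmetry of a single vacuum/magnet interface, Eqs. (3)-(4)."""
    theta_t = np.arcsin(np.sin(theta_i) / n)       # Snell's law, complex n
    R0 = (n * np.cos(theta_i) - np.cos(theta_t)) / (n * np.cos(theta_i) + np.cos(theta_t))
    Rm = np.sin(theta_i) / (n**2 * (np.cos(theta_i) + np.cos(theta_t)))
    cross = 2 * np.real(np.conj(R0) * Rm * eps_xy)  # magnetization-odd term
    base = np.abs(R0)**2 + np.abs(Rm * eps_xy)**2
    return cross / base                             # (R_up - R_dn)/(R_up + R_dn)

# Placeholder values for illustration only:
A = tmoke_asymmetry(n=1 - 0.01 + 0.01j, eps_xy=0.002 + 0.001j,
                    theta_i=np.deg2rad(45.0))
```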
Based on Eq. (3), the commonly-used measurement strategy in EUV T-MOKE is to measure the reflected intensity \(I_{\uparrow/\downarrow}\) (which is proportional to \(R_{\uparrow/\downarrow}\)), and subsequently calculate the T-MOKE asymmetry for bulk magnetic material by [24; 25; 37]
\[A=\frac{I_{\uparrow}-I_{\downarrow}}{I_{\uparrow}+I_{\downarrow}}=\frac{2 \text{Re}(R_{0}^{*}R_{m}\epsilon_{xy})}{|R_{0}|^{2}+|R_{m}\epsilon_{xy}|^{2}}. \tag{4}\]
By alternately measuring \(I_{\uparrow}\) and \(I_{\downarrow}\), one can efficiently filter out variations in the light source intensity.
Figure 2: Overview of the connections between experimental magneto-optical signal, the off-diagonal tensor-element and the magnetization.
Then, it is useful to consider the following assumptions: if \(|R_{m}\epsilon_{xy}|^{2}\ll|R_{0}|^{2}\) and the refractive index \(n\) is constant, then the transient asymmetry \(A(t)\) can be normalized to a reference asymmetry \(A_{\text{ref}}\) (commonly the T-MOKE asymmetry of the sample in equilibrium), and the signal
\[\frac{A(t)}{A_{\text{ref}}}=\frac{\text{Re}(R_{0}^{*}R_{m}\epsilon_{xy}(t))}{ \text{Re}(R_{0}^{*}R_{m}\epsilon_{xy}^{\text{ref}})} \tag{5}\]
is acquired. Now, a further assumption can be made to finally quantitatively link the observed signal \(A(t)\) to a change in \(\epsilon_{xy}\): namely that the angle of \(\epsilon_{xy}\) in the complex plane does not change and only the magnitude \(|\epsilon_{xy}|\) decreases/increases. However, as will become clear in the following analysis, the data that is presented in Fig. 1 unambiguously shows that this set of assumptions cannot hold.
### Angle-dependence of the T-MOKE asymmetry
It is well known that the reflectivity of a sample, and more specifically also the T-MOKE signal, depends strongly on the angle of incidence \(\theta_{i}\)[2; 10; 38]. This is particularly true for the magneto-optical reflectivity close to the Brewster angle, where a strong magneto-optical signal is observed as the non-magnetic reflection is strongly suppressed. In order to understand this behavior, it is useful to consider \(\epsilon_{xy}=\text{Re}(\epsilon_{xy})+i\text{Im}(\epsilon_{xy})\) as a vector in the complex plane: \(\vec{\epsilon}_{xy}\). Approximating the T-MOKE asymmetry to depend linearly on \(\epsilon_{xy}\) (i.e., using \(|R_{0}|^{2}\gg|R_{m}\epsilon_{xy}|^{2}\) in Eq. (4)), we rewrite Eq. (4) as
\[A(\epsilon_{xy}) \approx\frac{2}{|R_{0}|^{2}}\cdot\text{Re}(R_{0}^{*}R_{m}\epsilon _{xy})\] \[=\frac{2}{|R_{0}|^{2}}\cdot[\text{Re}(R_{0}R_{m}^{*})\cdot\text{ Re}(\epsilon_{xy})+\text{Im}(R_{0}R_{m}^{*})\cdot\text{Im}(\epsilon_{xy})]\] \[=\vec{p}_{\theta}\cdot\vec{\epsilon}_{xy}=|\vec{p}_{\theta}||\vec {\epsilon}_{xy}|\cos(\spherical(\vec{p}_{\theta},\vec{\epsilon}_{xy})).\]
This analysis shows that it is possible to interpret the T-MOKE asymmetry \(A(\vec{\epsilon}_{xy})\) as the inner product of \(\vec{\epsilon}_{xy}\) with a _probe vector_ \(\vec{p}_{\theta}\). For the simple case of a single vacuum/magnet interface, \(\vec{p}_{\theta}\) is, up to a scaling factor, given by \((\text{Re}(R_{0}R_{m}^{*}),\text{Im}(R_{0}R_{m}^{*}))\). More generally, we define the probe vector as the derivative of the T-MOKE asymmetry with respect to \(\epsilon_{xy}\):
\[\vec{p}_{\theta}=\left(\frac{\partial A}{\partial(\text{Re}(\epsilon_{xy}))},\frac{\partial A}{\partial(\text{Im}(\epsilon_{xy}))}\right). \tag{6}\]
We note that since \(\vec{p}_{\theta}\) depends only on the geometric and non-magnetic properties of the sample, it can be calculated without (precise) a-priori knowledge of \(\epsilon_{xy}\). It is most important, however, to realize that \(\vec{p}_{\theta}\) rotates strongly in the complex plane as the angle of incidence \(\theta_{i}\) of the EUV light in the T-MOKE measurement changes.
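Numerically, \(\vec{p}_{\theta}\) can be approximated as a finite-difference gradient of any asymmetry model. The sketch below reuses the `tmoke_asymmetry` function from the previous snippet; the refractive index and \(\epsilon_{xy}\) values are again placeholders.

```python
import numpy as np

def probe_vector(asym, eps_xy, h=1e-9):
    """Eq. (6) by central differences: gradient of the asymmetry with
    respect to the real and imaginary parts of eps_xy."""
    d_re = (asym(eps_xy + h) - asym(eps_xy - h)) / (2 * h)
    d_im = (asym(eps_xy + 1j * h) - asym(eps_xy - 1j * h)) / (2 * h)
    return np.array([d_re, d_im])

# The probe vector rotates as the incidence angle is varied:
for deg in (44.0, 44.8, 45.9):
    A_of_eps = lambda e, d=deg: tmoke_asymmetry(1 - 0.01 + 0.01j, e, np.deg2rad(d))
    print(deg, probe_vector(A_of_eps, 0.002 + 0.001j))
```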
This strong dependence on the angle of incidence \(\theta_{i}\) is critical, because the T-MOKE asymmetry is proportional to the inner product of the vector \(\vec{\epsilon}_{xy}\) and the probe vector \(\vec{p}_{\theta}\). If the angle \(\sphericalangle(\vec{p}_{\theta},\vec{\epsilon}_{xy})\) between these vectors approaches \(90^{\circ}\), a particularly strong dependence of the asymmetry on the geometrical factors must be expected. This situation is illustrated in Fig. 3a, where the measured T-MOKE asymmetry for three incidence angles close to \(\theta_{i}=45^{\circ}\) is shown. In the spectral region marked with a rectangle, the T-MOKE asymmetry is strongly angle-of-incidence-dependent, and even flips from positive to negative values. This can be understood by looking at the schematic in Fig. 3b, which illustrates the positive to negative change of the inner product of \(\vec{p}_{\theta}\) and \(\vec{\epsilon}_{xy}\) as a function of the angle-of-incidence and therewith \(\sphericalangle(\vec{p}_{\theta},\vec{\epsilon}_{xy})\).
Another very important aspect for angles \(\sphericalangle(\vec{p}_{\theta},\vec{\epsilon}_{xy})\approx 90^{\circ}\) is the realization that even small rotations of \(\epsilon_{xy}\) in time-resolved measurements would lead to sign-changes in the measured T-MOKE asymmetry. Indeed, we will show that this is exactly the reason for the disturbing data presented in Fig. 1, which was analyzed at a photon energy of 63.6 eV: here, a small transient rotation of \(\epsilon_{xy}\) leads to very different transient T-MOKE asymmetries for different angles-of-incidence, as we will fully analyze below.
On the other hand, if \(\sphericalangle(\vec{p}_{\theta},\vec{\epsilon}_{xy})\) is much larger or smaller than \(90^{\circ}\), as is the case in the spectral region around 66 eV, no such peculiar behaviour is expected. An analysis of transient T-MOKE asymmetry around 66 eV for different angles of incidence consequently yields perfectly identical results (Fig. 4), and the transient T-MOKE asymmetry now reliably reflects the ultrafast magnetic behaviour.
In summary, we can already conclude that angles of \(\sphericalangle(\vec{p}_{\theta},\vec{\epsilon}_{xy})\approx 90^{\circ}\) can lead to results in the time-resolved data that are not straightforward to interpret. However, this particular situation can readily be identified, if the asymmetry in a certain spectral range is highly sensitive to the EUV angle-of-incidence and flips sign or approaches zero.
### Beyond the bulk approximation
In the previous analysis, we have focused specifically on a single vacuum/magnet interface, as the Fresnel equations for such a case are comparatively simple.
Figure 3: Angle-dependence of the static T-MOKE asymmetry for Ni. (a) The observed T-MOKE asymmetry for the 21 nm Ni sample for three different incidence angles \(\theta_{i}\) close to \(45^{\circ}\). (b) Schematic representation of the probe vectors following Eq. 6 for the different incidence angles in (a), and the \(\epsilon_{xy}\) that was determined from these measurements. For the calculation, we used the same sample composition and refractive index values as in Section III.1.
However, the magnetic samples studied in ultrafast magnetism, including also this study, are often best described as multilayer structures. In particular, a capping layer is often included to protect the magnetic material from the oxidizing environment; and if no capping layer is included, commonly a native oxide layer forms. For such a sample, expressions for the magnetization-dependent reflectivity can be derived using the transfer matrix formalism [39; 40; 41]. In the present study, we have implemented the transfer matrix formalism from Ref. [40] using a symbolic math package (SymPy) [42] for Python to calculate the reflectivity, the T-MOKE asymmetry, and the probe vectors \(\vec{p}_{\theta}\) without relying on assumptions concerning the strength of the magneto-optic response.
## III Determination of dielectric tensor from T-MOKE data
The strong angle-dependence of the T-MOKE asymmetry can be leveraged in order to extract the complex-valued off-diagonal dielectric tensor element \(\epsilon_{xy}\)[36; 38; 43]. Here, we present a unique approach to such magneto-optical reflectometry where only a small range of incidence angles needs to be measured for complete access to \(\epsilon_{xy}\) at the given photon energies. Specifically, this approach takes advantage of the strong enhancement and dependence on the incidence angle of the T-MOKE asymmetry around 45\({}^{\circ}\), and furthermore makes use of the well-known non-magnetic components of the dielectric tensor, which allows us to reliably calculate the probe vectors. This approach provides two crucial advantages: the experimental setup can be integrated into a typical EUV T-MOKE setup without requiring a major overhaul, and since only a small number of incidence angles needs to be measured, full femtosecond time-resolved magneto-optical spectroscopy is possible with only a small (less than 5-fold) increase in the measurement time.
In the following, we will first discuss the extraction of the static off-diagonal dielectric tensor element, which is necessary in order to calibrate certain experimental parameters, before we continue with the extraction of the time-resolved dielectric tensor.
### Extraction of the static off-diagonal dielectric tensor element
Several approaches exist to extract the magneto-optical dielectric tensor element \(\epsilon_{xy}\) at extreme ultraviolet wavelengths [43; 44; 26; 45; 38]. For thin samples that are best probed in reflection, incidence-angle-dependent or polarization-dependent reflectivity measurements have demonstrated their value [43; 45; 38]. As these techniques rely on reflectivity measurements for linearly polarized light, they are commonly implemented in laboratory-scale experiments based on high-harmonic generation. A limiting factor for the implementation of these techniques, however, is that they typically require a dedicated experimental geometry. Consequently, there is a need for an \(\epsilon_{xy}\) measurement technique that is compatible with, and can be implemented in, currently existing EUV femtomagnetism experiments. Here, we present the magneto-optical reflectometry method that we implemented in our table-top femtosecond EUV T-MOKE spectroscopy setup [17]. This was made possible by the implementation of a motorized, non-magnetic and UHV-compatible tip-tilt sample holder (SmarAct GmbH), which enabled a scan range of 2.5\({}^{\circ}\) (see Fig. S1 in the Supplemental Material (SM)). As such, we anticipate that the method can be rapidly implemented in many setups that are currently being used for fixed-angle EUV T-MOKE.
In order to gain access to the off-axis dielectric tensor element of a magnetized Ni thin film, we performed a series of T-MOKE asymmetry measurements at 11 different incidence angles (shown in Fig. 5a). The sample consists of a 21 nm Ni film that was deposited by sputtering on a Si\({}_{3}\)N\({}_{4}\)-coated Si wafer. This thickness was chosen such that femtosecond optical pulses will excite the sample homogeneously, while simultaneously being thick enough that the reflection at the Ni/Si\({}_{3}\)N\({}_{4}\) interface does not modify the observed T-MOKE asymmetry strongly. The 100 nm Si\({}_{3}\)N\({}_{4}\) layer is thick enough that no reflection from the Si\({}_{3}\)N\({}_{4}\)/Si interface has to be accounted for. Finally, the Ni layer possesses a thin, native Ni oxide layer on top, which reduces the strength of the T-MOKE asymmetry. In line with literature [46; 47; 48], we set the thickness of the NiO layer to 2 nm. In our analysis, we then use the transfer matrix formalism and calculate expressions for the full three-layer system, including the NiO capping layer, the Ni magnetic layer and the Si\({}_{3}\)N\({}_{4}\) bottom layer.
For our sample, the T-MOKE asymmetry depends on the complex-valued refractive indices of Ni, NiO and Si\({}_{3}\)N\({}_{4}\), the layer thicknesses, the incidence angle, and the complex-valued \(\epsilon_{xy}\) of Ni. Using the result of the transfer matrix formalism as the model for the sample, the energy-resolved complex-valued \(\epsilon_{xy}\) can be determined from a set of angle-dependent T-MOKE asymmetry measurements using a standard least-square fitting procedure. In order to facilitate a reliable determination of \(\epsilon_{xy}\) with few measurements over a small angular range, we fix the complex-valued refractive indices to literature values.
Figure 4: Transient magneto-optical asymmetry at 66.4 eV, i.e., at the Ni resonance, yields identical dynamics for each angle.
Here, we retrieve the values for the dispersive (real) part of the refractive index \(\delta\) from CXRO [49], while we used more recent measurements from Ref. [26] for the absorptive (imaginary) part \(\beta\). Using other sources for \(\beta\) such as Refs. [44] or [49] leads to small variations in the overall amplitude of \(\varepsilon_{xy}\), but does not affect the photon-energy dependence significantly.
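For concreteness, the per-photon-energy fitting step can be sketched as a small least-squares problem; here, `asymmetry_model` is a hypothetical placeholder for the transfer-matrix expression of the NiO/Ni/Si\({}_{3}\)N\({}_{4}\) stack.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_eps_xy(angles, measured_A, asymmetry_model, x0=(1e-3, 1e-3)):
    """Fit Re(eps_xy) and Im(eps_xy) at one photon energy from T-MOKE
    asymmetries measured at several incidence angles (in radians)."""
    def residuals(p):
        eps = p[0] + 1j * p[1]
        return [asymmetry_model(eps, th) - a
                for th, a in zip(angles, measured_A)]
    fit = least_squares(residuals, x0=list(x0))
    return fit.x[0] + 1j * fit.x[1]
```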
The best-fit values for the real and imaginary part of the off-diagonal tensor element \(\varepsilon_{xy}\) in Ni are shown in Fig. 5b. As expected, the largest values can be found around the M absorption edges of Ni around 66 eV (M\({}_{3}\) edge). We also find a good qualitative agreement with previously published data [44; 26]. Furthermore, the dielectric tensor can be accurately calculated based on (time-dependent) density functional theory complemented by calculations in the \(GW\) framework to achieve an accurate description of the 3p core states [26; 27; 35; 50]. Here, we find a good agreement between experiment and theory on the shape of the dielectric tensor, although theory indicates an overall larger amplitude of \(\varepsilon_{xy}\). We attribute the discrepancy in the amplitude to a common overestimation of the film-averaged magnetic moment in the TDDFT calculation [51; 25]. In this regard, a better match between experiment and theory, particularly for the spectral region just above the edge, was recently also achieved by manually reducing the exchange splitting [26].
### Systematics and uncertainties
There are several experimental uncertainties that may influence the extracted values for \(\varepsilon_{xy}\). These include statistical and systematic effects. The statistical effects include shot-to-shot intensity fluctuations, position-dependent detection efficiency on the camera and an uncertainty in the relative angle determination. As we performed a large number of angle-dependent measurements with a relatively long measurement time, however, we estimate that the statistical errors are negligible compared to the systematic ones. We have identified several possible sources of systematic errors, which we analyse in the context of the present Ni sample:
First, the leading source of systematic errors is the calibration of the incidence angle. From Fig. 3, it is clear that a change in the incidence angle leads to a rotation of the probe vectors, as well as a change in the length. Therefore, a systematic error in the angle determination leads to a rotation of \(\vec{\varepsilon}_{xy}\) in the complex plane, as well as a change in amplitude. Performing the full \(\varepsilon_{xy}\) reconstruction for different angle calibrations, we find that a 0.5\({}^{\circ}\) calibration error in the incidence angle typically leads to a rotation of \(\vec{\varepsilon}_{xy}\) by 10\({}^{\circ}\) and a scaling of the amplitude by 10%. Considering the experimental setup, we estimate that the absolute angle calibration is accurate to within 0.3\({}^{\circ}\).
Second, quantitative determination of the T-MOKE asymmetry relies on an accurate subtraction of the background signal in the spectrometer. This is particularly critical when the detected EUV flux is low. To avoid errors due to this effect, we have chosen to evaluate the data only at photon energies around the peaks of the high-harmonic generation (HHG) spectrum. Nevertheless, we note that close to the HHG cut-off energy (\(\approx\) 70 eV, cf. Fig. S2 in the SM), the flux of each harmonic significantly decreases. At these photon energies, an imperfect background correction can lead to a reduction of the observed T-MOKE asymmetry and thereby an underestimation of the amplitude of \(\varepsilon_{xy}\). For the data reported in this paper, we estimate this effect to be negligible.
Third, it was already discussed that the NiO capping influences the observed T-MOKE asymmetry. Unfortunately, no direct determination of the capping layer thickness was possible. Therefore, we have analysed the influence of the capping layer thickness on the retrieved \(\varepsilon_{xy}\) values. Within the range of expected NiO thicknesses (1.5 to 2.5 nm), we find that a thicker (thinner) NiO capping would reduce (increase) the size of the observed T-MOKE asymmetry and therefore would lead to a slightly larger (smaller) amplitude of \(\varepsilon_{xy}\). Here, a 0.5 nm change corresponds to less than a 10% change in the amplitude. Also, a comparatively small rotation of the extracted \(\varepsilon_{xy}\) can be observed. Overall, the effect due to uncertainty in the capping layer thickness is less pronounced than the effect due to the angle of incidence calibration.
Fourth, we note that the presented \(\varepsilon_{xy}\) reconstruction technique depends on accurate values for the non-magnetic refractive indices of all materials in the sample.
Figure 5: Determination of a static \(\varepsilon_{xy}\). (a) T-MOKE asymmetry for 11 incidence angles close to 45\({}^{\circ}\). Due to the high resolution of the spectrometer, we were able to evaluate the T-MOKE asymmetry at two energies for each high harmonic, indicated by points. The connecting lines serve as a guide to the eye for the T-MOKE asymmetry. Note that only the relative angles were directly determined from the sample, while the absolute angle was retrieved from the analysis. (b) For comparison, we have calculated the real and imaginary parts of \(\varepsilon_{xy}\) by TDDFT. We find a good agreement with regard to the photon energy dependence, although TDDFT predicts an overall stronger magneto-optical response.
However, by performing the full analysis for different values of the refractive index of Ni (from Refs. [26; 44; 49]), we find only a minor effect on the shape and strength of \(\epsilon_{xy}\). Related to this, an accurate calibration of the photon energy is also necessary. Within the experimental uncertainty of the photon energy calibration, which is \(<2\%\), we find no significant change in the reconstructed \(\epsilon_{xy}\).
### Probing the transient dielectric tensor
Next, we proceed to determine the time-resolved \(\epsilon_{xy}\) during optically-induced ultrafast demagnetization with sufficient time resolution to trace the full demagnetization process and thereby provide an ideal basis for comparison with theoretical methods that describe the dynamics of the electron, spin and lattice degrees of freedom. Compared to the static case that was previously discussed, we make two observations: First, as a consequence of the optical excitation, it is not possible to fix the refractive index to literature values; rather, the dynamical behaviour of \(\beta\) and \(\delta\) (see Eq. (2)) must be extracted from the experimental data. Second, the number of measured angles must be limited as much as possible to facilitate the measurement of a larger number of pump-probe delays. We find that these points can be addressed by choosing the reflectivity \(R_{\uparrow/\downarrow}\) (Eq. (3); more precisely its equivalent from the transfer matrix formalism) as a starting point for the time-resolved analysis, rather than the T-MOKE asymmetry (Eq. (4)). Instead of a single value per photon energy and pump-probe delay, this yields two measurement values (for two magnetization directions) which contain both magnetic and non-magnetic contributions.
As the absolute reflectivity cannot be measured in our experiment, we consider only the relative changes compared to the unpumped (\(t<0\)) case. From a measurement series of the delay-dependent intensity \(I_{\uparrow/\downarrow}(t)\), we extract the relative reflectivity
\[\Delta R_{\uparrow/\downarrow}(t)=\frac{1}{2}\frac{R_{\uparrow/\downarrow}(t )}{R_{\uparrow/\downarrow}(t_{\text{ref}})}=\frac{I_{\uparrow/\downarrow}(t )}{I_{\uparrow}(t_{\text{ref}})+I_{\downarrow}(t_{\text{ref}})}, \tag{7}\]
where \(t_{\text{ref}}\) indicates the reference time interval where the probe arrives at the sample before the pump pulse. This yields a signal that reflects the static asymmetry at times before the pump pulse and provides a measure of the transient magneto-optical and non-magnetic reflectivity changes at times after the pump pulse has arrived. Fig. 6a and b show such measurement data for the same experimental situation that was presented in Fig. 1, namely the optically-induced demagnetization of Ni.
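In code, the normalization of Eq. (7) amounts to a single division per magnetization channel; the assumed array layout (a delay axis at fixed photon energy) is an illustrative choice of this sketch.

```python
import numpy as np

def relative_reflectivity(I_up, I_dn, ref):
    """Eq. (7): normalise both magnetisation channels by the summed
    intensity over the reference delays `ref` (probe before pump, t < 0)."""
    norm = np.mean(I_up[ref]) + np.mean(I_dn[ref])
    return I_up / norm, I_dn / norm
```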
Figure 6: Fitting of the time-resolved data to extract both \(\epsilon_{xy}\) and \(\beta\). (a, b) The measured (points) and reconstructed (line) transient relative reflectivity \(\Delta R_{\uparrow/\downarrow}(t)\) for opposite magnetization directions (cf. Eq. (7)) during optically-induced magnetization dynamics for photon energies just below (a) and close to (b) the Ni M-edge.
Figure 7: (a, b) Transient evolution of \(\epsilon_{xy}\) after the optical excitation. \(\epsilon_{xy}\) rotates in the complex plane, because the real and imaginary part do not change identically. The rotation of \(\epsilon_{xy}\) is of particular importance to understand the differences in the asymmetry time-traces for different incidence angles below the resonance. (c) Transient changes of the imaginary part of the refractive index (\(\beta\)) of Ni retrieved from the angle-resolved EUV T-MOKE data. We find a strong change of \(\beta\) at \(h\nu=63.9\) eV that is indicative for the strong excitation of electrons.
The full data set contains the magnetization-dependent relative reflectivity for the full high-harmonic spectrum at 3 incidence angles \(\theta_{i}\) (44\({}^{\circ}\), 44.8\({}^{\circ}\) and 45.9\({}^{\circ}\), as determined from the static analysis). In order to extract the time-dependent \(\varepsilon_{xy}\), we apply a similar fitting procedure as was used in the static case: Each photon energy and each pump-probe delay are considered independently. For each combination of these, our reconstruction algorithm determines the optimal values for \(\text{Re}(\varepsilon_{xy})\), \(\text{Im}(\varepsilon_{xy})\), \(\beta\) and \(\delta\).
We find that \(\delta\) generally does not influence the predicted reflectivity strongly, and consequently it cannot accurately be determined with a least-square fitting routine. To address this, we employ a self-consistent two-step approach: 1. keeping \(\delta\) fixed, we use a least-square fitting to determine \(\text{Re}(\varepsilon_{xy})\), \(\text{Im}(\varepsilon_{xy})\) and \(\beta\). 2. We use a Kramers-Kronig (KK) transform to determine \(\delta\)[52]. Repeating steps 1 and 2 up to ten times has shown excellent convergence and a good reconstruction of the measured values, as shown in Fig. 6a and b. We note that although \(\text{Re}(\varepsilon_{xy})\) and \(\text{Im}(\varepsilon_{xy})\) are also connected by the KK relations, the implementation of such a transform is challenging due to the sparsely sampled nature of our spectrum. We have therefore achieved better results without linking \(\text{Re}(\varepsilon_{xy})\) and \(\text{Im}(\varepsilon_{xy})\) by KK.
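The self-consistent loop can be sketched as follows; `fit_magnetic_part` is a hypothetical placeholder for the least-square fit of step 1, and the discrete principal-value sum is one simple way to realize the KK transform of step 2 on a densely interpolated grid.

```python
import numpy as np

def kk_delta_from_beta(E, beta):
    """Discrete Kramers-Kronig transform on a dense energy grid,
    delta(E) = (2/pi) P-int E' beta(E') / (E'^2 - E^2) dE',
    with the singular point simply excluded from the sum. (The overall
    sign depends on the convention used for n = 1 - delta + i*beta.)"""
    dE = np.gradient(E)
    delta = np.empty_like(beta)
    for i, Ei in enumerate(E):
        m = np.arange(E.size) != i                 # skip the pole
        delta[i] = (2 / np.pi) * np.sum(E[m] * beta[m] * dE[m]
                                        / (E[m]**2 - Ei**2))
    return delta

# Self-consistent iteration (steps 1 and 2, repeated up to ten times):
# for _ in range(10):
#     re_xy, im_xy, beta = fit_magnetic_part(delta)  # hypothetical step-1 fit
#     delta = kk_delta_from_beta(E_grid, beta)       # step-2 KK transform
```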
## IV Interpretation
### Rotation of \(\varepsilon_{xy}\)
Having reconstructed the transient dielectric tensor including both the magnetic (\(\varepsilon_{xy}\)) and non-magnetic (\(\varepsilon_{xx}\)) part, we proceed with the interpretation of these results. In a first step, it is instructive to return to the special case that was identified in Ni at 64 eV photon energy (Fig. 1). Fig. 7a and b show the real and imaginary part of the off-diagonal dielectric tensor element, respectively, as a function of time for two selected energies (more data, including the extracted variation of \(\delta\), are shown in Fig. S5 in the SM). First, we recognize that the overall magnitude \(|\varepsilon_{xy}|\) decreases over time to approximately 40% of its original value after 600 fs. More interestingly, however, we also see that \(\text{Re}(\varepsilon_{xy})\) and \(\text{Im}(\varepsilon_{xy})\) do not quench identically as a function of time, especially for \(h\nu=64\) eV, where a transient increase of \(\text{Re}(\varepsilon_{xy})\) is observed. This means that \(\varepsilon_{xy}\) must transiently rotate in the complex plane. This rotation in the complex plane is visualized in Fig. 8a and is a key factor in the explanation of the diverse transient effects that were observed in Ni at 64 eV (Fig. 1). However, we note that a rotation of \(\varepsilon_{xy}\) can also be observed for many other photon energies, and does not necessarily have a large impact on the T-MOKE asymmetry data, as is evident from the measurement shown in Fig. 4. Therefore, a rotation of \(\varepsilon_{xy}\) alone is not sufficient to explain the peculiar data of Fig. 1.
At 64 eV, we find that, in addition to the transient rotation of \(\varepsilon_{xy}\), the probe vector \(\vec{p}_{\theta}\) and the off-diagonal dielectric tensor element \(\varepsilon_{xy}\) are close to orthogonal. This is evident from the zero-crossing of the static magnetic asymmetry for different angles-of-incidence (see Figs. 3 and 5). This situation does not occur at the other photon energies that we have observed, and it provides the second key to the explanation of the data in Fig. 1. The T-MOKE asymmetry can be approximated by the projection of \(\varepsilon_{xy}\) on the probe vectors \(\vec{p}_{\theta}\). In Fig. 8b, we now apply this method again to visualize how the experimentally-determined transient rotation of \(\varepsilon_{xy}\) can influence the measured T-MOKE asymmetry. We find that changes of the orientation of \(\varepsilon_{xy}\) can lead to dramatic effects, which furthermore depend strongly on the precise angle \(\sphericalangle(\vec{p},\vec{\varepsilon}_{xy})\), and in particular if this angle is more or less than 90\({}^{\circ}\). We can readily construct scenarios that predict a delay, a rapid increase and a rapid decrease in the T-MOKE asymmetry - exactly as was observed for Ni (cf. yellow areas in Fig. 8b). Furthermore, we note that a complete reversal of the T-MOKE asymmetry might also occur.
Figure 8: (a) Transient evolution of \(\vec{\varepsilon}_{xy}\) for selected times after the optical excitation. \(\vec{\varepsilon}_{xy}\) rotates in the complex plane, as the real and imaginary part do not change identically. (b) A rotation of \(\vec{\varepsilon}_{xy}\) in the complex plane can lead to an increase, a decrease, or no effect on the magnetic asymmetry, which can be understood from the alignment of \(\vec{\varepsilon}_{xy}\) and \(\vec{p}_{\theta}\). Here, we specifically consider the change in \(\vec{\varepsilon}_{xy}\) of Ni at 64 eV and 40 fs after the onset of demagnetization. At 44\({}^{\circ}\), the rotation of \(\vec{\varepsilon}_{xy}\) is such that the projection on \(\vec{p}_{\theta}\) does not change. Hence, no change in the magnetic asymmetry is observed. At 44.8\({}^{\circ}\), the rotation leads to an increase of the projection (magnetic asymmetry), while at 45.9\({}^{\circ}\), a pronounced decrease is observed. The change of the projection from \(\vec{\varepsilon}_{xy}\) on \(\vec{p}_{\theta}\) between unpumped and at 40 fs is highlighted in yellow.
A naive interpretation of the T-MOKE asymmetry might mistake this observation for a reversal of the magnetic moment, which we emphasize is not the case here. Thus, we conclude that in experiment, where the transient change of the T-MOKE asymmetry is always normalized to its value before the pump excitation, even very small angle changes of \(\epsilon_{xy}\) can have a huge impact on the qualitative trend of the transient T-MOKE asymmetry data.
Since the T-MOKE asymmetry depends on the relative angle \(\sphericalangle(\vec{p},\vec{\epsilon}_{xy})\), it is clear that not just a rotation of \(\epsilon_{xy}\), but also a rotation of \(\vec{p}_{\theta}\) might lead to unexpected dynamics in the time-resolved T-MOKE asymmetry signal. Since such rotations can arise due to changes of the refractive index, they are commonly referred to as non-magnetic artifacts in the ultrafast magnetism community [24; 53]. However, we note that in our time-resolved analysis, \(\beta\) (Fig. 7c) and \(\delta\) (Fig. S5 in SM) are reconstructed in addition. Therefore, this analysis allows us to separate the rotation of \(\epsilon_{xy}\) from any rotation in \(\vec{p}_{\theta}\). Equivalently, this analysis separates the magnetic and non-magnetic effects. For Ni at 64 eV, we find that the \(\vec{p}_{\theta}\) vectors change insignificantly in comparison to \(\epsilon_{xy}\), indicating that the angle-dependent T-MOKE asymmetry changes are dominated by the rotation of \(\epsilon_{xy}\).
In summary, we find that the optically-induced non-equilibrium excitation in Ni leads to a transient rotation of the off-diagonal element of the dielectric tensor. In consequence, we show that the T-MOKE asymmetry may exhibit increased, decreased, and delayed behavior, which must not be directly interpreted as transient dynamics of the magnetization of the sample.
### Comparison to theory
Before we carry out a direct comparison to theory, it is instructive to have a qualitative look at the resonant M-edge transitions that are probed in experiment with EUV T-MOKE spectroscopy. Fig. 9a shows a schematic of the spin-split 3d and 4s density of states of Ni and the spin-split Ni 3p core levels, for which we include the approximate intrinsic linewidth broadening. As can be seen from this schematic, the spin-split 3p core levels overlap and cover an energy range of \(\approx\)5 eV, which is comparable to the energy range of the full valence and conduction band structure of Ni. Therefore, a specific EUV photon energy (purple arrow in Fig. 9a) probes not only a specific energy within the spin-split DOS, but an extended region of several electronvolts. In consequence, the collected T-MOKE asymmetry is strongly broadened, and it is not straightforward to identify spectrally-resolved dynamics in the T-MOKE asymmetry or even the extracted dielectric tensor with occupation changes or band renormalizations in the electronic structure of the investigated material.
However, if we look at our TDDFT calculations in Fig. 9b, such a comparison seems necessary at first. For 1.2 eV and 47 fs pump pulses, we find that the time-resolved change in minority and majority occupation of the DOS exhibits spectrally very distinct dynamics. For example, because of the large unoccupied DOS in the minority channel of Ni just above the Fermi-level (cf. Fig. 9a), Fig. 9b shows that pumping with 1.2 eV leads to a peaked increase of minority spins in a \(\approx\)300 meV spectral region above the Fermi level. In other words, we see a strong minority inter-energy transfer of spins from below to above the Fermi-level. In the majority channel, we also pump electrons from below to above the Fermi-level, but much less efficiently, because there are far fewer empty states available for the transition. In total, this means that the optical excitation leads to a magnetic moment increase below the Fermi-level (loss of minority electrons), and an overall magnetic moment decrease (gain of minority electrons) just above the Fermi-level. However, all of these distinct spectral signatures will be broadened in experiment due to the large linewidth of the Ni 3p core levels.
We can overcome this problem by directly calculating the transient changes in the dielectric tensor, which takes all EUV transitions into account and moreover is a quantity that we now can directly compare with experiment. Specifically, we focus on the real part of \(\epsilon_{xy}\), which can be related to the spin polarization of the unoccupied states. Fig. 9c shows the theoretical transient dynamics of \(\mathrm{Re}(\epsilon_{xy})\) in the corresponding energy range. First, we recognize that the spectrally distinct dynamics as seen in the transient occupation changes in Fig. 9b are completely smeared out. Nevertheless, we still find broad energetic regions where the inter-energy spin transfer as seen in Fig. 9b can be verified: below the energy marked with the grey arrow in Fig. 9c, \(\mathrm{Re}(\epsilon_{xy})\) shows an ultrafast relative increase, and an ultrafast decrease for energies above. If \(\mathrm{Re}(\epsilon_{xy})\) were extracted exactly at the energy of the grey arrow, we would expect a delayed behavior in the dynamics.
Having this theoretical result on transient dynamics of \(\mathrm{Re}(\epsilon_{xy})\), we now carry out a direct comparison with experiment, where we were able to extract \(\mathrm{Re}(\epsilon_{xy})\) at the energies marked with blue- and red-shaded areas in Fig. 9c. Figure 9d shows this comparison of time-resolved dynamics of \(\mathrm{Re}(\epsilon_{xy})\) for theory (solid lines) and experiment (data points). Clearly, the predicted relative increase and decrease in \(\mathrm{Re}(\epsilon_{xy})\) after optical pumping can be confirmed. We also find that for longer time-delays >100 fs, the experimental data shows a further reduction of \(\mathrm{Re}(\epsilon_{xy})\), while the theory data stays constant. This is to be expected, because TDDFT does not take all possible spin-flip scattering processes into account, which in experiment lead to the overall demagnetization of the sample on timescales larger than 100 fs.
In summary, TDDFT predicts an OISTR-like process, i.e. an optically-driven inter-energy spin transfer similar to the inter-site spin transfer in the usual OISTR process. This pumping of spins happens predominantly in the minority channel and leads to distinct spectrally-dependent magnetization increase and decrease below and above the Fermi-level, respectively. Via a direct calculation of the real part of the off-diagonal element of the dielectric tensor, \(\mathrm{Re}(\epsilon_{xy})\), we are able to pinpoint the inter-energy transfer in the transient dynamics of \(\mathrm{Re}(\epsilon_{xy})\). As \(\mathrm{Re}(\epsilon_{xy})\) can be directly extracted from our experimental data, we are able to unambiguously verify the optically-driven inter-energy spin-transfer process in Ni.
## V Conclusion
In conclusion, we have discussed an intriguing effect in EUV T-MOKE magnetism studies, where different transient dynamics were observed under almost identical experimental conditions. This observation conclusively illustrates that it is not always possible to directly compare time-resolved T-MOKE data with time-resolved calculations of the spin-resolved density of states. Here, we show that a quantitative comparison with TDDFT can be achieved by extracting the transient dynamics of the off-axis dielectric tensor element \(\varepsilon_{\mathrm{xy}}\).
We have presented a robust technique to retrieve \(\varepsilon_{\mathrm{xy}}\) from experimental T-MOKE data with full femtosecond time resolution at little experimental cost. The data that we have obtained resolves the controversy that slightly different incidence angles lead to dramatically different dynamics in the T-MOKE asymmetry. Moreover, our technique allows us to directly reconstruct the magnetic and non-magnetic parts of the refractive index. Studying the prototypical case of femtosecond demagnetization in Ni, we show that this data is ideal for a quantitative analysis and comparison with TDDFT calculations and allows us to trace the optically-induced spin and charge dynamics in exceptional detail. We want to emphasize that besides a comparison of the same quantity, i.e. \(\varepsilon_{\mathrm{xy}}\), this approach also ensures that spectral broadening, multiple edges, and overlapping edges from multiple elements in multi-component materials are properly taken into account.
Beyond this exemplary case, we expect that femtosecond \(\varepsilon_{xy}\) spectroscopy will be especially valuable to shed light on the recently discovered OISTR effect and similar femtosecond optically-induced spin dynamics. In Ref. [54], we perform such a study, and discuss the implications for OISTR, which was, after all, first experimentally evidenced by EUV T-MOKE.
## VI Acknowledgements
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project IDs 399572199 and 432680300/SFB 1456.
Figure 9: Femtosecond spin dynamics from TDDFT, compared to experiment. (a) Spin-resolved density of states from density functional theory and the equilibrium occupation at 300 K for the 3p, 3d and 4s states in Ni. Due to the exchange splitting, significantly more minority states (upper part) are unoccupied (unshaded region around \(E_{F}\)), thereby allowing for more optical transitions (purple arrow). (b) As seen in the population change at 40 fs, the optical excitation with 1.2 eV photons leads to a strong redistribution of the population in the minority channel (light blue), populating empty states at 0-0.5 eV above \(E_{F}\) and creating empty states below \(E_{F}\). In comparison, the majority channel (dark blue) is less affected. (c) The redistribution of spins and charge carriers leads to a modification of the magneto-optical response (\(\varepsilon_{\mathrm{xy}}\)). Depending on the position with respect to the M-edge photon energy \(E_{M}\), the spectral dynamics of \(\mathrm{Re}(\varepsilon_{\mathrm{xy}})\) show regions with a strong relative decrease of \(\mathrm{Re}(\varepsilon_{\mathrm{xy}})\) (indicated with a red arrow) but also regions with a relative increase of \(\mathrm{Re}(\varepsilon_{\mathrm{xy}})\) (blue arrow) or a delayed onset of dynamics (grey arrow). (d) The calculations can be directly compared to experiment by evaluating \(\mathrm{Re}(\varepsilon_{\mathrm{xy}})\) as a function of time. Here, we observe that the predicted relative increase in \(\mathrm{Re}(\varepsilon_{\mathrm{xy}})\) is confirmed by experiment (see Fig. 6a). The time axes of experiment and theory were shifted with respect to each other, as visualized by the electric field of the pulse used for the theory calculations.
399572199 and 432680300/SFB 1456. G.S.M.J. acknowledges financial support by the Alexander von Humboldt Foundation. S.S. and J.K.D. would like to thank the DFG for funding through project-ID 328545488 TRR227 (project A04).
|
2301.09671 | Flexible conditional density estimation for time series | This paper introduces FlexCodeTS, a new conditional density estimator for
time series. FlexCodeTS is a flexible nonparametric conditional density
estimator, which can be based on an arbitrary regression method. It is shown
that FlexCodeTS inherits the rate of convergence of the chosen regression
method. Hence, FlexCodeTS can adapt its convergence by employing the regression
method that best fits the structure of data. From an empirical perspective,
FlexCodeTS is compared to NNKCDE and GARCH in both simulated and real data.
FlexCodeTS is shown to generally obtain the best performance among the selected
methods according to either the CDE loss or the pinball loss. | Gustavo Grivol, Rafael Izbicki, Alex A. Okuno, Rafael B. Stern | 2023-01-23T19:11:43Z | http://arxiv.org/abs/2301.09671v1 | # Flexible conditional density estimation for time series
###### Abstract
This paper introduces FlexCodeTS, a new conditional density estimator for time series. FlexCodeTS is a flexible nonparametric method, which can be based on an arbitrary regression method. It is shown that FlexCodeTS inherits the rate of convergence of the chosen regression method. Hence, FlexCodeTS can adapt its convergence by employing the regression method that best fits the structure of data. From an empirical perspective, FlexCodeTS is compared to NNKCDE and GARCH in both simulated and real data. FlexCodeTS is shown to generally obtain the best performance among the selected methods according to either the CDE loss or the pinball loss.
**Keywords:** nonparametric statistics, conditional density estimation, time series, machine learning
**MSC:** 00-01; 99-00
## 1 Introduction
Predicting future values of a time series is often not enough. For instance, traders can price options better and estimate value at risk by forecasting the _full_ uncertainty about the future value of an asset, \(Y_{t+1}\). This evaluation can be made by estimating the conditional density, \(f(y_{t+1}|y_{1},\ldots,y_{t},\mathbf{x}_{t})\), where \(\mathbf{x}_{t}\) are exogenous variables. An estimate of \(f\) allows one to construct predictive intervals for new observations (Fernandez-Soto et al., 2001; Izbicki et al., 2020, 2022; Dey et al., 2022), perform simultaneous quantile regression (Takeuchi et al., 2006), and calculate arbitrary moments of the series, such as volatility, skewness and kurtosis (Filipovic et al., 2012; Kalda and Siddiqui, 2013; Gneiting and Katzfuss, 2014). Also, empirical models for asset pricing are incomplete unless the full conditional model is specified (Hansen, 1994).
In order to obtain conditional density estimators (CDEs), one often uses parametric models, such as ARMA-GARCH (Engle, 1982; Hamilton, 2020) and generalizations (Bollerslev, 1987; Hansen, 1994; Sentana, 1995). However, the assumptions in these models are often too restrictive, especially when the distribution of \(Y_{t+1}\) is asymmetric or multimodal. It can also be hard to capture the relationship between \(Y_{t+1}\) and exogenous variables through a parametric model. In such cases, nonparametric estimators are useful (Hansen, 1994; Jeon and Taylor, 2012).
The majority of the proposed nonparametric CDEs for time series are based on kernel smoothers (Jeon and Taylor, 2012; Hardle et al., 1997; Hyndman and Yao, 2002; Fan and Yao, 2008). This approach performs poorly when many of the lags or exogenous variables do not affect the response, unless a different bandwidth is used for each covariate (Hall et al., 2004). Since selecting a large number of bandwidths is time consuming (Izbicki and Lee, 2016; Izbicki et al., 2017), kernel smoothers are effective only in low dimensions (Izbicki et al., 2014). Thus, nonparametric CDEs that scale better with the number of covariates are needed.
This paper introduces FlexCodeTS, a version of FlexCode (Izbicki and Lee, 2017) tailored for time series. FlexCodeTS transforms _any_ regression method into a time series CDE. Furthermore, it often inherits the properties of the chosen regression method. For instance, FlexCodeTS based on random forests (Breiman, 2001) yields a CDE that perform automatic variable selection and works well in high dimensional settings. Also, FlexCodeTS based on XGBoost (Chen et al., 2015) scales for large datasets.
Section 2 defines FlexCodeTS and describes its implementation. Section 3 derives a general rate of convergence for FlexCodeTS as a function of the rate of convergence of the chosen regression method. It also derives a specific rate of convergence for FlexCodeTS when Nadaraya-Watson regression is used. Section 4 compares the performance of FlexCodeTS to NNKCDE and GARCH on both simulated and real data.
## 2 Methodology
Let the data be \((\mathbf{u}_{1},y_{1}),\ldots,(\mathbf{u}_{T},y_{T})\), where \(y_{t}\in\mathcal{Y}\subseteq\mathbb{R}\) is of interest and \(\mathbf{u}_{t}\in\mathbb{R}^{d}\) contains lagged values of \(y_{t}\) as well as available exogenous variables. We assume that
this series is stationary and, therefore, the conditional density of interest, \(f_{y_{t}|\mathbf{u}_{t}}(y|\mathbf{u})\), does not depend on \(t\). Hence, we refer to this density by \(f(y|\mathbf{u})\).
In order to develop FlexCodeTS, the first step is to decompose \(f(y|\mathbf{u})\) into an orthonormal series. As long as \(\int f^{2}(y|\mathbf{u})dy<\infty\), one obtains for each orthonormal basis over \(\mathcal{L}^{2}(\mathbb{R})\), \((\phi_{i})_{i\geq 0}\), the decomposition:
**Definition 1** (Decomposition of \(f(y|\mathbf{u})\)).: \[f(y|\mathbf{u})=\sum_{i\geq 0}\beta_{i}(\mathbf{u})\phi_{i}(y), \text{where }\beta_{i}(\mathbf{u})=\int f(y|\mathbf{u})\phi_{i}(y)dy\]
Choices for \(\phi\) include, for instance, the Fourier basis and wavelets, which are more appropriate when \(f(y|\mathbf{u})\) is spatially inhomogeneous in \(y\)(Mallat, 1999).
Furthermore, if \(f\) is smooth with respect to \((\phi_{i})_{i\geq 0}\), then it is well approximated by the initial terms of the series:
\[f(y|\mathbf{u})\approx f_{I}(y|\mathbf{u}):=\sum_{i=0}^{I}\beta_{i}(\mathbf{u })\phi_{i}(y), \text{for some }I\in\mathbb{N}. \tag{1}\]
FlexCodeTS obtains an estimate for \(f(y|\mathbf{u})\) by choosing a value of \(I\), estimating \(\beta_{1}(\mathbf{u}),\ldots,\beta_{I}(\mathbf{u})\), and plugging these estimates into eq. (1). Hence, FlexCodeTS breaks CDE into two tasks: estimating \(\beta_{i}(\mathbf{u})\) and choosing \(I\).
In order to estimate \(\beta_{i}(\mathbf{u})\) an arbitrary regression method can be used. The key insight is that \(\beta_{i}(\mathbf{u})\) is the projection of \(f\) over \(\phi_{i}\). Therefore, \(\beta_{i}(\mathbf{u})=\mathbb{E}\left[\phi_{i}(Y)|\mathbf{u}\right]\). That is, \(\beta_{i}(\mathbf{u})\) is the regression of \(\phi_{i}(Y)\) onto \(\mathbf{u}\). Hence, an estimate of \(\beta_{i}(\mathbf{u})\), \(\widehat{\beta}_{i}(\mathbf{u})\), can be obtained by applying an arbitrary regression method to \((\mathbf{u}_{1},\phi_{i}(y_{1})),\ldots,(\mathbf{u}_{T},\phi_{i}(y_{T}))\).
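To make this step concrete, below is a minimal Python sketch of the coefficient regressions (the paper's experiments are implemented in R; all function names here are ours), assuming the response has been rescaled to \([0,1]\) and using the cosine basis \(\phi_{0}(y)=1\), \(\phi_{i}(y)=\sqrt{2}\cos(\pi iy)\):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def phi(i, y):
    # Cosine basis on [0, 1]: phi_0 = 1, phi_i(y) = sqrt(2) * cos(pi * i * y)
    return np.ones_like(y) if i == 0 else np.sqrt(2.0) * np.cos(np.pi * i * y)

def fit_beta(U, y, I_max):
    """Fit beta_i(u) = E[phi_i(Y) | u] for i = 1..I_max via regression."""
    return [RandomForestRegressor().fit(U, phi(i, y)) for i in range(1, I_max + 1)]

def predict_density(models, u, y_grid):
    """FlexCodeTS estimate f_I(y|u); beta_0(u) = 1 since f integrates to one."""
    f = np.ones_like(y_grid)
    for i, m in enumerate(models, start=1):
        f += m.predict(u[None, :])[0] * phi(i, y_grid)
    return np.clip(f, 0.0, None)  # crude nonnegativity correction
```

Any regressor with a `fit`/`predict` interface can be swapped in for the random forest, which is precisely the flexibility FlexCodeTS exploits.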
Next, the value of \(I\) controls the bias-variance trade-off and is chosen through temporal data splitting (fig. 1). A FlexCodeTS estimator, \(\hat{f}_{I}(y|\mathbf{u})\), is defined for each choice of \(I\):
**Definition 2** (FlexCodeTS CDE).: \[\hat{f}_{I}(y|\mathbf{u})=\sum_{i=0}^{I}\widehat{\beta}_{i}(\mathbf{u})\phi_ {i}(y).\]
The performance of each \(\hat{f}_{I}(y|\mathbf{u})\) is evaluated according to the CDE loss:
\[L(\hat{f}_{I},f)=\int\left(\hat{f}_{I}(y|\mathbf{u})-f(y|\mathbf{u})\right)^{2 }dyd\mathbb{P}(\mathbf{u}).\]
Figure 1: Data-splitting used to tune the cutoff \(I\) in FlexCode.
Although the CDE loss depends on the unknown CDE, \(f\), it can be estimated up to a constant. First, the CDE loss admits the following simplification:
\[L(\hat{f}_{I},f)=\int\mathbb{E}[\hat{f}_{I}^{2}(y|\mathbf{U})]dy-2\mathbb{E}[ \hat{f}_{I}(Y|\mathbf{U})]+K \tag{2}\]
Using the above equation, the CDE loss can be estimated up to a constant. The training set is used to estimate \(\beta\), and the validation set evaluates the expression in eq. (2) by replacing expectations with empirical averages:
\[\hat{L}(\hat{f}_{I},f)=\frac{\sum_{i=T+1}^{T^{*}}\int\hat{f}_{I}^{2}(y|\mathbf{ U}_{i})dy}{T^{*}-T}-2\frac{\sum_{i=T+1}^{T^{*}}\hat{f}_{I}(Y_{i}|\mathbf{U}_{i})} {T^{*}-T} \tag{3}\]
Finally, the value of \(I\) is chosen by minimizing \(\hat{L}(\hat{f}_{I},f)\).
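Continuing the sketch above, the cutoff can be tuned on the temporally later validation block by minimizing the empirical loss of eq. (3); since the estimates are nested in \(I\), fitting once with the largest cutoff suffices:

```python
def cde_loss(models, U_val, y_val, y_grid):
    """Empirical CDE loss of eq. (3), up to the constant term."""
    dy = y_grid[1] - y_grid[0]
    sq, fit = 0.0, 0.0
    for u, yv in zip(U_val, y_val):
        f = predict_density(models, u, y_grid)
        sq += np.sum(f ** 2) * dy        # integral of f_I^2(y|u) over y
        fit += np.interp(yv, y_grid, f)  # f_I evaluated at the observed y
    return (sq - 2.0 * fit) / len(y_val)

def select_cutoff(models_max, U_val, y_val, y_grid):
    """Choose I that minimizes the estimated validation loss."""
    losses = [cde_loss(models_max[:I], U_val, y_val, y_grid)
              for I in range(1, len(models_max) + 1)]
    return int(np.argmin(losses)) + 1
```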
One of the main advantages of FlexCodeTS is that the choice of the regression method can follow the structure of the problem at hand. For instance, if there is a sparse relationship between \(Y_{t}\) and \(\mathbf{U}_{t}\), XGBoost (Chen et al., 2015) and random forests perform variable selection. If \(\mathbf{U}_{t}\) belongs to a low-dimensional submanifold, then kNN and spectral series (Lee and Izbicki, 2016) achieve optimal rates.
Next, we provide theoretical results regarding the rate of convergence of the proposed CDE.
## 3 Theory
This Section discusses the theoretical properties of FlexCodeTS. Two main results are provided. First, under mild conditions, we determine the rate of convergence of FlexCodeTS as a function of the rate of convergence of the estimates of the regression functions, \(\beta\). As a result, if the regression method that is used for estimating \(\beta\) is consistent, then FlexCodeTS is consistent. Second, under additional weak assumptions, if \(\beta\) is estimated using the Nadaraya-Watson estimator, then the rate of convergence of FlexCodeTS is obtained.
For all of the above results, data is assumed to be stationary:
**Assumption 3** (Stationarity).: \(\{(\mathbf{U}_{i},Y_{i}):i\in\mathbb{N}\}\) _is stationary._
Also, the target \(f(y|\mathbf{u})\) is assumed to be smooth over \(y\) for every \(\mathbf{u}\). Specifically, we assume that the \(s\)-th weak derivative of \(f(\cdot|\mathbf{u})\) is uniformly bounded over \(\mathbf{u}\). This concept is formalized by requiring that the functions \(f(\cdot|\mathbf{u})\) belong to similarly smooth Sobolev spaces.
**Definition 4** (Wasserman (2006)).: _For \(s>\frac{1}{2}\) and \(c>0\), the Sobolev space, \(W_{\phi}(s,c)\) is_
\[W_{\phi}(s,c)=\left\{f\equiv\sum_{i=1}^{\infty}\beta_{i}\phi_{i}:\sum_{i=1}^{ \infty}(\pi i)^{2s}\beta_{i}^{2}\leq c^{2}\right\}\]
**Assumption 5** (Smoothness in the \(y\) direction).: \(f(\cdot|\mathbf{u})\in W_{\phi}(s(\mathbf{u}),c(\mathbf{u}))\)_, where \(\inf_{\mathbf{u}}s(\mathbf{u}):=\gamma>\frac{1}{2}\) and \(\mathbb{E}[c^{2}(\mathbf{U})]:=C<\infty\)._
The next subsection discusses the rate of convergence of FlexCodeTS as a function of the rate of convergence of the estimates of \(\beta\).
### General rate of convergence of FlexCodeTS
In order to study the rate of convergence of FlexCodeTS this section assumes that the functions \(\beta_{i}(\mathbf{u})\) are estimated using regressors, \(\widehat{\beta}_{i}(\mathbf{u})\), which converge uniformly at a rate \(O_{P}(T^{-r})\).
**Assumption 6** (Regression convergence).: _There exists \(r>0\) such that_
\[\sup_{i}\int\left(\widehat{\beta}_{i}(\mathbf{u})-\beta_{i}(\mathbf{u}) \right)^{2}d\mathbb{P}(\mathbf{u})=O_{P}(T^{-2r})\]
Using assumption 6, it is possible to obtain a rate of convergence for FlexCodeTS under the CDE loss:
**Theorem 7**.: _Under Assumptions 3, 5 and 6, FlexCodeTS (definition 2) satisfies the following bound on the CDE loss,_
\[\iint\left(\widehat{f}_{I}(y|\mathbf{u})-f(y|\mathbf{u})\right)^{2}dyd \mathbb{P}(\mathbf{u})\leq IO_{P}\left(T^{-2r}\right)+O\left(I^{-2\gamma} \right).\]
_Hence, by taking the optimal rate \(I\sim T^{\frac{2r}{2\gamma+1}}\),_
\[\iint\left(\widehat{f}_{I}(y|\mathbf{u})-f(y|\mathbf{u})\right)^{2}dyd \mathbb{P}(\mathbf{u})\leq O_{P}\left(T^{-\frac{4\gamma r}{2\gamma+1}}\right).\]
The rate of convergence in theorem 7 depends on the rate of convergence of \(\widehat{\beta}_{i}(\mathbf{u})\), as in assumption 6. The rate in assumption 6 is typically related to the number of covariates and to smoothness assumptions regarding \(\beta\). Section 3.2 shows that, under suitable conditions, the Nadaraya-Watson regression estimator satisfies assumption 6 and, therefore, theorem 7 can be applied.
### Rate of convergence of FlexCodeTS using Nadaraya-Watson regression
This section studies the convergence of FlexCodeTS when \(\beta_{i}\) is estimated using a Nadaraya-Watson regression with a uniform kernel:
**Definition 8** (Nadaraya-Watson regression with uniform kernel).: \[\widehat{\beta}_{i}(\mathbf{u})=\frac{\sum_{j=1}^{T}\mathbb{I}(\|\mathbf{U}_{j }-\mathbf{u}\|_{2}\leq\delta_{T})\phi_{i}(Y_{j})}{\sum_{j=1}^{T}\mathbb{I}(\| \mathbf{U}_{j}-\mathbf{u}\|_{2}\leq\delta_{T})},\qquad\qquad\text{for some choice of $\delta_{T}$}.\]
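For illustration, definition 8 amounts to averaging \(\phi_{i}(Y_{j})\) over the training points whose covariates fall in a ball of radius \(\delta_{T}\) around \(\mathbf{u}\). A direct, unoptimized sketch (ours, reusing `phi` from the earlier snippet):

```python
def nw_beta(U_train, y_train, i, u, delta):
    """Nadaraya-Watson estimate of beta_i(u) with a uniform kernel (def. 8)."""
    in_ball = np.linalg.norm(U_train - u, axis=1) <= delta
    if not in_ball.any():
        return 0.0  # empty neighbourhood; def. 8 is undefined here, we return 0
    return phi(i, y_train[in_ball]).mean()

# Theorem 13 below suggests the bandwidth scaling delta_T ~ T**(-1 / (2 + d)).
```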
The rate of convergence for \(\widehat{\beta}_{i}(\mathbf{u})\) is obtained following closely the results in Truong and Stone (1992). In order to apply these results, additional assumptions are required. First, in order to obtain that every \(\beta_{i}(\mathbf{u})\) is smooth over \(\mathbf{u}\), assumption 9 ensures that \(f(y|\mathbf{u})\) is Lipschitz continuous:
**Assumption 9** (Smoothness in \(\mathbf{u}\) direction).: \((Y,U)\) _is bounded and there exists \(M_{0}>0\) such that, for every \(y\in\mathcal{Y}\),_
\[|f(y|\mathbf{u}_{1})-f(y|\mathbf{u}_{2})|\leq M_{0}||\mathbf{u}_{1}-\mathbf{u }_{2}||.\]
Similarly, the value of \(\beta_{i}(\mathbf{u})\) is bounded. Since \(\beta_{i}(\mathbf{u})=\mathbb{E}[\phi_{i}(Y)|\mathbf{U}]\), this restriction is obtained by bounding the image of \(\phi_{i}\). This assumption is satisfied, for instance, by the cosine and the trigonometric basis.
**Assumption 10**.: _There exists \(M>0\) such that \(\sup_{i\in\mathbb{N},y\in\mathcal{Y}}|\phi_{i}(y)|<M\)._
Also, the joint distribution of \(\mathbf{U}_{0},\mathbf{U}_{1},\ldots,\mathbf{U}_{T}\) does not have extremal high probability or low probability regions:
**Assumption 11** (Bounded density).: _The distribution of \(\mathbf{U}_{0}\) is continuous and its density is bounded away from zero and infinity. Also, the conditional distribution of \(\mathbf{U}_{j}\) given \(\mathbf{U}_{0}\) is bounded away from zero and infinity._
Finally, \((\mathbf{U}_{i},Y_{i})\) is assumed to have a short memory. Such a constraint is obtained by assuming the series to be \(\alpha\)-mixing:
**Assumption 12** (Strong \(\alpha\)-mixing).: _Let \(\mathcal{F}_{0}\) and \(\mathcal{F}^{k}\) denote the \(\sigma\)-fields generated by \(\{(\mathbf{U}_{i},Y_{i}):i\leq 0\}\) and \(\{(\mathbf{U}_{i},Y_{i}):i\geq k\}\), respectively. Also, let_
\[\alpha(k)=\sup_{A\in\mathcal{F}_{0},B\in\mathcal{F}^{k}}\left|\mathbb{P}(A \cap B)-\mathbb{P}(A)\mathbb{P}(B)\right|.\]
_There exists \(\rho\in(0,1)\) such that \(\alpha(k)=O\left(\rho^{k}\right)\)._
Under the above assumptions, it is possible to obtain the rate of convergence of FlexCodeTS with Nadaraya-Watson regression:
**Theorem 13**.: _If \(\widehat{\beta}_{i}(\mathbf{u})\) is such as in definition 8 and satisfies \(\delta_{T}\sim T^{-\frac{1}{2+d}}\), and \(\hat{f}_{I}(y|\mathbf{u})\) is such as in definition 2 with \(I\sim T^{\frac{2}{(2\gamma+1)(2+d)}}\), then under assumptions 3, 5 and 9 to 12,_
\[\iint\left(\hat{f}_{I}(y|\mathbf{u})-f(y|\mathbf{u})\right)^{2}dyd\mathbb{P}( \mathbf{u})=O_{P}\left(T^{\frac{-2\gamma}{2\gamma+1+d\left(\gamma+\frac{1}{2} \right)}}\right).\]
The same proof strategy used in theorem 13 can be used to derive specific rates for other regression estimators for \(\beta_{i}(\mathbf{u})\) when the convergence rate is known. For example, this is the case for the neural networks-based method explored by Chen and White (1999).
The next section evaluates the empirical performance of FlexCodeTS in real and simulated datasets.
## 4 Experiments
This section compares FlexCodeTS to two other CDEs: NNKCDE (Pospisil and Dalmasso, 2019; Dalmasso et al., 2020; Izbicki et al., 2020) and GARCH (Bollerslev, 1986). NNKCDE takes the \(k\)-nearest neighbors of an evaluation point, \(\mathbf{u}\), and uses these points to estimate \(f(y|\mathbf{u})\) through a kernel density estimator. GARCH assumes that \(Y_{t}\) follows an ARMA model and that the residuals follow a SGARCH model.
While section 4.1 compares these methods using simulated data, section 4.2 compares them using real data on cryptocurrency prices and the Individual Household Electric Power Consumption data from UCI.
### Simulated Data
This section compares FlexCodeTS, NNKCDE and GARCH using simulated data. We generate data using 3 sample sizes: \(n=1000\), \(2500\) and \(5000\). The following 6 simulation scenarios are studied:
1. **AR**: \(Y_{t}=0.2Y_{t-1}+0.3Y_{t-2}+0.35Y_{t-3}+\epsilon_{t}\), where \(\epsilon_{t}\sim N(0,1)\).
2. **ARMA JUMP**: \(Y_{t}=0.1Y_{t-1}+0.4Y_{t-2}+0.4Y_{t-3}+0.01-0.3Z_{t}+0.05(1+Z_{t})\epsilon_{t}\), where \(\epsilon_{t}\sim N(0,1)\) and \(Z_{t}\sim\text{Bernoulli}(0.05)\).
3. **ARMA JUMP T**: \(Y_{t}=0.1Y_{t-1}+0.4Y_{t-2}+0.4Y_{t-3}+0.01-0.3Z_{t}+0.05(1+Z_{t})\epsilon_{t}\), where \(\epsilon_{t}\sim t_{3}\) and \(Z_{t}\sim\text{Bernoulli}(0.05)\), and \(t_{3}\) denotes Student's t-distribution with 3 degrees of freedom.
4. **NONLINEAR MEAN**: \(Y_{t}=\sin^{2}{(\pi Y_{t-3})}+\epsilon_{t}\), where \(\epsilon_{t}\sim N(0,\sigma^{2})\).
5. **NONLINEAR VARIANCE**: \(Y_{t}\sim N(0,\sigma_{t}^{2})\), where \(\sigma_{t}=0.1\) if \(|Y_{t-3}|>0.5\) and \(\sigma_{t}=1\), otherwise.
6. **JUMP DIFFUSION**: The model proposed by Christoffersen et al. (2016), "Time-varying Crash Risk: The Role of Market Liquidity".
The code for the above simulations as well as for training and evaluating the CDEs was implemented in the R language. The estimators were evaluated according to the CDE loss (eq. (3)) and the quantile loss (Koenker and Hallock, 2001).
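The original experiments were implemented in R; purely for illustration, the following Python sketch (ours) generates scenario **AR** and builds the lagged covariate matrix \(\mathbf{u}_{t}\):

```python
import numpy as np

def simulate_ar(n, burn_in=100, seed=0):
    """Scenario AR: Y_t = 0.2 Y_{t-1} + 0.3 Y_{t-2} + 0.35 Y_{t-3} + eps_t."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n + burn_in)
    eps = rng.standard_normal(n + burn_in)
    for t in range(3, n + burn_in):
        y[t] = 0.2 * y[t - 1] + 0.3 * y[t - 2] + 0.35 * y[t - 3] + eps[t]
    return y[burn_in:]  # drop burn-in so the series is near-stationary

def lag_matrix(y, n_lags):
    """Covariates U_t = (y_{t-1}, ..., y_{t-n_lags}) and responses Y_t."""
    U = np.column_stack(
        [y[n_lags - k - 1 : len(y) - k - 1] for k in range(n_lags)])
    return U, y[n_lags:]
```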
As a first comparison, each CDE method is evaluated according to the estimated CDE loss with 95% confidence intervals. In this scenario, for all the methods, the CDEs were fit using the correct number of lags. For instance, in the **NONLINEAR MEAN** model, 3 lags were used. Figure 2 shows that FlexCodeTS significantly outperforms GARCH and NNKCDE in all simulations except for **AR** and **JUMP DIFFUSION**. In **JUMP DIFFUSION**, FlexCodeTS and GARCH have similar losses and outperform NNKCDE. In **AR**, GARCH slightly outperforms FlexCodeTS and both of them outperform NNKCDE. The high performance of GARCH in **AR** is expected, since it is the only scenario in which the GARCH parametric model is correctly specified.
The analysis in fig. 2 can be complemented by evaluating whether the performance of each method is influenced by the incorrect specification of the number of lags. Figure 3 studies this influence in the **AR** scenario. While the performance of GARCH and NNKCDE is heavily affected by the chosen number of lags, the performance of FlexCodeTS is robust and does not degrade as the number of lags increases. As a result, although GARCH outperforms FlexCodeTS when the correct number of lags (3) is chosen, FlexCodeTS overtakes it as the number of adjusted lags increases.
Figure 4 explains why the performance of FlexCodeTS in **AR** is robust over the number of adjusted lags. We define the importance of each lag in FlexCodeTS with Random Forest regression as the average Random Forest importance (Breiman, 2001) of the lag across all regression functions, \(\widehat{\beta}_{i}(\mathbf{u})\). Figure 4 shows that, even when 50 lags are
incorporated into the model, only the first 3 lags have a high importance. These lags are precisely the ones that are used in **AR**. This evidence suggests that FlexCodeTS automatically selects the relevant lags, which explains why its performance is robust to the number of adjusted lags.
### Applications to real data
This section compares the performance of FlexCodeTS, NNKCDE and GARCH in applications to real data: Individual Household Electric Power Consumption from UCI1 and cryptocurrency prices.
Footnote 1: Available at [http://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption](http://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption).
Figure 2: Estimated CDE loss and 95% confidence intervals for FlexCodeTS, GARCH and NNKCDE under 6 simulation scenarios. FlexCodeTS outperforms GARCH and NNKCDE in all scenarios except for **AR** and **JUMP DIFFUSION**.
#### 4.2.1 Individual Household Electric Power Consumption (IHEPC) data
The IHEPC data contains roughly \(2\cdot 10^{6}\) measurements gathered in a house located in France during 47 months. Although FlexCodeTS with XGBoost regression runs using the full data, NNKCDE and GARCH experience convergence issues. In order to be able to compare all methods, we restrict the analysis to the first \(4\cdot 10^{4}\) measurements. Among these measurements, 70% are used for training, 10% for validation, and the remaining 20% for testing the models. We treat "global active power" as the response variable and use as predictors the ten previous lags and also all the other covariates measured at the time, including dummies for the hour of the day.
FlexCodeTS had a high performance in the IHEPC data. For instance, FlexCodeTS had a CDE loss of -4.4, lower than those of GARCH (-3.9) and of NNKCDE (-3.2). Similarly, fig. 5 presents each method's log pinball loss for each quantile from 5% to 95%. FlexCodeTS has the lowest pinball loss for every quantile, except for 95%.
Figure 6 shows the importance of each variable in FlexCodeTS. The most relevant variable is the global reactive power. While the global active power measures the electrical consumption by appliances, the global reactive power measures how much electricity bounces back and forth without any usage. The hour variables also have a high importance, which shows that the predictions for electrical usage also depend on the time of the day.
#### 4.2.2 Currency data
This section studies conditional density estimation in currency data. The currency data is composed of daily measurements from 11/2017 to 11/2022 of the S&P500 index, the exchange rate from euro to dollar, and closing prices for Ether and Bitcoin. Due to their high volatility, cryptocurrency prices provide a useful complement to the previous analysis.
Figure 3: CDE loss for each method as a function of the number of lags that is used. The performance is assessed in the **AR** scenario, where the parametric model in GARCH is correct. While GARCH outperforms FlexCodeTS when the chosen number of lags is correct (3), FlexCodeTS outperforms GARCH as the number of lags included in the model increases.
Using this data, we compare the performance of FlexCodeTS to that of GARCH. NNKCDE failed to run with the available RAM and, thus, was not included. Data from 2017 to 2020 was used for training, from 2020 to 2021 for validation, and from 2021 to 2022 for testing. In all comparisons, the response variable is one of the 4 chosen series. The following variables were added as predictors: 5 lags of daily return, daily amplitude, variance of prices in the last 3 days, minimum and maximum returns in the last 5 days. FlexCodeTS was trained with LASSO regression.
Figure 7 shows that FlexCodeTS generally outperforms GARCH for all of the response variables. The graph on the left side presents the CDE loss for each method. For all of the 4 series, the CDE loss is higher for GARCH than for FlexCodeTS. The graph on the right side compares the pinball loss of each method for quantiles 1%, 5%, 95%, and 99%. For the Bitcoin, Ether and S&P500 series, the pinball loss for GARCH is higher than that for FlexCodeTS in all quantiles. The only exception to this trend occurs in the EUR series, in which both FlexCodeTS and GARCH have similarly small pinball losses.
## 5 Conclusions
This paper introduces FlexCodeTS, a new time series conditional density estimator. FlexCodeTS is a flexible nonparametric CDE, which can inherit the properties of arbitrary regression methods. For instance, in this paper FlexCodeTS was implemented based on penalized linear regression, random forests, and XGBoost. As a future improvement, FlexCodeTS could also be implemented based on machine learning methods for time series, such as long short-term memory models (Hochreiter and Schmidhuber, 1997), attention-based networks (Vaswani et al., 2017), and gated recurrent unit networks (Cho et al., 2014).
Figure 4: Average importance of each adjusted lag in the **AR** scenario according to FlexCodeTS with Random Forest regression. FlexCodeTS assigns a high importance to the first three lags, which are the only ones used in this scenario.
FlexCodeTS also inherits useful properties from the chosen regression method. For instance, predictor importance scores are obtained with XGBoost and Random Forest. Moreover, from a theoretical perspective, we show that FlexCodeTS inherits the rate of convergence of the chosen regression method. Hence, FlexCodeTS can adapt its convergence by employing the regression method that best fits the structure of data.
Empirical studies with simulated and real data show that FlexCodeTS performs well, when compared to other CDE methods for time series, such as NNKCDE and GARCH. FlexCodeTS generally has the best performance amongst the compared methods when either the CDE or the pinball loss are considered.
Figure 5: Log pinball loss for GARCH, NNKCDE and FlexCodeTS for each quantile from 5% to 95%. FlexCodeTS outperforms NNKCDE and GARCH for every quantile except for 95%.
Figure 6: Average importance of predictors according to FlexCodeTS with XGBoost regression in the IHEPC data.
## Acknowledgments
We thank Marcelo Fernandes for early feedback and discussions on this work. This work was supported by the Silicon Valley Community Foundation (SVCF; grant #2018-188547). Rafael Izbicki is grateful for the financial support of FAPESP (grant 2019/11321-9) and CNPq (grants 309607/2020-5 and 422705/2021-7). Rafael Stern is grateful for the financial support of FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant #2013/07699-0).
|
2303.07025 | Experimental investigation of the effect of topological insulator on the
magnetization dynamics of ferromagnetic metal: $BiSbTe_{1.5}Se_{1.5}$ and
$Ni_{80}Fe_{20}$ heterostructure | We have studied ferromagnetic metal/topological insulator bilayer system to
understand magnetization dynamics of ferromagnetic metal (FM) in contact with a
topological insulator (TI). At magnetic resonance condition, the precessing
magnetization in the metallic ferromagnet ($Ni_{80}Fe_{20}$) injects spin
current into the topological insulator ($BiSbTe_{1.5}Se_{1.5}$), a phenomenon
known as spin-pumping. Due to the spin pumping effect, fast relaxation in the
ferromagnet results in the broadening of ferromagnetic resonance linewidth
($\Delta H$). We evaluated the parameters like effective Gilbert damping
coefficient ($\alpha_{eff}$), spin-mixing conductance ($g_{eff}^{\uparrow
\downarrow}$) and spin current density ($j_S^0$) to confirm a successful spin
injection due to spin-pumping into the $BiSbTe_{1.5}Se_{1.5}$ layer. TIs embody
a spin-momentum locked surface state that span the bulk band-gap. It can act
differently to the FM magnetization than the other normal metals. To probe the
effect of topological surface state, a systematic low temperature study is
crucial as surface state of TI dominates at lower temperatures. The exponential
growth of $\Delta H$ for all different thickness combination of FM/TI bilayers
and effective Gilbert damping coefficient ($\alpha_{eff}$) with lowering
temperature confirms the prediction that spin chemical bias generated from
spin-pumping induces surface current in TI due to spin-momentum locking. The
hump-like feature of magnetic anisotropy field ($H_K$) of the bilayer around 60K
suggests that the decrease of interfacial in-plane magnetic anisotropy can
result from exchange coupling between the TI surface state and the local
moments of FM layer. | Sayani Pal, Soumik Aon, Subhadip Manna, Sambhu G Nath, Kanav Sharma, Chiranjib Mitra | 2023-03-13T11:42:37Z | http://arxiv.org/abs/2303.07025v2 | Experimental investigation of the effect of topological insulator on the magnetization dynamics of ferromagnetic metal: \(BiSbTe_{1.5}Se_{1.5}\) and \(Ni_{80}Fe_{20}\) heterostructure
###### Abstract
We have studied a ferromagnetic metal/topological insulator bilayer system to understand the magnetization dynamics of a ferromagnetic metal (FM) in contact with a topological insulator (TI). At the magnetic resonance condition, the precessing magnetization in the metallic ferromagnet (\(Ni_{80}Fe_{20}\)) injects spin current into the topological insulator (\(BiSbTe_{1.5}Se_{1.5}\)), a phenomenon known as spin-pumping. Due to the spin pumping effect, fast relaxation in the ferromagnet results in the broadening of the ferromagnetic resonance linewidth (\(\Delta H\)). We evaluated parameters like the effective Gilbert damping coefficient (\(\alpha_{eff}\)), spin-mixing conductance (\(g_{eff}^{\uparrow\downarrow}\)) and spin current density (\(j_{s}^{0}\)) to confirm a successful spin injection due to spin-pumping into the \(BiSbTe_{1.5}Se_{1.5}\) layer. TIs embody a spin-momentum locked surface state that spans the bulk band-gap, and it can act differently on the FM magnetization than other normal metals. To probe the effect of the topological surface state, a systematic low temperature study is crucial, as the surface state of a TI dominates at lower temperatures. The exponential growth of \(\Delta H\) for all different thickness combinations of FM/TI bilayers and of the effective Gilbert damping coefficient (\(\alpha_{eff}\)) with lowering temperature confirms the prediction that the spin chemical bias generated from spin-pumping induces a surface current in the TI due to spin-momentum locking. The hump-like feature of the magnetic anisotropy field (\(H_{K}\)) of the bilayer around 60K suggests that the decrease of interfacial in-plane magnetic anisotropy can result from exchange coupling between the TI surface state and the local moments of the FM layer.
## I Introduction
Spintronics is one of the emerging fields that has witnessed remarkable progress on both fundamental and technological fronts over the past couple of decades. Phenomena like spin-orbit torque [2], spin hall effect [3], giant magnetoresistance [4], tunnelling magnetoresistance [5], and domain wall motion [6] provide the basis for applications in memory devices [7], storage technology [8], logic gates [9] and magnetic sensors [10]. These devices utilise the spin degree of freedom of electrons and its interaction with orbital moments through spin-orbit coupling. Complete knowledge of the processes of generation, manipulation and detection of the spin degrees of freedom, or the spin current, is essential for widespread applications in this field. Among the currently available spin current generation processes, spin-pumping [13; 14] is the most efficient method: the precessing magnetization of the ferromagnetic (FM) layer in a ferromagnetic multilayer device injects spin current into the adjacent layer by transferring spin angular momentum. The focus now should be on spin pumping with new materials [15] having large spin torque, essential for significant spin-charge interconversion. Topological insulators (TI) are a new class of materials that have strong spin-orbit coupling [16], capable of exerting spin-orbit torque (SOT) on the magnetic moments of a ferromagnet, which is essential for spintronic applications [17].
Three-dimensional (3D) TIs have received considerable attention for several possibilities in many scientific arenas including spintronics [18; 19]. 3D TIs have an insulating bulk with a narrow band gap and gapless two-dimensional (2D) spin polarized surface states. Conduction electrons in the surface state of a TI behave as Dirac fermions whose direction of motion is uniquely specified by their spin direction due to spin-momentum locking [20; 21; 22]. There has been significant development in the field of spin-transport phenomena in FM/TI systems in recent years. The effect of spin pumping can be witnessed in the enhanced Gilbert damping coefficient (\(\alpha_{eff}\)) value of the ferromagnet. The enhancement is governed by the spin-torque parameter of the interface, or the mixing conductance (\(g_{eff}^{\uparrow\downarrow}\)). Spin transfer at the interface is believed to be analogous to scattering in transport theory. Once the spin current reaches the TI layer, it starts losing its spin polarization there due to spin diffusion. The mechanism of spin diffusion into a TI should affect the magnetization of the FM differently than a normal metal because of the exotic nature of the TI surface state. For a complete understanding of the effect of the TI surface state on FM magnetization dynamics, a low temperature study is necessary because the surface state of a TI dominates at lower temperatures.
In this paper, we present a study of spin-pumping from \(Ni_{80}Fe_{20}\) into \(BiSbTe_{1.5}Se_{1.5}\). We confirm a successful spin injection into the \(BiSbTe_{1.5}Se_{1.5}\) layer from the values of the spin-transport parameters \(\alpha_{eff}\), \(g_{eff}^{\uparrow\downarrow}\) and spin current density (\(j_{s}^{0}\)) obtained in our experiment. In low temperature measurements of the FWHM and Gilbert damping coefficient we have witnessed an exponential enhancement at lower temperatures, which is connected with the spin chemical bias induced surface current in the TI. For further investigation of the effect of the TI surface state on the FM magnetization, we have performed low-temperature measurements of the effective magnetization and anisotropy field. The anisotropy field of the in-plane magnetization shows a hump-like pattern around 60K and then decreases with further decrease in temperature. This supports the theory of exchange coupling between the TI surface state and the FM layer, which acts normal to the TI/FM interface.
## II Sample preparation and characterization
For this particular work, we have prepared different thickness combinations of topological insulator (TI)/ferromagnet (FM) bilayer heterostructures. \(BiSbTe_{1.5}Se_{1.5}\) (BSTS) has been taken as the TI material and Permalloy (\(Ni_{80}Fe_{20}\)) has been used as the ferromagnetic material. BSTS thin films were grown on silicon (Si 111) substrates using the pulsed laser deposition (PLD) technique [27; 28]. The target material was prepared using 99.999% pure Bi, Sb, Te, and Se in a 1:1:1.5:1.5 stoichiometric ratio. The films were deposited through ablation of the target by a KrF excimer laser (248 nm, 25 ns pulse width) at a low repetition rate of 1Hz and \(1.2Jcm^{-2}\) laser fluence, keeping the substrate temperature fixed at \(250^{\circ}C\) and the chamber partial pressure at 0.5 mbar (base pressure \(2\times 10^{-5}\) mbar) with a continuous flow of Ar gas. After deposition, the TI films were immediately transferred into the thermal evaporation chamber for ferromagnet deposition. Commercially available 99.995% pure permalloy (\(Ni_{80}Fe_{20}\)) pellets were used for deposition. The Py film was deposited [29] on top of the TI film at a rate of \(1.2\AA\)/s (crystal monitor: Inficon SQM 160), keeping the chamber pressure fixed at \(1\times 10^{-6}\) torr (base pressure \(1\times 10^{-7}\) torr). For characterization of the films, X-ray diffraction analysis (XRD), field emission scanning electron microscope (FE-SEM) imaging and atomic force microscopy (AFM) facilities have been used. The X-ray reflectometry technique has been used for thickness measurements. For convenience, we label the BSTS films of different thicknesses as follows: 10nm BSTS as BSTS1, 21nm BSTS as BSTS2, 28nm BSTS as BSTS3, 37nm BSTS as BSTS4.
## III Results and discussion
For a systematic study of the FM/TI bilayer system, we have performed in-plane FMR measurements in a reflection mode geometry using a short-circuited CPW, as shown in fig.1. We show typical FMR signals at different microwave frequencies for the Py(15nm)/BSTS1 sample in fig.2a, the frequency dependence of the field linewidth (\(\Delta H\) vs. \(f\)) in fig.2b and of the resonance field (\(f\) vs. \(H\)) in fig.2c. These give us valuable information about the magnetization dynamics in the ferromagnet, which can be described within the framework proposed by Landau, Lifshitz and Gilbert,
\[\frac{d\vec{M}}{dt}=-\gamma\vec{M}\times\vec{H}_{eff}+\frac{\alpha_{eff}}{M_{S}}\vec{M}\times\frac{d\vec{M}}{dt} \tag{1}\]
where \(\gamma\) is the gyromagnetic ratio, \(\vec{M}\) is the magnetization vector, \(M_{S}\) is the saturation magnetization, \(\vec{H}_{eff}\) is the effective magnetic field, which includes the external field, demagnetization and crystalline anisotropy fields, and \(\alpha_{eff}\) is the effective Gilbert damping coefficient of the system. For a given magnetic material at ferromagnetic resonance, the resonance field and frequency follow the Kittel equation [30] given by,
\[f=\frac{\gamma}{2\pi}\sqrt{(H+H_{k})(H+H_{k}+4\pi M_{eff})} \tag{2}\]
where \(H\), \(H_{k}\) and \(4\pi M_{eff}\) are the external applied field, the magnetic anisotropy field and the effective magnetization, respectively. We have obtained \(H_{k}\) and \(4\pi M_{eff}\) for different FM/TI bilayer systems by fitting the Kittel equation to the \(f\) vs. \(H\) curve, as shown in fig.2c. The obtained \(4\pi M_{eff}\) value contains the saturation magnetization (\(4\pi M_{s}\)) and other anisotropic contributions. We can evaluate the \(4\pi M_{s}\) value by analysing the thickness dependent measurement of \(4\pi M_{eff}\) of the FM layer. In the lower thickness region of ferromagnetic thin films, \(4\pi M_{eff}\) is inversely proportional to the film thickness and follows the equation [31],
\[4\pi M_{eff}=4\pi M_{s}-\frac{2K_{s}}{M_{s}d} \tag{3}\]
Figure 1: Sample has been placed upside down on top of the CPW for FMR measurements. On the right hand side of the schematic diagram, magnetization precession and spin current propagation in the TI/FM bilayer sample has been shown.
where \(K_{S}\) is the surface anisotropy constant and \(d\) is the thickness of the FM film. The slope of the linear fit gives the anisotropy field contribution to \(4\pi M_{eff}\) and the intercept gives the \(4\pi M_{s}\) value, as shown in fig.2d. The \(4\pi M_{eff}\) does not depend on the thickness variation of BSTS at room temperature. The \(4\pi M_{s}\) values for the bare Py and Py/BSTS bilayer samples are 10.7 kOe and 8.48 kOe respectively. The \(K_{s}\) value has decreased from \(0.092\pm 0.008erg/cm^{2}\) in the bare Py film to \(0.091\pm 0.015erg/cm^{2}\) in the Py/BSTS2 bilayer. So the interfacial anisotropy constant, \(K_{i}(=K_{s}^{Py/TI}-K_{s}^{Py})\), for the Py/BSTS2 sample is \(-0.001erg/cm^{2}\). The negative value of \(K_{i}\) confirms an in-plane magnetic anisotropy (IMA) at the Py/BSTS interface at room temperature. A detailed discussion of IMA is provided in the last section, where the temperature variation of \(H_{k}\) is discussed.
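To illustrate the fitting steps behind fig.2c-d, the sketch below (ours, not the authors' code) fits the Kittel relation of Eq.2 and the thickness dependence of Eq.3, assuming \(g\approx 2\) so that \(\gamma/2\pi\approx 2.8\) MHz/Oe:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_2PI = 2.8e-3  # gamma / 2pi in GHz/Oe, assuming g ~ 2

def kittel(H, Hk, M4pi_eff):
    """Eq. (2): f = (gamma/2pi) * sqrt((H + Hk) * (H + Hk + 4*pi*Meff))."""
    return GAMMA_2PI * np.sqrt((H + Hk) * (H + Hk + M4pi_eff))

def meff_vs_thickness(inv_d, M4pi_s, slope):
    """Eq. (3): 4*pi*Meff = 4*pi*Ms - (2*Ks/Ms) * (1/d)."""
    return M4pi_s - slope * inv_d

# With H_res in Oe and f in GHz read off the FMR spectra:
# (Hk, M4pi_eff), _ = curve_fit(kittel, H_res, f, p0=[10.0, 9000.0])
# (M4pi_s, slope), _ = curve_fit(meff_vs_thickness, 1.0 / d_nm, M4pi_eff_arr)
```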
The damping coefficient can be determined by analysing the FMR linewidth (\(\Delta H\)) at different frequencies. \(\Delta H\) is the full width at half maximum of the FMR absorption spectra. It contains both the intrinsic and extrinsic contributions to the damping. The linewidth due to intrinsic Gilbert damping (\(\alpha_{eff}\)) is directly proportional to the resonance frequency (\(f\)) and follows the equation [32],
\[\Delta H=\Delta H_{0}+(\frac{2\pi\alpha_{eff}}{\gamma})f \tag{4}\]
Here \(\Delta H_{0}\) describes inhomogeneous linewidth broadening due to different extrinsic contributions like magnetic inhomogeneities, surface roughness and defects in the sample. We have evaluated the \(\alpha_{eff}\) values by fitting the \(\Delta H\) vs \(f\) curve for different FM/TI systems, as shown in fig.2b. This \(\alpha_{eff}\) consists of Gilbert damping in the bulk ferromagnet (\(\alpha_{FM}\)) and the enhanced damping (\(\Delta\alpha_{SP}\)) resulting from spin pumping into the adjacent TI layer [33; 34; 35].
\[\alpha_{eff}=\alpha_{FM}+\Delta\alpha_{SP} \tag{5}\]
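The extraction of \(\alpha_{eff}\) from Eq.4 is a straight-line fit in \(f\). A small sketch (ours), writing the slope as \(\alpha_{eff}/(\gamma/2\pi)\) so that \(\Delta H\) comes out in Oe when \(f\) is in GHz, again assuming \(g\approx 2\):

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_2PI = 2.8e-3  # gamma / 2pi in GHz/Oe for g ~ 2 (our assumption)

def linewidth(f_ghz, dH0, alpha_eff):
    """Eq. (4): dH = dH0 + (2*pi*alpha_eff/gamma)*f = dH0 + alpha_eff*f/(gamma/2pi)."""
    return dH0 + alpha_eff * f_ghz / GAMMA_2PI

# (dH0, alpha_eff), _ = curve_fit(linewidth, f_GHz, dH_Oe, p0=[10.0, 0.01])
```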
In an FM/NM heterostructure, a spin current arises in the non-magnetic layer as the FM magnetization precession pumps spin angular momentum into the NM layer, and the spin chemical potential gradient due to spin-flip relaxation in the NM aids the spin angular momentum transfer. This cost of angular momentum in the magnetic layer is brought about by the rapid relaxation of the magnetization precession, which in turn is enabled by the resonance (FMR) condition, where the angular momentum transfer is maximal. The associated spin current is related to the rapidly decaying magnetisation precession (\(\frac{d\vec{M}}{dt}\)), which in turn exerts a further torque \(\vec{M}\times\frac{d\vec{M}}{dt}\) that is related to the net spin current (\(I_{S}\)) into the FM. Thus the dynamics of magnetization in a TI/FM bilayer system can be modelled [49] as follows, by incorporating this additional torque term into the original LLG equation,
\[\frac{d\vec{M}}{dt}=-\gamma\vec{M}\times\vec{H}_{eff}+\alpha_{FM}\vec{M} \times\frac{d\vec{M}}{dt}+\frac{\gamma}{M_{S}V}\vec{I}_{S} \tag{6}\]
Figure 2: (a) Ferromagnetic resonance absorption spectra at different frequencies for the Py/BSTS bilayer system at room temperature after background subtraction; (b) field linewidth variation with resonance frequency at 300K for Py/BSTS bilayer samples with different Py thicknesses. Eq.4 has been used for fitting the curve and determining the Gilbert damping coefficient; (c) resonance field variation with resonance frequency at different temperatures for the Py(20nm)/BSTS2 system. Eq.2 has been used for fitting the curve and determining the effective magnetization; (d) linear fit of the thickness vs. effective magnetization data to evaluate the saturation magnetization for Py(t), Py(t)/BSTS2 and Py(15nm)/BSTS(t) at room temperature.
where
\[\vec{I}_{S}\propto\vec{M}\times\frac{d\vec{M}}{dt} \tag{7}\]
\(I_{S}\) is the net spin current (\(I_{S}^{pump}-I_{S}^{back}\)) pumped into the TI layer. \(I_{S}\) contains the FM magnetization precession induced spin-pump current (\(I_{S}^{pump}\)) into the NM as well as the spin accumulation driven spin back-flow current (\(I_{S}^{back}\)) into the FM. A non-magnetic layer is not an ideal spin sink, so spin accumulation in the NM layer creates a non-equilibrium chemical potential imbalance near the interface, which results in \(I_{S}^{back}\) flowing into the FM. The spin back-flow current reduces the total spin-pump current generated by the magnetization precession.
We can extract the \(\alpha_{FM}\) value from the total \(\alpha_{eff}\) value by performing the FM thickness(\(t_{FM}\)) dependent measurement of \(\alpha_{eff}\) for a particular thickness of the TI layer. \(\alpha_{eff}\) varies linearly with the inverse thickness of the ferromagnet[36] as shown in the following equation,
\[\alpha_{eff}=\alpha_{FM}+\frac{g\mu_{B}}{4\pi M_{s}t_{FM}}g_{eff}^{\uparrow \downarrow} \tag{8}\]
and
\[\Delta\alpha_{SP}=\alpha_{SP}/t_{FM}=\frac{g\mu_{B}}{4\pi M_{s}t_{FM}}g_{eff}^ {\uparrow\downarrow} \tag{9}\]
where \(g\), \(\mu_{B}\) and \(g_{eff}^{\uparrow\downarrow}\) are the \(g\)-factor, the Bohr magneton and the effective spin mixing conductance, respectively. The conductance parameter \(g_{eff}^{\uparrow\downarrow}\) governs the transport of spin current from the FM into the adjacent NM layer and is proportional to the torque acting on the FM in the presence of spin accumulation in the NM layer. The derivation of \(g_{eff}^{\uparrow\downarrow}\) is based on the magnetoelectronic circuit theory [37; 38] for spin transport at the FM/NM interface. In fig 3a we can see the linear fit of \(\alpha_{eff}\) vs \(t_{FM}^{-1}\). In this heterostructure, the damping coefficient values for all the different thicknesses of Py are enhanced in comparison to our previously reported [29] \(\alpha\) values for bare Py thin films, where the obtained value of \(\alpha\) was in the range of 0.004 to 0.008. This gives clear evidence of spin injection into the BSTS layer from the Py layer due to spin pumping. Here the \(\alpha_{FM}\) and \(\alpha_{SP}\) values obtained for Py/BSTS2 from the linear fit (Eq.8) are \(0.00639\pm 0.0009\) and \(0.15102\pm 0.0204\)\(nm\) respectively. Using the \(4\pi M_{s}\) value obtained from Eq.3 we can determine the \(g_{eff}^{\uparrow\downarrow}\) value. We obtained \(g_{eff}^{\uparrow\downarrow}=0.526\times 10^{19}\pm 0.071\times 10^{19}m^{-2}\) for the \(Py/BSTS\) bilayer film using Eq.8. It is in the range of previously reported \(g_{eff}^{\uparrow\downarrow}\) values for other combinations of ferromagnet and TI bilayer structures [39; 40; 42; 54]. We can also calculate the spin current density (\(j_{s}^{0}\)) of the FM/TI heterostructure using the \(g_{eff}^{\uparrow\downarrow}\) value in the following equation [41; 42]:
\[j_{s}^{0}=\frac{g_{eff}^{\uparrow\downarrow}\gamma^{2}h_{m}^{2}\hbar[4\pi M_{s}\gamma+\sqrt{(4\pi M_{s})^{2}\gamma^{2}+4\omega^{2}}]}{8\pi\alpha^{2}[(4\pi M_{s})^{2}\gamma^{2}+4\omega^{2}]} \tag{10}\]
Here \(\gamma\), \(h_{m}\), \(\hbar\), \(\omega\) and \(\alpha\) are the gyromagnetic ratio, the microwave magnetic field, the reduced Planck constant, the Larmor precession frequency and the effective Gilbert damping parameter, respectively. Using Eq.10, the \(j_{s}^{0}\) value for Py/BSTS2 was calculated to be \(0.978\times 10^{6}\pm 0.133\times 10^{6}Jm^{-2}\) in our experiment. For this particular bilayer system of Py and BSTS2, the values of the spin-mixing conductance and spin current density do not differ significantly from those of other high-SOC heavy metals [43; 44; 45; 46]. However, studies on CoFeB/BS by Jamali _et al._ found the \(g_{eff}^{\uparrow\downarrow}\) and \(j_{s}^{0}\) values to be in the range of \(1.2-14\times 10^{19}m^{-2}\) and \(2.6-10\times 10^{6}Am^{-2}\) respectively [47]. They suggest that the interfacial SOC can enhance the pumped surface current into the TI, and depending upon the strength of the SOC of the surface state one can get a larger spin mixing conductance. There are not many theories which can explain the spin-pumping formalism by explicitly including the SOC at the FM/TI interface. We believe that in our case the spin injection into the TI layer is largely influenced by the interface.
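For completeness, Eq.10 transcribed directly into code (a sketch, ours; the unit bookkeeping follows whatever convention the inputs use, so treat it as a transcription rather than a calibrated calculator):

```python
import numpy as np

HBAR = 1.0545718e-34  # reduced Planck constant in J*s

def spin_current_density(g_mix, gamma, h_m, M4pi_s, omega, alpha):
    """Eq. (10): DC spin current density j_s^0 pumped at resonance."""
    core = (M4pi_s * gamma) ** 2 + 4.0 * omega ** 2
    num = g_mix * gamma ** 2 * h_m ** 2 * HBAR * (M4pi_s * gamma + np.sqrt(core))
    den = 8.0 * np.pi * alpha ** 2 * core
    return num / den
```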
Figure 3: (a) \(\alpha_{eff}\) as a function of Py thickness for the Py/BSTS2 heterostructure at room temperature, which is well described by the linear relation of Eq.8; (b) \(\alpha_{eff}\) as a function of BSTS thickness for the Py(15nm)/BSTS heterostructure at room temperature
In the next section, we report the TI thickness-dependent study of \(\alpha_{eff}\) to understand how the surface and bulk of the TI affect the magnetization damping of the FM at room temperature. For non-magnetic metals (NM), \(\alpha_{eff}\) in general increases exponentially with increasing NM thickness and then saturates after a certain NM thickness is reached. This can be described by the conventional spin diffusion theory [49; 50] for FM/NM, where the relaxation of spin current in the NM layer was assumed to be solely a property of the NM material. Later the theory was modified by including the FM/NM interfacial spin-orbit interaction (SOI) in the formulation. But a TI/FM heterostructure does not necessarily fit the equation proposed by Tserkovnyak _et al._, as the topological surface state (TSS) and the bulk can influence the spin transfer mechanism differently than an NM. One can see from fig.3b that for bilayer structures of Py(15nm)/BSTS(t) there is a sudden jump of \(\alpha_{eff}\) from the \(\alpha_{FM}\) value of 0.0074, and then with the thickness variation of the TI layer in the range of 10nm to 37nm, \(\alpha_{eff}\) increases slowly from 0.015 to 0.02. The TI thickness dependence of \(\alpha_{eff}\) for Py(15nm)/BSTS(t) bilayers is almost linear. It indicates an efficient spin-sink nature of the TI bulk with increasing thickness, at least at room temperature [51].
Figure 4: Temperature dependence of the FMR spectral linewidth for different thickness combinations of BSTS/Py bilayer systems and for a bare Py thin film. The solid lines are exponential fits.
Figure 5: (a) Temperature dependence of the resistivity of a BSTS sample of thickness 15nm deposited on a Si(111) substrate; (b) temperature dependence of the enhanced Gilbert damping coefficient, \(\alpha_{eff}\), of the Py(20nm)/BSTS2 bilayer and a bare Py(15nm) film
Figure 6: (a) Temperature dependence of the effective magnetization of Py(20nm)/BSTS2; (b) temperature dependence of the anisotropy field of Py(20nm)/BSTS2
In the final section, we have focused on low-temperature measurements to understand the effect of the topological surface state on magnetization relaxation. We performed a temperature dependent study of the FMR linewidth (FWHM), the enhanced Gilbert damping coefficient (\(\alpha_{eff}\)), the anisotropy field (\(H_{k}\)) and the effective magnetization (\(4\pi M_{eff}\)). For different thickness combinations of Py/BSTS bilayers, we obtained the \(\Delta H\) variation with temperature. It grows exponentially with decreasing temperature, following \(\exp(-T/T_{0})\), as shown in fig.4. The characteristic temperatures \(T_{0}\) are \(74.45\pm 6.89K\), \(52.25\pm 3.61K\) and \(60.76\pm 4.46K\) for Py(15nm)/BSTS3, Py(15nm)/BSTS2 and Py(20nm)/BSTS2 respectively. For the bare Py(15nm) film, we note that there is no significant variation in \(\Delta H\) at low temperatures compared to the TI/FM bilayers [29]. For further understanding, the temperature variation of \(\alpha_{eff}\) has also been studied for Py(20nm)/BSTS2, as shown in fig. 5b. From the enhanced \(\alpha_{eff}\) value for Py(20nm)/BSTS2, the spin pumping effect is very evident even at room temperature. At low temperatures, the enhancement in \(\alpha_{eff}\) is even more pronounced and increases exponentially with decreasing temperature. Comparing the aforementioned low-temperature behaviour of \(\Delta H\) and \(\alpha_{eff}\) with the resistivity vs. temperature data of BSTS2, we can say that the enhancement is most prominent in the lower temperature region where the surface state of the TI starts to dominate. The low temperature behaviour of \(\alpha_{eff}\) and \(\Delta H\) is in line with the model given by Abdulahad _et al._ [54]. The spin chemical potential bias due to spin-pumping of the FM can generate a pure spin current into the bulk of the TI and a spin current along the surface of the TI. The injected spin current subsequently gets converted into a charge current due to the spin-momentum locking property of the surface state of the TI. We attribute the origin of the exponential increase of \(\alpha_{eff}\) and \(\Delta H\) at lower temperature to the spin chemical potential bias induced surface current in the TI. The increased surface current in the TI at lower temperature corresponds to a more rapid relaxation of the magnetization precession of the FM, which is reflected in the exponential increase of the FMR linewidth and damping constant of the ferromagnet.
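The exponential trend in fig.4 can be quantified by fitting \(\Delta H(T)=A\,e^{-T/T_{0}}+\Delta H_{\infty}\); the additive offset \(\Delta H_{\infty}\) for the high-temperature plateau is our assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def dH_of_T(T, A, T0, dH_inf):
    """Empirical low-temperature broadening with characteristic temperature T0."""
    return A * np.exp(-T / T0) + dH_inf

# (A, T0, dH_inf), _ = curve_fit(dH_of_T, T_K, dH_Oe, p0=[100.0, 60.0, 50.0])
```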
To further investigate the effect of the TI surface state on the magnetization of the FM, we studied the temperature variation of \(4\pi M_{eff}\) and \(H_{k}\) for Py(20nm)/BSTS2. In fig.6a, \(4\pi M_{eff}\) increases monotonically as the saturation magnetization increases with lowering temperature, except in the temperature range of 130K to 50K. The anomaly in \(4\pi M_{eff}\) is related to the anisotropy energy of the FM. It is reflected in fig.6b, where we can see a hump-like feature of \(H_{K}\) in the same temperature region as that of \(4\pi M_{eff}\). The temperature variation of \(H_{k}\) is concomitant with the resistivity vs. temperature behavior of the BSTS2 sample. At room temperature, there is an in-plane magnetic anisotropy (\(K_{i}=-0.0011erg/cm^{2}\)) that we have already evaluated from Eq.3. From fig.6b we can see that the IMA increases with lowering temperature until a certain temperature (around 70K) is reached. As the temperature goes below 70K, the in-plane magnetic anisotropy (IMA) becomes weaker. The decrease in IMA can be justified with the argument proposed by Abdulahad _et al._ [54]. Several theoretical as well as experimental predictions confirm the existence of gapless topological surface states even after transition metal deposition on a TI [57; 58]. These surface states can couple with the local moments of the FM layer through exchange interaction without any long-range ferromagnetic order. According to the model given by Abdulahad _et al._, this exchange coupling acts perpendicular to the TI surface. Due to the exchange interaction between the FM layer and the TI surface state, the IMA can decrease at low temperatures where the surface state dominates, and the surface state dominance of the TI can be witnessed in the same temperature region (fig.5a) below which the IMA starts getting weaker.
## Conclusions
In summary, by measuring parameters like the enhanced Gilbert damping coefficient, spin mixing conductance and spin current density, we can conclude that there is significant spin pumping into \(BiSbTe_{1.5}Se_{1.5}\) from \(Ni_{80}Fe_{20}\). However, to understand the surface and/or bulk state contribution of the TI, low temperature measurements were necessary, as the surface state dominates at low temperature, as seen from the resistivity vs. temperature data of BSTS. Our experiment shows an exponential growth of the FWHM and effective damping coefficient with decreasing temperature. The magnetic anisotropy field and effective magnetization show an anomaly in a particular temperature region. Both results are concomitant with the resistivity data of the TI sample, BSTS. From the low temperature data, we can conclude that a combined effect of the spin-pumping induced surface current in the TI and the exchange interaction between the surface state of the TI and the ferromagnetic layer is responsible for such behaviour of the heterostructure. Our results support the model proposed in a previous work by Abdulahad _et al._
## Acknowledgements
The authors sincerely acknowledge the Ministry of Education, Government of India and Science and Engineering Research Board (SERB) (grant no: EMR/2016/007950), and Department of Science and Technology (grant no. DST/ICPS/Quest/2019/22) for financial support. S.P. acknowledges the Department of Science and Technology(DST)-INSPIRE fellowship India, S. A. acknowledges the Ministry of Education of the Government of India, and S.M. acknowledges the Council Of Scientific and Industrial Research(CSIR), India, for research fellowship. The authors would like to thank Dr. Partha Mitra of the Department of Physics, Indian Institute of Science Education and Research Kolkata, for
providing the lab facilities for sample deposition. The authors would like to acknowledge Prof. Anjan Barman of the Department of Physics, SN Bose National Centre for Basic Sciences for allowing thickness measurements using the XRR facility in their institute.
|
2303.08316 | MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection
from Point Cloud Sequences | Point cloud sequences are commonly used to accurately detect 3D objects in
applications such as autonomous driving. Current top-performing multi-frame
detectors mostly follow a Detect-and-Fuse framework, which extracts features
from each frame of the sequence and fuses them to detect the objects in the
current frame. However, this inevitably leads to redundant computation since
adjacent frames are highly correlated. In this paper, we propose an efficient
Motion-guided Sequential Fusion (MSF) method, which exploits the continuity of
object motion to mine useful sequential contexts for object detection in the
current frame. We first generate 3D proposals on the current frame and
propagate them to preceding frames based on the estimated velocities. The
points-of-interest are then pooled from the sequence and encoded as proposal
features. A novel Bidirectional Feature Aggregation (BiFA) module is further
proposed to facilitate the interactions of proposal features across frames.
Besides, we optimize the point cloud pooling by a voxel-based sampling
technique so that millions of points can be processed in several milliseconds.
The proposed MSF method achieves not only better efficiency than other
multi-frame detectors but also leading accuracy, with 83.12% and 78.30% mAP on
the LEVEL1 and LEVEL2 test sets of Waymo Open Dataset, respectively. Codes can
be found at \url{https://github.com/skyhehe123/MSF}. | Chenhang He, Ruihuang Li, Yabin Zhang, Shuai Li, Lei Zhang | 2023-03-15T02:10:27Z | http://arxiv.org/abs/2303.08316v1 | # MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection from Point Cloud Sequences
###### Abstract
Point cloud sequences are commonly used to accurately detect 3D objects in applications such as autonomous driving. Current top-performing multi-frame detectors mostly follow a Detect-and-Fuse framework, which extracts features from each frame of the sequence and fuses them to detect the objects in the current frame. However, this inevitably leads to redundant computation since adjacent frames are highly correlated. In this paper, we propose an efficient Motion-guided Sequential Fusion (MSF) method, which exploits the continuity of object motion to mine useful sequential contexts for object detection in the current frame. We first generate 3D proposals on the current frame and propagate them to preceding frames based on the estimated velocities. The points-of-interest are then pooled from the sequence and encoded as proposal features. A novel Bidirectional Feature Aggregation (BiFA) module is further proposed to facilitate the interactions of proposal features across frames. Besides, we optimize the point cloud pooling by a voxel-based sampling technique so that millions of points can be processed in several milliseconds. The proposed MSF method achieves not only better efficiency than other multi-frame detectors but also leading accuracy, with 83.12% and 78.30% mAP on the LEVEL1 and LEVEL2 test sets of Waymo Open Dataset, respectively. Codes can be found at [https://github.com/skyhehe123/MSF](https://github.com/skyhehe123/MSF).
## 1 Introduction
3D object detection [1, 2, 6, 7, 14, 21, 27, 28, 29] is one of the key technologies in autonomous driving, which helps the vehicle to better understand the surrounding environment and make critical decisions in the downstream tasks. As an indispensable sensing device in autonomous driving systems, LiDAR collects 3D measurements of the scene in the form of point clouds. However, LiDAR can only produce a partial view of the scene at a time, and the sparse and incomplete representation of point clouds brings considerable challenges to the 3D object detection task. In practice, the LiDAR sensor will continuously sense the environment and produce a sequence of point cloud frames over time. The multi-frame data can provide a denser representation of the scene as the vehicle moves. Therefore, how to fuse these multi-frame point cloud data for more accurate object detection is worth deep investigation.
Recent works mainly focus on deep feature fusion with multi-frame point clouds, for example, aggregating dense birds-eye-view features via Transformer models [31, 37], passing the voxel features to LSTM [8] or GRU [32] modules for temporal modeling. Some top-performing detectors [2, 18] focus on fusing proposal features, where a tracker is employed to associate the 3D proposals across frames, and a region-based network is applied to refine the current proposals by incorporating contextual features from the proposal trajectories. These approaches generally follow a "Detect-and-Fuse" framework, as shown in Fig. 1, where the model is required to process each frame of the sequence, and the predictions on the current frame rely on the results of preceding frames. Since online detection is a causal system, such a detection framework might cause significant delay if the network is still processing a preceding frame when the current frame is loaded.

Figure 1: (a) The “Detect-and-Fuse” framework extracts features from each frame of the sequence and then fuses them, while (b) our proposed “Motion-guided Sequential Fusion” (MSF) method generates proposals on the current frame and propagates them to preceding frames to explore useful contexts in the sequence.
In this paper, we propose an efficient Motion-guided Sequential Fusion (MSF) method, as shown in Fig. 1(b), which leverages the continuity of object motion to extract useful contexts from point cloud sequences and improve the detection of current frame. Specifically, considering that the motions of objects are relatively smooth in a short sequence, we propagate the proposals generated on current frame to preceding frames based on the velocities of objects, and sample reliable points-of-interest from the sequence. In this way, we bypass extracting features on each frame of the sequence, which reduces the redundant computation and reliance on the results of preceding frames. The sampled points are then transformed to proposal features via two encoding schemes and passed to a region-based network for further refinement. Specifically, a self-attention module is employed to enhance the interaction of point features within proposals, while a novel Bidirectional Feature Aggregation (BiFA) module is proposed to enforce the information exchange between proposals across frames. The refined proposal features consequently capture both spatial details and long-term dependencies over the sequence, leading to more accurate bounding-box prediction.
It is found that the existing point cloud pooling methods [2, 19, 30, 23] are inefficient, taking more than 40 milliseconds when processing millions of points from sequential point clouds. We find that the major bottleneck lies in the heavy computation of pair-wise distances between \(n\) points and \(m\) proposals, which costs \(\mathcal{O}(nm)\) complexity. To further improve the efficiency, we optimize the point cloud pooling with a voxel sampling technique. The improved pooling operation is of linear complexity and can process millions of points in several milliseconds, more than eight times faster than the original method.
Overall, our contributions can be summarized as follows.
* An efficient Motion-guided Sequential Fusion (MSF) method is proposed to fuse multi-frame point clouds at region level by propagating the proposals of current frame to preceding frames based on the object motions.
* A novel Bidirectional Feature Aggregation (BiFA) module is introduced to facilitate the interactions of proposal features across frames.
* The point cloud pooling method is optimized with a voxel-based sampling technique, significantly reducing the runtime on large-scale point cloud sequence.
The proposed MSF method is validated on the challenging Waymo Open Dataset, and it achieves leading accuracy on the LEVEL1 and LEVEL2 test sets with fast speed.
## 2 Related Work
**Single-frame 3D object detection.** Recent research on single-frame 3D object detection is mainly focused on representation learning on point clouds. Voxel-based detectors [28, 33, 36] rasterize the point cloud into volumetric representation, followed by 3D CNN to extract dense features. Some works convert point clouds into 2D birds-eye-view [9] or range view [4, 11] representations, and process them with more efficient 2D CNN. Following PointNet++ [17], point-based methods [16, 23, 29, 30, 34] directly process point clouds in continuous space, and extract highly-semantic features through a series of downsampling and set abstraction layers. Voxel-point approaches [7, 12, 21] employ a hybrid representation, where the flexible conversion between voxel-based and point-based representations are explored, leading to better balance between efficiency and performance. Our method employs a high-quality voxel-based detector CenterPoint [33] as the proposal generation network to predict 3D proposals of current frame and their motions. We then employ an efficient region-based network to further refine these proposals by mining sequential points from point cloud sequence.
**3D object detection from point cloud sequence.** Multi-frame point clouds provide richer 3D information of the environment. While some single-frame detectors [3, 33] can be adapted to point cloud sequences by simply concatenating multi-frame point clouds as the input, the improvements are typically marginal and the performance can be even worse when encountering moving objects. Fast-and-furious [13] explores an intermediate fusion to align multi-frame point clouds by concatenating the hidden feature maps of the backbone network. However, it still suffers from the misalignment brought by fast-moving objects in long sequences. Recent approaches [8, 32] demonstrate that an in-depth fusion can be achieved with recurrent networks. Unfortunately, the use of a single memory to store and update features across frames builds a potential bottleneck. To resolve such limitations, 3D-MAN [31] first attempts to employ the attention mechanism to align different views of 3D objects and then exploits a memory bank to store and aggregate multi-frame features for long sequences. Recently, Offboard3D [18] and MPPNet [2] substantially improve the detection performance by associating the detected boxes from each frame of the sequence as proposal trajectories, and extracting high-quality proposal features by sampling sequential point clouds on the trajectories.
Our MSF method also samples points from the sequence, but it differs from those methods with proposal trajectories [2, 18] in that we only generate proposals on the current frame and propagate them to explore features in preceding frames. This makes our method much more efficient and favorable to online detection systems.
## 3 Motion-guided Sequential Fusion
This section presents our Motion-guided Sequential Fusion (MSF) approach for efficient 3D object detection on point cloud sequences. The overall architecture of MSF is illustrated in Fig. 2. In Sec. 3.1, we describe the details of motion-guided sequential pooling, which effectively mines reliable sequential points-of-interest based on the proposals of current frame. In Sec. 3.2, we present the region-based network, including the formulation of proposal features and a novel bidirectional feature aggregation module. In Sec. 3.3, we demonstrate a voxel-based sampling technique to accelerate the current point cloud pooling method.
### Motion-guided Sequential Pooling
Current multi-frame detection methods [2, 18] mostly explore proposal trajectories to generate high-quality point cloud representations. However, such a scheme relies on frame-by-frame proposal generation, which is not suitable for online detection systems. We observe that in a point cloud sequence, although objects move at different speeds, their motions are relatively smooth. That is to say, we can estimate their motion displacements and roughly localize their positions in preceding frames. To this end, given a point cloud sequence \(\{I^{t}\}_{t=1}^{T}\), we propose to propagate the proposals generated on the current frame \(I^{T}\) to preceding frames \(\{I^{t}\}_{t=1}^{T-1}\) based on their estimated velocities. Since moving objects may slightly deviate from the estimated positions in the preceding frames, we sample the points-of-interest in a cylindrical region of each proposal and gradually increase the diameter of the region by a factor \(\gamma\) as the proposal propagates. Let's denote a proposal of current frame as \((p_{x},p_{y},p_{z},w,l,h,\theta)\), where \((p_{x},p_{y},p_{z})\) denotes its center location, \(w,l,h\) and \(\theta\) denote its width, length, height and yaw angle, respectively. Suppose that the object has a unit-time velocity \(\vec{v}=(v_{x},v_{y})\). The corresponding points-of-interest \((x^{t},y^{t},z^{t})\) sampled from frame \(t\) will satisfy the following condition:
\[(x^{t}-p_{x}+v_{x}\cdot\Delta t)^{2}+(y^{t}-p_{y}+v_{y}\cdot\Delta t)^{2}<( \frac{d^{t}}{2})^{2}, \tag{1}\]
where \(\Delta t=T-t\) is the time offset of frame \(t\) and \(d^{t}=\sqrt{(w^{2}+l^{2})}\cdot\gamma^{\Delta t+1}\) is the diameter of cylindrical region.
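To make the propagation concrete, the following is a minimal NumPy sketch of the pooling condition in Eq. (1); the function name, array layout, and the per-proposal call structure are illustrative rather than the released implementation.

```python
import numpy as np

def points_in_propagated_region(points_t, proposal, velocity, delta_t, gamma=1.1):
    """Select points-of-interest from frame t for one proposal of the current
    frame, following Eq. (1). `points_t`: (N, 3) xyz coordinates at frame t;
    `proposal`: (px, py, pz, w, l, h, theta); `velocity`: (vx, vy)."""
    px, py, _, w, l, _, _ = proposal
    vx, vy = velocity
    # Back-propagate the proposal center by the estimated unit-time velocity.
    cx = px - vx * delta_t
    cy = py - vy * delta_t
    # Cylindrical region whose diameter grows by a factor gamma per frame.
    diameter = np.sqrt(w ** 2 + l ** 2) * gamma ** (delta_t + 1)
    dist2 = (points_t[:, 0] - cx) ** 2 + (points_t[:, 1] - cy) ** 2
    return points_t[dist2 < (diameter / 2.0) ** 2]
```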
In our preliminary study, we compare the overall recall rates of foreground points between our method and the trajectory-based method [2]. As can be seen in Tab. 1, our method can achieve very close results to the proposal trajectory method on 4-frame sequences. For 8-frame sequences, since it is difficult to estimate the positions of fast-moving objects on distant frames, we use \(\gamma\)=1.1 to recall more points and obtain comparable results to the proposal trajectory method. The recall rates only drop slightly even when 16-frame sequences are used. Interestingly, we find that setting \(\gamma\)=1.1 is not only beneficial for detecting fast-moving objects but also beneficial for improving the performance on slow and stationary objects. We believe this is because the points sampled from regions of different sizes contain multi-level contextual information about the objects, which will be further discussed in Sec. 4.3.

\begin{table}
\begin{tabular}{l|c c c} \hline & _4-frame_ & _8-frame_ & _16-frame_ \\ \hline \hline Trajectory [2] & 93.2\% & 92.8\% & 90.5\% \\ Ours (\(\gamma=1.0\)) & 92.3\% & 87.5\% & 78.3\% \\ Ours (\(\gamma=1.1\)) & 93.5\% & 91.7\% & 87.3\% \\ \hline \end{tabular}
\end{table}
Table 1: Recall rates of foreground points using the per-frame detection based proposal trajectory method [2] and our motion-guided proposal generation method. We employ CenterPoint [33] as the proposal generator and evaluate on the Waymo validation split.

Figure 2: The overall architecture of our proposed Motion-guided Sequential Fusion (MSF) approach. By taking a point cloud sequence as input, MSF employs a region proposal network to generate proposals on the current frame and sample points-of-interest from the sequence by using motion-guided sequential pooling. The sampled points are encoded as high-dimensional proposal features and passed to a region-based network, where three learning blocks are consequently applied to refine the proposal features. A Bidirectional Feature Aggregation (BiFA) module is introduced in the region-based network to facilitate the interactions of proposal features across frames. The red and blue cubes represent single-point features from the current frame and preceding frame, respectively.
### Region-based Network
**Proposal feature encoding.** After sampling \(K\) points in each proposal, we adopt two encoding schemes to generate proposal features. We first follow [19] to calculate the relative offsets between each point-of-interest \(l_{i}^{t}\in I^{t}\) and the nine key-points (eight corner points plus one center point) of the proposal box \(\{b_{j}^{t}\in P^{t}:j=0,...,8\}\). The resulting offsets are then converted to spherical coordinates and transformed, via a Multi-layer Perceptron (MLP), to a geometric embedding \(g^{t}\in\mathbb{R}^{K\times D}\) that encodes the spatial correlation between the sampled points and the proposal box. The encoding scheme can be formulated as:
\[g_{i}^{t}=\text{MLP}(\mathcal{S}(\{l_{i}^{t}-b_{j}^{t}\}_{j=0}^{8})),\text{ for }i=1,...,K, \tag{2}\]
where \(\mathcal{S}:(x,y,z)\rightarrow(r,\theta,\phi)\) denotes the spherical transform where \(r=\sqrt{x^{2}+y^{2}+z^{2}}\), \(\theta=\arcsin(z/r)\) and \(\phi=\arctan(y/x)\) respectively. The second scheme produces a motion embedding \(m^{t}\in\mathbb{R}^{K\times D}\), which encodes the displacements of the points-of-interest at each frame \(I^{t}\) relative to the key-points of the proposal boxes \(b^{0}\) in the first frame. The time offsets \(\Delta t\) are also concatenated, resulting in the motion embedding at frame \(t\) as:
\[m_{i}^{t}=\text{MLP}(\text{Concat}(\{l_{i}^{t}-b_{j}^{0}\}_{j=0}^{8},\Delta t) ),\text{for }i=1,...,K. \tag{3}\]
The proposal feature \(f^{t}\in\mathbb{R}^{K\times D}\) can be formulated as the summation of geometric and motion embeddings:
\[f^{t}=g^{t}+m^{t}. \tag{4}\]
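A compact PyTorch sketch of the two encoding schemes in Eqs. (2)-(4) is given below; the module names, tensor layouts, and the two-layer MLP widths are our assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def spherical(offsets):
    # offsets: (K, 9, 3) relative offsets to the nine key-points -> (r, theta, phi)
    x, y, z = offsets.unbind(-1)
    r = torch.sqrt(x ** 2 + y ** 2 + z ** 2).clamp(min=1e-6)
    theta = torch.asin((z / r).clamp(-1.0, 1.0))
    phi = torch.atan2(y, x)  # robust form of arctan(y/x)
    return torch.stack([r, theta, phi], dim=-1)

class ProposalFeatureEncoder(nn.Module):
    """Geometric embedding (Eq. 2) plus motion embedding (Eq. 3), summed (Eq. 4)."""
    def __init__(self, dim=256):
        super().__init__()
        self.geo_mlp = nn.Sequential(nn.Linear(9 * 3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mot_mlp = nn.Sequential(nn.Linear(9 * 3 + 1, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts_t, keypts_t, keypts_0, delta_t):
        # pts_t: (K, 3) points sampled at frame t; keypts_*: (9, 3) box key-points.
        geo = spherical(pts_t[:, None, :] - keypts_t[None]).flatten(1)  # (K, 27)
        mot = (pts_t[:, None, :] - keypts_0[None]).flatten(1)           # (K, 27)
        t_col = pts_t.new_full((pts_t.shape[0], 1), float(delta_t))
        return self.geo_mlp(geo) + self.mot_mlp(torch.cat([mot, t_col], dim=1))
```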
**Bidirectional feature aggregation.** MSF employs a region-based network to explore _spatial-temporal_ dependencies among proposal features and fuse them into global representations for final bounding-box refinement. As shown in Fig. 2, the region-based network is composed of three learning blocks. Each block consists of a traditional Multi-Head Self-Attention (MHSA) layer, followed by a Feed-Forward Network (FFN) with residual connection, and our proposed Bidirectional Feature Aggregation (BiFA) module. The MHSA layer aims to encode rich _spatial_ relationships and point dependencies in the proposal for refining point features, while the proposed BiFA module aims to encode _temporal_ information by facilitating information exchange between proposals across frames.
Specifically, BiFA involves a forward path and a backward path, representing two ways of information flow along the sequence. Since proposal features have an unordered representation, we leverage the _Max-pool&Repeat_[36] tensor manipulation to obtain a summarized contextual features. In the forward path, aside from the first frame in the sequence, which is concatenated with its own contextual features, each of the other frames is concatenated with the contextual features of its preceding frame along channel dimension. A point-wise convolutional layer is thereby employed to refine the concatenated feature and halve their channels. Given proposal features \(f^{t}\) and \(f^{t-1}\) from the current frame and its preceding frame, the forward output of frame \(t\), denote by \(h_{F}^{t}\), can be obtained by:
\[h_{F}^{t}=\text{Conv}(\text{Concat}(f^{t},\text{Repeat}\circ\text{Max-pool}(f^{t- 1}))). \tag{5}\]
Unfortunately, introducing the forward path only will lead to information imbalance among different frames, _i.e._, the current frame receives information from other frames, whereas the last preceding frame receives no information from the sequence. To overcome this limitation, we augment the backward path, where the features of frame \(t\) are aggregated with the contextual features of frame \(t+1\), resulting in the backward output \(h_{B}^{t}\) as follows:
\[h_{B}^{t}=\text{Conv}(\text{Concat}(h_{F}^{t},\text{Repeat}\circ\text{Max-pool}(h _{F}^{t+1}))). \tag{6}\]
In this way, each frame can simultaneously exchange information with its two adjacent frames from both directions, and the information can be propagated to more distant frames after three learning blocks. The resulting proposal features can capture long-term dependencies over the point cloud sequence. It is worth noting that the parameters of the convolutional layer in the same path can be shared and all the tensor operations can be performed in parallel. This makes our BiFA module lightweight and much more efficient than other multi-frame fusion modules.
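A minimal PyTorch sketch of BiFA (Eqs. (5)-(6)) follows; the point-wise convolution is written as a linear layer shared within each path, frames are assumed ordered from the oldest preceding frame to the current one, and the self-context at the current frame in the backward path is our boundary assumption.

```python
import torch
import torch.nn as nn

class BiFA(nn.Module):
    """Bidirectional feature aggregation over per-frame proposal features."""
    def __init__(self, dim=256):
        super().__init__()
        self.fwd = nn.Linear(2 * dim, dim)  # shared point-wise conv, forward path
        self.bwd = nn.Linear(2 * dim, dim)  # shared point-wise conv, backward path

    @staticmethod
    def pool_repeat(f, k):
        # Max-pool over the K points, then repeat the context to all K points.
        return f.max(dim=0, keepdim=True).values.expand(k, -1)

    def forward(self, feats):  # feats[t]: (K, D), oldest frame first
        k = feats[0].shape[0]
        h_f = []
        for t, f in enumerate(feats):  # Eq. (5): first frame uses its own context
            ctx = self.pool_repeat(feats[t - 1] if t > 0 else f, k)
            h_f.append(self.fwd(torch.cat([f, ctx], dim=-1)))
        h_b = [None] * len(feats)
        for t in reversed(range(len(feats))):  # Eq. (6): aggregate from frame t+1
            nxt = h_f[t + 1] if t + 1 < len(feats) else h_f[t]
            h_b[t] = self.bwd(torch.cat([h_f[t], self.pool_repeat(nxt, k)], dim=-1))
        return h_b
```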
**Outputs and loss function.** Finally, we aggregate the point-wise features of each proposal into a global representation through a query-based transformer decoder layer, where a learnable query feature \(q\in\mathbb{R}^{D}\) attends to each point vector of proposal features \(h^{t}\in\mathbb{R}^{K\times D}\) through a cross-attention module. The decoder layer can be depicted as follows:
\[\hat{e}^{t}=\text{Attention}(q,h^{t},h^{t})+h^{t}, \tag{7}\] \[e^{t}=\text{FFN}(\hat{e}^{t})+\hat{e}^{t}. \tag{8}\]
The decoded outputs across frames \(\{e^{t}\}_{t=0}^{T}\) are concatenated and passed to a group of detection heads for
bounding-box refinement. The overall loss function \(\mathcal{L}_{\mathrm{total}}\) is the summation of the confidence prediction loss \(\mathcal{L}_{\mathrm{conf}}\) and the box regression loss \(\mathcal{L}_{\mathrm{reg}}\) as follows:
\[\mathcal{L}_{\mathrm{total}}=\mathcal{L}_{\mathrm{conf}}+\alpha\mathcal{L}_{ \mathrm{reg}}, \tag{9}\]
where \(\alpha\) is a balancing hyper-parameter. We adopt the same binary cross entropy loss and box regression loss employed in CT3D [19] as our \(\mathcal{L}_{\mathrm{conf}}\) and \(\mathcal{L}_{\mathrm{reg}}\).
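For the decoder of Eqs. (7)-(8), a short PyTorch sketch is shown below; the number of heads is an assumption, and since the residual wiring in Eq. (7) is ambiguous for a single query vector, we place the residual on the attention output.

```python
import torch
import torch.nn as nn

class QueryDecoder(nn.Module):
    """A learnable query cross-attends to the K point vectors of a proposal
    to produce one global feature per frame (Eqs. (7)-(8))."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h_t):                        # h_t: (K, D) proposal feature
        kv = h_t.unsqueeze(0)                      # (1, K, D)
        e_hat, _ = self.attn(self.query, kv, kv)   # (1, 1, D), Eq. (7)
        e_hat = e_hat.reshape(-1)                  # (D,)
        return self.ffn(e_hat) + e_hat             # e^t with residual, Eq. (8)
```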
### Efficient Point Cloud Pooling
In this subsection, we optimize the point cloud pooling method to sample a fixed number of points more efficiently from the cylindrical region of each proposal. As shown in Fig. 3, we perform proposal-based point sampling in two steps, _i.e_., the intra-voxel sampling and the voxel-field sampling. In the first step, the input space is discretized into a voxel grid with voxel size \(v\), and each point corresponds to a voxel coordinate \((\lfloor\frac{x}{v}\rfloor,\lfloor\frac{y}{v}\rfloor)\). Here the voxel partition along the z-axis is omitted considering that the cylindrical regions have unlimited height. Then, up to \(k\) points are kept in each voxel and padding points are inserted if the voxel has fewer than \(k\) points. Due to the high memory consumption required to store all voxels in a dense grid, we follow [14] to store only non-empty voxels in contiguous memory and use a hash table to store the mappings between the coordinates of non-empty voxels and their indices in memory space.
In the second step, we first query an \(n\)-by-\(n\) voxel field for each proposal and calculate the coordinates of the voxels within. Next, we convert the voxel coordinates into hashed keys and look up the table to retrieve the sampled points generated in the first step. The hash table returns "-1" if no key is found, which indicates an empty voxel. The queried points are then drawn from these voxels and stored in an output buffer if two conditions are met: 1) the point is valid, and 2) it falls into the cylindrical region, _i.e_., it satisfies Eq. 1.
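A simplified Python sketch of the two-step pooling is given below; a dict stands in for the hash table, the z-axis is ignored as described, and the function and parameter names are illustrative.

```python
import numpy as np
from collections import defaultdict

def voxel_pool(points, regions, v=0.4, k=32, n=7):
    """`points`: (N, 3) array; `regions`: list of (cx, cy, r2) cylindrical
    regions (center and squared radius), one per propagated proposal."""
    # Step 1: intra-voxel sampling -- keep up to k points per non-empty
    # xy-voxel; the dict plays the role of the hash table over coordinates.
    table = defaultdict(list)
    for p in points:
        key = (int(np.floor(p[0] / v)), int(np.floor(p[1] / v)))
        if len(table[key]) < k:
            table[key].append(p)
    # Step 2: query an n-by-n voxel field per proposal and draw the points
    # that satisfy the cylindrical condition of Eq. (1).
    pooled = []
    for cx, cy, r2 in regions:
        ci, cj = int(np.floor(cx / v)), int(np.floor(cy / v))
        buf = [p for di in range(-(n // 2), n // 2 + 1)
                 for dj in range(-(n // 2), n // 2 + 1)
                 for p in table.get((ci + di, cj + dj), [])  # empty voxel -> []
                 if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 < r2]
        pooled.append(np.asarray(buf))
    return pooled
```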
**Complexity and efficiency.** We perform an in-depth analysis of our optimized pooling method and the previous cylindrical pooling methods [2, 19, 30] in continuous space. Given \(N\) points, \(M\) proposals and the requirement of sampling \(K\) points in each proposal, the original method costs \(\mathcal{O}(NM)+\mathcal{O}(MK)\) complexity due to the calculation of pair-wise distances between points and proposals. In contrast, our optimized version costs \(\mathcal{O}(N)+\mathcal{O}(M)+\mathcal{O}(MK)\) complexity, where the first term is the cost of intra-voxel sampling, the second term is the cost of querying the voxel field for each proposal and the third term is the cost of drawing points from the queried voxels. We evaluate the latency of point cloud pooling on sequences with different lengths. As shown in Tab. 2, our optimized pooling method can achieve an 8\(\times\) speedup over the original pooling method.
## 4 Experiment
### Dataset and Implementation Details
**Dataset.** We evaluate our MSF on Waymo Open Dataset (WOD), which is a large-scale multi-modality dataset for autonomous driving. WOD contains 1,150 sequences, which are divided into 798 training, 202 validation, and 150 testing sequences, respectively. Each sequence is 20s long, captured by a 64-line LiDAR sensor at 10Hz frequency. The evaluation metrics used in WOD are mean average precision (mAP) and mAP weighted by heading accuracy (mAPH). Three object classes, "Vehicle", "Pedestrian" and "Cyclist", are evaluated. Each object class is further categorized into two levels of difficulties, LEVEL1 and LEVEL2. The former refers to objects with more than 5 points and the latter refers to objects with less than 5 but at least 1 point.
**Implementation.** We employ the traditional CenterPoint [28, 33, 36] as our region proposal network (RPN). To include motion information, we concatenate four adjacent frames at the input and add one additional head for predicting the velocity of objects. In our experiments, we first train the RPN using the official settings of OpenPCDet1, and use it to generate proposals on WOD. Based on these proposals, we then train our region-based network for 6 epochs using the ADAM optimizer with an initial learning rate of 0.003 and a batch size of 16. The learning rate is decayed with the One-Cycle policy and the momentum is set between [85%, 95%]. During training, we sample region proposals with IoU\(>\)0.5 and conduct proposal augmentation following PointRCNN [23]. A total of 128 raw LiDAR points are randomly sampled for each proposal. The voxel size \(v\) and the points-per-voxel \(k\) used in voxel-based sampling are set to 0.4 and 32, respectively. The feature dimension of each point in the learning block is set to 256. At the training stage, we use intermediate supervision by adding a loss to the output of each learning block and sum all the intermediate losses to train the model. At the test stage, we only use the bounding boxes and the confidence scores predicted from the last learning block.

\begin{table}
\begin{tabular}{l|c|c|c} \hline & \(N\)=168k & \(N\)=674k & \(N\)=1382k \\ \hline Cylindrical Pooling & 8.2ms & 25.2ms & 40.1ms \\ Our Optimized & 2.3ms & 3.4ms & 5.0ms \\ \hline \end{tabular}
\end{table}
Table 2: The latency of point cloud pooling on 1-frame, 4-frame and 8-frame sequences.

Figure 3: Illustration of our optimized point cloud pooling method. We first perform intra-voxel sampling to keep a fixed number of points in each voxel. Then we query \(n\times n\) voxel fields for each proposal and uniformly draw points from the non-empty voxels within.
Footnote 1: [https://github.com/open-mmlab/OpenPCDet/](https://github.com/open-mmlab/OpenPCDet/)
### Results on WOD
**Waymo Validation Set.** Tab. 3 compares the performance of MSF and the current state-of-the-art methods. As can be seen, multi-frame detectors generally outperform single-frame methods. The best two multi-frame methods by far are MPPNet [2], based on proposal trajectories, and CenterFormer [37], based on a transformer model. Using the same number of frames, our MSF method significantly outperforms CenterFormer by 3.4% APH and 1.2% APH on LEVEL1 and LEVEL2, respectively. This demonstrates that fusing multi-frame features at the region level is more effective than fusion with convolutional features. Compared with MPPNet [2], which is also based on region-level fusion, our method outperforms it in almost all cases, except for the APH of Vehicle LEVEL1. It should be noted that our MSF method only uses 8 frames, while MPPNet uses 16 frames. MSF uses different pooling sizes in different frames, therefore generating multi-level contextual features for each object. Specifically, MSF models with 4 and 8 frames achieve mAPH of 74.62% and 75.13%, respectively, setting new state-of-the-art results. In addition, both CenterFormer and MPPNet extract features on each frame of the sequence and need a memory bank to store the intermediate results of preceding frames, while our method performs proposal generation only on the current frame, which is more practical for online inference.
**Waymo Test Set.** In Tab. 4, we show the results of our 8-frame model by submitting the prediction result to the online server for evaluation. Our method outperforms all the previously published methods. In particular, the improvements on Vehicle and Pedestrian classes are significant, outperforming the best competitor, _i.e_., CenterFormer, by 1.91% APH and 1.89% APH on LEVEL2, respectively.
\begin{table}
\begin{tabular}{c|c c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Frames} & \multicolumn{2}{c|}{ALL (3D mAPH)} & \multicolumn{2}{c|}{Vehicle (AP/APH)} & \multicolumn{2}{c}{Pedestrian (AP/APH)} & \multicolumn{2}{c}{Cyclist (AP/APH)} \\ & & L1 & L2 & L1 & L2 & L1 & L2 & L1 & L2 \\ \hline SECOND [28] & 1 & 63.05 & 57.23 & 72.27/71.69 & 63.85/63.33 & 68.70/58.18 & 60.72/51.31 & 60.62/59.28 & 58.34/57.05 \\ PointPillar [9] & 1 & 63.33 & 57.53 & 71.60/71.00 & 63.10/62.50 & 70.60/56.70 & 62.90/50.20 & 64.40/62.30 & 61.90/59.90 \\ IA-SSD [35] & 1 & 64.48 & 58.08 & 70.53/69.67 & 61.55/60.80 & 69.38/58.47 & 60.30/50.73 & 67.67/65.30 & 64.98/62.71 \\ LiDAR R-CNN [10] & 1 & 66.20 & 60.10 & 73.50/73.00 & 64.70/64.20 & 71.20/58.70 & 63.10/51.70 & 68.60/66.90 & 66.10/64.40 \\ RSN [26] & 1 & - & 75.10/74.06 & 66.00/65.50 & 77.80/72.70 & 68.30/63.70 & 67.81/65.35/63.98 \\ PV-RCNN [21] & 1 & 69.63 & 63.33 & 77.51/76.89 & 68.98/68.41 & 75.01/65.65 & 66.04/57.61 & 67.81/66.35 & 65.39/63.98 \\ Part-A2 [24] & 1 & 70.25 & 63.84 & 77.05/76.51 & 68.47/67.97 & 75.24/66.87 & 66.18/58.62 & 68.60/67.36 & 66.13/64.93 \\ Centerpoint [33] & 1 & - & 65.50 & - & - & -66.20 & - & - & -/67.60 \\ VoTR [14] & 1 & - & - & 74.95/74.25 & 65.91/65.29 & - & - & - & - \\ VoxScr [6] & 1 & 72.24 & 66.22 & 74.50/74.03 & 65.99/65.56 & 80.03/72.42 & 72.45/65.93 & 71.56/70.29 & 68.95/67.73 \\ SST-IT [3] & 1 & - & - & 76.22/75.79 & 68.04/67.64 & 81.39/74.05 & 72.82/65.93 & - & - \\ SWFormer-IT [25] & 1 & - & - & 77.8/77.73 & 69.2/68.8 & 80.97/72.7 & 72.5/64.9 & - & - \\ PillarNet [20] & 1 & 74.60 & 68.43 & 79.09/78.59 & 70.92/70.46 & 80.59/74.01 & 72.28/66.17 & 72.29/71.21 & 69.72/68.67 \\ PV-RCNN++ [22] & 1 & 75.21 & 68.61 & 79.10/78.63 & 70.34/69.91 & 80.62/74.62 & 71.86/66.30 & 73.49/72.38 & 70.70/69.62 \\ \hline
3D-MAN [31] & 16 & - & - & 74.53/74.03 & 67.61/67.14 & 71.76/7.7 & 62.65/59.0 & - & - \\ SST-3f [3] & 3 & - & - & 78.66/78.21 & 69.98/69.57 & 83.81/80.14 & 75.94/72.37 & - & - \\ SWFormer-3f [25] & 3 & - & - & 79.47/8.9 & 11.71/70.6 & 82.97/9.0 & 74.87/1.1 & - & - \\ CenterFormer [37] & 4 & 77.0 & 73.2 & 78.1/77.6 & 73.4/72.9 & 81.77/8.6 & 77.27/4.2 & 75.67/4.8 & 73.47/2.6 \\ CenterFormer [37] & 8 & 77.3 & 73.7 & 78.8/78.33 & 74.73/73.8 & 82.1/79.3 & 78.7/5.0 & 75.27/4.4 & 73.27/2.3 \\ MPPNet [2] & 4 & 79.83 & 74.22 & 81.54/81.06 & 74.07/73.61 & 84.56/81.94 & 77.20/74.67 & 77.15/76.50 & 75.01/74.38 \\ MPPNet [2] & 16 & 80.40 & 74.85 & 82.74/**82.28** & 75.41/74.96 & 84.69/82.25 & 77.43/75.06 & 77.28/76.66 & 75.13/74.52 \\ \hline MSF (ours) & 4 & 80.20 & 74.62 & 81.36/80.87 & 73.81/73.35 & 85.05/82.10 & 77.92/75.11 & 78.40/77.61 & 76.17/75.40 \\ MSF (ours) & 8 & **80.65** & **75.46** & **82.83**/82.01 & **75.76/**75.31** & **85.24**/**82.21** & **78.32/75.61** & **78.52**/**77.74** & **76.32**/**75.47** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison on the validation set of Waymo Open Dataset.
### Ablation Studies
In this section, we conduct an in-depth analysis of MSF by evaluating the effectiveness of each of its individual components. We report the APH metric of our 4-frame model on the WOD validation set. The ablation study results are shown in Tab. 5 to Tab. 8.
**Motion embedding.** As can be seen from the 1st and 2nd rows of Tab. 5, without motion encoding, the performance of our proposed MSF drops by 0.17% on LEVEL1 and 0.12% on LEVEL2. This indicates that the motion information is beneficial for inferring the object's geometry information in point cloud sequences.
**Self-attention.** As can be seen from the 1st and 3rd rows of Tab. 5, without using self-attention for spatial interactions, MSF will suffer from significant performance drop, 3.69% on LEVEL1 and 2.71% on LEVEL2. This indicates that the intra-frame interaction is vital for learning internal geometry information of the proposals.
**Bidirectional feature aggregation.** From the 1st and 4th rows of Tab. 5, we find that enforcing temporal interactions among hidden features of proposals by using the proposed BiFA module brings an improvement of 1.95% on LEVEL1 and 1.51% on LEVEL2, which demonstrates the benefits of exploring long-term dependencies in the point cloud sequence. We also conduct experiments by performing unidirectional feature aggregation with either the forward or the backward path. The results are shown in Tab. 6, from which we see that the unidirectional models obtain lower APH scores than the bidirectional model.
**Query-based decoder layer.** As shown in the last row of Tab. 5, by replacing the final query-based decoder layer with a single max-pooling layer, the performance will drop slightly by 0.66% and 0.54%. This is because the final proposal representation after decoder can be regarded as a weighted sum of all point features, and the decoding weights of different points can intrinsically introduce more dynamics for feature representation.
**Spatial modeling with proxy points and MLP mixer.** We perform an analysis by integrating the proxy points and MLP mixer developed in our best competitor MPPNet [2] into MSF. We first follow MPPNet to generate proxy points for each proposal and apply set abstraction [17] to aggregate the point-wise features to every proxy point. As shown in the 1st and 2nd rows of Tab. 7, using features of proxy points degrades the performance on Vehicle and, more significantly, on the Pedestrian and Cyclist classes. After replacing the self-attention module with the MLP mixer, the performance degradation on the Pedestrian and Cyclist classes still exists, as shown in the 3rd row of Tab. 7. This contradicts the conclusion in MPPNet that using proxy points to formulate proposal features is more effective than using raw points. We believe this is because MPPNet applies per-frame detection, and thus the proposals across frames may have different sizes, while proxy points can provide consistent representations for different frames. In contrast, our method uses propagated proposals with the same size over the sequence, and hence our proposal features are naturally on the same scale. For small objects such as pedestrians and cyclists, the features from raw points can provide more fine-grained details than proxy points.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline Forward & Backward & L1 & L2 \\ \hline \hline ✓ & ✓ & 80.20 & 74.62 \\ ✓ & ✗ & 79.64 (-0.56) & 74.35 (-0.27) \\ ✗ & ✓ & 78.97 (-1.23) & 73.77 (-0.85) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Bidirectional feature aggregation vs. unidirectional feature aggregation.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ALL (3D mAPH)} & \multicolumn{2}{c|}{Vehicle (AP/APH)} & \multicolumn{2}{c|}{Pedestrian (AP/APH)} & \multicolumn{2}{c}{Cyclist (AP/APH)} \\ & L1 & L2 & L1 & L2 & L1 & L2 & L1 & L2 \\ \hline PointPillar [9] & - & - & 68.10 & 60.10 & 68.00/55.50 & 61.40/50.10 & - & - \\ StarNet [15] & - & - & 61.00 & 54.50 & 67.80/59.90 & 61.10/54.00 & - & - \\ M3DETR [5] & 67.1 & 61.9 & 77.77/7.1 & 70.5/70.0 & 68.2/58.5 & 60.6/52.0 & 67.3/65.7 & 65.3/63.8 \\
3D-MAN [31] & - & - & 78.28 & 69.98 & 69.97/65.98 & 63.98/60.26 & - & - \\ PV-RCNN++ [22] & 75.7 & 70.2 & 81.6/81.2 & 73.9/73.5 & 80.4/75.0 & 74.1/69.0 & 71.9/70.8 & 69.3/68.2 \\ CenterPoint [33] & 77.2 & 71.9 & 81.1/80.6 & 73.4/73.0 & 80.5/77.3 & 74.6/71.5 & 74.6/73.7 & 72.2/71.3 \\ RSN [26] & - & - & 80.30 & 71.60 & 78.90/75.60 & 70.70/67.80 & - & - \\ SST-3f [3] & 78.3 & 72.8 & 81.0/80.6 & 73.1/72.7 & 83.3/79.7 & 76.9/73.5 & 75.7/74.6 & 73.2/72.2 \\ MPPNet [2] & 80.59 & 75.67 & 84.27/83.88 & 77.29/76.91 & 84.12/81.52 & 78.44/75.93 & 77.11/76.36 & 74.91/74.18 \\ CenterFormer [37] & 80.91 & 76.29 & 85.36/84.94 & 78.68/78.28 & 85.22/ 82.48 & 80.09/77.42 & 76.21/75.32 & 74.04/73.17 \\ \hline \hline MSF (ours) & **81.74** & **76.96** & **86.07/85.67** & **79.20/78.82** & **85.99/83.10** & **80.61/77.82** & **77.29/76.44** & **75.09/74.25** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison on the test set of Waymo Open Dataset.
\begin{table}
\begin{tabular}{c c c c|c c} \hline \hline ME & SA & BiFA & Decoder & L1 & L2 \\ \hline \hline ✓ & ✓ & ✓ & Q & 80.20 & 74.62 \\ ✗ & ✓ & ✓ & Q & 80.03 (-0.17) & 74.50 (-0.12) \\ ✓ & ✗ & ✓ & Q & 76.51 (-3.69) & 71.91 (-2.71) \\ ✓ & ✓ & ✗ & Q & 78.25 (-1.95) & 73.11 (-1.51) \\ ✓ & ✓ & ✓ & M & 79.54 (-0.66) & 74.08 (-0.54) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation experiments on the WOD validation set. “ME” refers to using Motion Embedding and “SA” refers to using Self-Attention modules. “Q” means using query-based decoder layer to generate global representation of proposal features and “M” means using single max-pooling layer for global feature generation. APH scores on LEVEL1 and LEVEL2 are reported.
**Effects of \(\gamma\).** We evaluate how \(\gamma\) affects the performance on objects with different speeds. As shown in Tab. 8, gradually expanding the proposal region with \(\gamma\)=1.1 brings substantial improvements on moving objects. This demonstrates that our model is able to capture the information of fast-moving objects, even in the distant frames. Interestingly, we also see improvements on slow and stationary objects. This is because the different pooling sizes could provide multi-level contextual information, which is beneficial for detecting the objects. As shown in the 3rd row of Tab. 8, no further improvements can be made by increasing the value of \(\gamma\) to 1.2.
### Runtime Analysis
Fig. 4 illustrates the latency decomposition of different methods. Here the latency is computed as the average runtime over 100 samples randomly drawn from the WOD validation set, measured on a single Nvidia RTX A6000 GPU. Since all the methods employ the same backbone network, we exclude the runtime of the backbone from the latency for better comparison. For MPPNet and CenterFormer, we assume that the features from the preceding frames are already available in the memory bank. As can be seen, MPPNet spends much time in the feature encoding stage because it performs set abstraction [17] to group raw points on the proxy points, which requires manipulating point clouds in continuous space. In comparison, our method takes only 4 milliseconds to encode raw points as proposal features. The overall latencies of MPPNet, CenterFormer and MSF are 99, 80 and 50 milliseconds, respectively. Tab. 9 further decomposes the network latency regarding the spatial and temporal modules in MPPNet and MSF. As can be seen, the self-attention module and BiFA module are much more efficient than the MLP-Mixer and the cross-attention module, respectively. Thanks to our optimized point cloud pooling, MSF achieves higher efficiency even compared with CenterFormer, which performs feature fusion directly on the convolutional features.
## 5 Conclusion
We presented a novel motion-guided sequential fusion method, namely MSF, for 3D object detection from point cloud sequences. Unlike previous multi-frame detectors that performed feature extraction on each frame of the sequence, MSF adopted a proposal propagation strategy to mine points-of-interest based on the proposals generated on the current frame, therefore reducing the redundant computations and relieving the reliance on preceding results. A bidirectional feature aggregation module was proposed to enable cross-frame interaction between proposal features, facilitating MSF to capture long-term dependencies over the sequence. Besides, we optimized the point cloud pooling process, allowing MSF to process large-scale point cloud sequences in milliseconds. The proposed MSF achieved state-of-the-art performance on the Waymo Open Dataset while being more efficient than other multi-frame detectors. In future research, we plan to extend MSF to generate detection priors on the future frames to further reduce the overall computation of multi-frame detection.
\begin{table}
\begin{tabular}{c|c c c c} \hline & Stationary & Slow & Medium & Fast \\ \hline \hline \(\gamma\)=1.0 & 70.32 & 70.30 & 75.60 & 80.32 \\ \(\gamma\)=1.1 & 70.43 & 71.17 & 77.56 & 82.48 \\ \(\gamma\)=1.2 & 70.35 & 71.23 & 77.44 & 82.31 \\ \hline \end{tabular}
\end{table}
Table 8: The APH (L2) scores on objects with different velocities using the 8-frame model. The objects are categorized into stationary (\(<\)0.2m/s), slow (0.2\(\sim\)1m/s), medium (1\(\sim\)6m/s) and fast (\(>\)6m/s) groups.
Figure 4: Comparison of the runtime of different methods.
\begin{table}
\begin{tabular}{c c|c c} \hline \multicolumn{2}{c|}{MPPNet} & \multicolumn{2}{c}{MSF} \\ \hline MLP-Mixer & Cross-Attention & Self-Attention & BiFA \\ 75 ms & 24 ms & 36 ms & 12 ms \\ \hline \end{tabular}
\end{table}
Table 9: The runtime decomposition of MPPNet and MSF. |
2310.10461 | Model Selection of Anomaly Detectors in the Absence of Labeled
Validation Data | Anomaly detection is the task of identifying abnormal samples in large
unlabeled datasets. While the advent of foundation models has produced powerful
zero-shot anomaly detection methods, their deployment in practice is often
hindered by the absence of labeled validation data -- without it, their
detection performance cannot be evaluated reliably. In this work, we propose
SWSA (Selection With Synthetic Anomalies): a general-purpose framework to
select image-based anomaly detectors without labeled validation data. Instead
of collecting labeled validation data, we generate synthetic anomalies without
any training or fine-tuning, using only a small support set of normal images.
Our synthetic anomalies are used to create detection tasks that compose a
validation framework for model selection. In an empirical study, we evaluate
SWSA with three types of synthetic anomalies and on two selection tasks: model
selection of image-based anomaly detectors and prompt selection for CLIP-based
anomaly detection. SWSA often selects models and prompts that match selections
made with a ground-truth validation set, outperforming baseline selection
strategies. | Clement Fung, Chen Qiu, Aodong Li, Maja Rudolph | 2023-10-16T14:42:22Z | http://arxiv.org/abs/2310.10461v3 | # Model Selection of Anomaly Detectors in the Absence of Labeled Validation Data
###### Abstract
Anomaly detection requires detecting abnormal samples in large unlabeled datasets. While progress in deep learning and the advent of foundation models has produced powerful unsupervised anomaly detection methods, their deployment in practice is often hindered by the lack of labeled data--without it, the detection accuracy of an anomaly detector cannot be evaluated reliably. In this work, we propose a general-purpose framework for evaluating image-based anomaly detectors with synthetically generated validation data. Our method assumes access to a small support set of normal images which are processed with a pre-trained diffusion model (our proposed method requires no training or fine-tuning) to produce synthetic anomalies. When mixed with normal samples from the support set, the synthetic anomalies create detection tasks that compose a validation framework for anomaly detection evaluation and model selection. In an extensive empirical study, ranging from natural images to industrial applications, we find that our synthetic validation framework selects the same models and hyper-parameters as selection with a ground-truth validation set. In addition, we find that prompts selected by our method for CLIP-based anomaly detection outperforms all other prompt selection strategies, and leads to the overall best detection accuracy, even on the challenging MVTec-AD dataset.
## 1 Introduction
Anomaly detection, automatically identifying samples that deviate from normal behavior, is an important technique for supporting medical diagnosis (Fernando et al., 2021), safeguarding financial transactions (Ahmed et al., 2016), bolstering cybersecurity (Mirsky et al., 2018; Siadati and Memon, 2017), and ensuring smooth industrial operations (Bergmann et al., 2019). There has been significant progress in data-driven approaches for unsupervised anomaly detection (Deecke et al., 2021; Liznerski et al., 2022; Jeong et al., 2023; Li et al., 2023; Reiss et al., 2021). Before an anomaly detection method can be safely deployed in a new application, one must trust that it performs as well as expected, but performing such an evaluation is often hindered by a major barrier: the absence of labeled validation data. In many applications, validation data is typically absent because anomalies are rare and the large volume of available data is too expensive to label (Gornitz et al., 2013; Trittenbach et al., 2021).
Labeled validation data is also beneficial for zero-shot anomaly detection. With recent developments in foundation models, it has become possible to pre-train a large model on large-scale data from one domain and then to deploy it for a new anomaly detection task. Specifically, CLIP-based anomaly detection approaches (Jeong et al., 2023; Liznerski et al., 2022; Zhou et al., 2021; Esmaeilpour et al., 2022) have shown great performance in a variety of domains. While these approaches provide the exciting possibility to construct new anomaly detectors for new applications on demand, they could be more readily deployed in real-world applications if labeled validation data could aid with prompt selection and model evaluation.
In this work, we study the efficacy of synthetically generated anomalies for model selection of image-based anomaly detectors. Given a new image-based anomaly detection task, such as detecting faulty objects in a manufacturing plant, we assume access to a small set of normal samples (Zhao et al., 2021). Our approach leverages diffusion models (Ho et al., 2020; Song et al., 2021; Jeong et al., 2023) to generate synthetic anomalies from this small set of examples. The synthetic anomalies are then mixed with the normal examples to provide a synthetic validation set. In extensive experiments, ranging from natural images to industrial applications, we show that the performance on anomaly detection benchmarks with synthetically generated validation sets often matches results on a validation set with ground truth labels.
Importantly, we develop a method for anomaly generation that does not require training or fine-tuning a new diffusion model for the specialized anomaly detection task. Instead, we use the available normal samples and a diffusion model pre-trained on ImageNet (Deng et al., 2009; Dhariwal and Nichol, 2021) to interpolate between those normal samples (Jeong et al., 2023). We find that even for domains that are far from ImageNet, such as the industrial images in the MVTec-AD dataset (Bergmann et al., 2019), this scheme generates synthetic anomalies with realistic backgrounds and consistent visual patterns.
Our work makes the following contributions:
* In Sec. 3.1, we present a framework for selecting image-based anomaly detection models based on synthetically generated anomalies. Figure 1 shows the outline of our approach.
* We propose a practical technique for generating synthetic anomalies that uses a general-purpose pre-trained diffusion model--without any fine-tuning or auxiliary datasets, described in Sec. 3.2.
* We empirically evaluate our method on a wide range of anomaly detection tasks and demonstrate its success on two use-cases: model selection amongst candidate anomaly detectors (Sec. 4.2) and prompt selection for anomaly detection with CLIP (Sec. 4.3).
## 2 Related Work
**Unsupervised anomaly detection.** Recent advances in anomaly detection models include autoencoder methods (Chen and Konukoglu, 2018; Principi et al., 2017), deep one-class classification (Ruff et al., 2018; Ruff et al., 2019), and self-supervised learning-based methods (Bergman and Hoshen, 2020; Hendrycks et al., 2019; Qiu et al., 2021; Qiu et al., 2022; Schneider et al., 2022). While these methods are unsupervised, their architectures and training frameworks involve various hyperparameters, which can have a strong impact on detection accuracy (Campos et al., 2016; Goldstein and Uchida, 2016; Han et al., 2022). While Ding et al. (2022) propose to circumvent model selection by building an ensemble of candidate methods, model selection typically requires labeled validation data.
In outlier exposure (Hendrycks et al., 2019), which has been extended to anomaly detection with contaminated data (Qiu et al., 2022) and to meta-learning (Li et al., 2023), further improvements in detection accuracy come from using samples from an auxiliary dataset, usually Tiny-ImageNet. Although these auxiliary samples provide valuable additional training signal, they are too dissimilar from normal samples and can be easily detected; therefore, they are not useful for model evaluation or selection. Work in semi-supervised anomaly detection (Gornitz et al., 2013; Das et al., 2016; Trittenbach et al., 2021; Li et al., 2023) assumes access to a training set with some labeled anomalies but obtaining such data is unrealistic in most real-world applications.
**Anomaly detection with foundation models.** Nowadays, large models can be pre-trained on massive datasets to learn rich semantic image features which have proven useful for anomaly detectors in other vision domains--little to no additional training required. Large vision models such as vision transformers (ViT, Dosovitskiy et al. (2021)) or residual networks (ResNet, He et al. (2016)) pre-trained on ImageNet (Deng et al., 2009) can often be used as anomaly detectors in other vision domains, either without fine-tuning or by fine-tuning an outlier exposure objective (Deecke et al., 2021; Fort et al., 2021; Mirzaei et al., 2023; Reiss et al., 2021).
Another powerful class of foundation models are vision-language models like CLIP (Radford et al., 2021). Esmaeilpour et al. (2022) use a support set of normal samples to learn a text description generator for CLIP-based out-of-distribution detection. With hand-crafted text prompts CLIP can be
employed on a new anomaly detection (Liznerski et al., 2022) or anomaly segmentation task (Jeong et al., 2023; Zhou et al., 2021) without any training, i.e. in a zero-shot manner, which means no training data for the new task is required. However, detection performance depends on several key hyperparameters, especially the choice of prompts (Liznerski et al., 2022; Jeong et al., 2023b), which implies that labeled validation data for prompt selection would be highly beneficial. However, in the age of foundation models, when task-specific instances of foundation models require little to no data, it does not make sense to assume access to labeled validation data.
**Evaluation of tabular and time-series anomaly detection.** Nguyen et al. (2016); Marques et al. (2015, 2020) propose unsupervised model selection strategies with so-called "internal metrics" that can be computed from predicted anomaly scores on unlabeled data. (See Ma et al. (2023) for a review.) Meta-training can offer another set of approaches for unsupervised anomaly detector selection (Schubert et al., 2023; Zhao et al., 2021, 2022). However, since meta-learning requires a large number of related labeled datasets, its application has been limited to tabular data, and prior work on internal metrics has been applied to tabular anomaly detection only (Nguyen et al., 2016; Marques et al., 2015, 2020; Ma et al., 2023). In contrast, we exploit recent advances in diffusion models to enable anomaly detection model selection without labeled validation data in the vision domain. In time-series anomaly detection, model evaluation is also known to be biased and unreliable due to poorly labeled test data (Wu and Keogh, 2020; Wagner et al., 2023). Here, hand-crafted anomalies can be useful (Lai et al., 2021). But for the evaluation of vision-based anomaly detectors, synthetic anomalies need to go beyond pixel-wise manipulation and are instead required to reflect abnormal patterns in a more complex, and often semantic, level of abstraction.
**Generating images with guidance.** Diffusion models (Ho et al., 2020; Song et al., 2021) have recently been found to outperform other generative models (Dhariwal and Nichol, 2021). Although generative models are traditionally used to generate in-distribution data, a variety of prior work has proposed techniques that guide generative models to generate new distributions. Specifically, text-guided generation (Kim et al., 2022; Kwon et al., 2023; Kawar et al., 2023; Mokady et al., 2023; Tumanyan et al., 2023) with CLIP (Radford et al., 2021) guided the diffusion-based image generation process with text prompts to generate samples from a distribution of interest, e.g. "cat with glasses". Luo et al. (2023) use CLIP embeddings of the labels of different classes to guide StyleGAN (Karras et al., 2019, 2020) to generate images for the evaluation of image classifiers. Similarly, Jain et al. (2023) use CLIP for model diagnosis. These approaches require the target labels (e.g., gender, glasses, etc.) as inputs, but the nature of anomalies is typically unknown. Instead, we rely on DiffStyle (Jeong et al., 2023a), which provides training-free guidance for image generation by interpolating two images using the reverse DDIM process (Song et al., 2021). We find that interpolating between two normal samples will typically preserve the dominant visual features, such as realistic textures and background, while introducing slight manipulations that make the generated images promising candidates for synthetic anomalies in our proposed model selection framework.
Figure 1: We propose to use a pretrained diffusion model to turn a small dataset of normal samples into a synthetic validation set containing both real normal samples and synthetic anomalies, described in Sec. 3.2. The synthetic validation set is then used for model selection and validation, as described in Sec. 3.1. Components in blue are frozen, components in green are real data, and components in orange are our techniques.
## 3 Method
In this section, we propose to use existing diffusion-based image generation techniques to generate synthetic anomalies and use them as a _synthetic validation dataset_, enabling model selection in domains where such a validation dataset does not exist. In Sec. 3.1, we describe how synthetic anomalies can be used for model selection. We then propose a synthetic anomaly generation approach in Sec. 3.2. Figure 1 demonstrates the overall process used by our method.
### Model Selection with Synthetic Anomalies
The absence of labeled validation data is a major roadblock in the deployment of anomaly detection methods. However, normal data can usually be obtained. For this reason, we follow Zhao et al. (2021), and assume access to a set of normal samples we call the _support set_\(X_{\text{support}}\). In our empirical study, we show that the support set can have as few as 10 normal samples. Using this support set, we wish to construct a synthetic validation set that can aid model selection.
Creating a synthetic validation set and using it for model selection entails the following steps (a compact sketch follows Step 4):
**Step 1: Partitioning the support set.** The support set \(X_{\text{support}}\), is randomly partitioned into style images \(X_{\text{style}}\), content images \(X_{\text{content}}\), and normal validation images \(X_{\text{in}}\). \(X_{\text{style}}\) and \(X_{\text{content}}\) are used for anomaly generation, and \(X_{\text{in}}\) is held out for evaluation.
**Step 2: Generating synthetic anomalies.** The style and content images are processed with DiffStyle (Jeong et al., 2023) to produce synthetic anomalies \(\tilde{X}_{\text{out}}\). Details will be given in Sec. 3.2.
**Step 3: Mixing the synthetic validation set.** The normal validation images \(X_{\text{in}}\) are combined with the synthetic anomalies \(\tilde{X}_{\text{out}}\) to produce a labeled synthetic validation set,
\[\mathcal{D}=\{(x,1)|x\in\tilde{X}_{\text{out}}\}\cup\{(x,0)|x\in X_{\text{in }}\}, \tag{1}\]
where the label 1 indicates an anomaly and 0 a normal image.
**Step 4: Evaluating detection accuracy of candidate models.** In a final step, candidate models are evaluated in terms of their detection accuracy on the synthetic validation set \(\mathcal{D}\). Anomaly detection methods are typically evaluated in terms of AUROC, the area under the receiver operating characteristic curve (Emmott et al., 2015; Maxion and Tan, 2000).
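The four steps can be condensed into a few lines; the sketch below assumes a `generate_anomalies` callable wrapping the DiffStyle procedure of Sec. 3.2 and candidate detectors exposed as anomaly-score functions (all names are illustrative).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def select_model(x_support, candidates, generate_anomalies, seed=0):
    """`candidates`: dict mapping model names to score functions, where a
    higher score means more anomalous. Returns the selected model name and
    all validation AUROCs."""
    idx = np.random.default_rng(seed).permutation(len(x_support))
    third = len(x_support) // 3
    x_style = [x_support[i] for i in idx[:third]]            # Step 1: partition
    x_content = [x_support[i] for i in idx[third:2 * third]]
    x_in = [x_support[i] for i in idx[2 * third:]]
    x_out = generate_anomalies(x_style, x_content)           # Step 2: synthesize
    data = x_in + list(x_out)                                # Step 3: mix, Eq. (1)
    labels = np.array([0] * len(x_in) + [1] * len(x_out))
    aurocs = {name: roc_auc_score(labels, [score(x) for x in data])
              for name, score in candidates.items()}         # Step 4: evaluate
    return max(aurocs, key=aurocs.get), aurocs
```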
Since we assume a small support set, training or fine-tuning a generative (diffusion) model is infeasible. Instead, in Sec. 3.2, we propose to use a DDIM (Song et al., 2021) pre-trained on ImageNet and DiffStyle, a training-free method for diffusion-based image-to-image style transfer, and adapt it for generating synthetic anomalies. Our method does not perform any model training or fine-tuning and does not require any data outside of the support set.
### Generating Synthetic Anomalies
We use DiffStyle (Jeong et al., 2023a) to generate synthetic anomalies with a pretrained DDIM. Given style images \(X_{\text{style}}\) and content images \(X_{\text{content}}\), DiffStyle takes any style-content image pair as input--a "style image" \(I^{(1)}\) and a "content image" \(I^{(2)}\)--and produces a new, generated image with \(I^{(2)}\)'s content and \(I^{(1)}\)'s style. To achieve this, \(I^{(1)}\) and \(I^{(2)}\) are mapped into the diffusion model's latent space through the forward diffusion process, producing latent vectors \(x_{T}^{(1)}\) and \(x_{T}^{(2)}\). We refer to the h-space (i.e., the inner-most layer of the UNet) of \(x_{T}^{(1)}\) and \(x_{T}^{(2)}\) as \(h^{(1)}\) and \(h^{(2)}\) respectively. The h-space has been shown to be a meaningful semantic space in the image domain, enabling properties such as linearity and composition between images and can be manipulated during a diffusion model's image generation process to achieve desired properties. We refer readers to Kwon et al. (2023) and Jeong et al. (2023a) for more details on the h-space.
Given two latent vectors \(h^{(1)}\) and \(h^{(2)}\), a simple linear interpolation is performed to style-transfer between two images: \(h^{(\text{gen})}=(1-\gamma)h^{(1)}+\gamma h^{(2)}\) where \(\gamma\) represents the relative strength of the content image during style transfer1. We use \(\gamma=0.7\) as the default in our experiments. We then
perform the asymmetric reverse diffusion process starting from \(x_{T}^{(1)}\), replacing the h-space with \(h^{(\text{gen})}\):
\[x_{t-1}=\sqrt{\alpha_{t-1}}\mathbf{P}_{t}(\epsilon_{t}^{\theta}(x_{t}^{(1)}|h^{(\text{gen})}))+\mathbf{D}_{t}(\epsilon_{t}^{\theta}(x_{t}^{(1)})). \tag{2}\]
Once the reverse process is completed (i.e., at \(t=0\)), the final output \(x_{0}\) is saved as a synthetic anomaly. To generate the full set of synthetic anomalies \(\tilde{X}_{\text{out}}\), we apply all possibilities of \((I^{(1)},I^{(2)})\) in the cross product of \(X_{\text{style}}\) and \(X_{\text{content}}\). Figure 2 shows examples of anomalies generated with this approach; we find that our generated images have realistic backgrounds and textures, but can contain various semantic corruptions expected of anomalous images. Our synthetic anomaly generation method assumes _no knowledge of the distribution of potential anomalies_: the general-purpose diffusion model is not fine-tuned and only images from the support set \(X_{\text{support}}\) are used as inputs.
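To make the procedure concrete, here is a schematic sketch; `ddim.invert` (forward DDIM to \(x_T\) plus the h-space features) and `ddim.reverse_step` (one asymmetric reverse step, Eq. (2)) are assumed wrapper methods, not a real library API.

```python
import torch

def generate_synthetic_anomaly(ddim, img_style, img_content, gamma=0.7, T=1000):
    """Interpolate two normal images in h-space and decode the result."""
    x_T1, h1 = ddim.invert(img_style)      # latent + h-space of style image
    _, h2 = ddim.invert(img_content)       # h-space of content image
    h_gen = (1.0 - gamma) * h1 + gamma * h2    # linear interpolation in h-space
    x = x_T1
    for t in reversed(range(T)):
        # Eq. (2): the P_t branch sees h_gen, the D_t branch does not.
        x = ddim.reverse_step(x, t, h=h_gen)
    return x  # x_0, saved as a synthetic anomaly
```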
## 4 Empirical Study
To study the efficacy of synthetic anomalies for model selection, we develop an anomaly detection validation benchmark and investigate whether our synthetic validation delivers similar results to ground-truth validation sets. Our evaluation spans various vision domains, including natural and industrial images. We find that our method selects the true best-performing model in five of six cases (Sec. 4.2) and outperforms all other strategies for CLIP prompt selection (Sec. 4.3); these results are achieved without any access to the real validation data. We first describe the datasets, anomaly detection tasks, and anomaly generation details in Sec. 4.1. Next, we showcase two use cases of synthetic data in anomaly detection validation, model selection and CLIP prompt selection, in Sec. 4.2 and Sec. 4.3, respectively.
### Experimental Setup
We present an experimental setup that can be used as a benchmark to evaluate synthetic validation data. The goal of this benchmark is to evaluate how well the results on the synthetic validation data correspond to the results one would obtain with ground-truth validation data. Our benchmark contains anomaly detectors of varying strength and a battery of anomaly detection tasks. The tasks vary in difficulty from the easier one-vs-rest to the more difficult one-vs-one anomaly detection setting and span three different vision domains. Quantities of interest include the absolute detection performance predicted from a validation set, the relative ranking of the detection methods it induces, and the hyper-parameters (such as prompts for CLIP-based anomaly detection) it selects.
Figure 2: Example pairwise interpolations for CUB class 1 ("Black Footed Albatross") (left) and MVTec-AD product "cable" (right). For each example, the top row of images (in green) are used as source "style" images, and the left column of images (in cyan) are used as source "content" images. The inner grid (in red) shows each pairwise interpolation between the source style and content images, performed with our modified DiffStyle process. All source images are drawn from the distribution of class 1 support images; no validation data or images from other classes are used.
Datasets. Our benchmark spans three frequently-used image datasets: MVTec Anomaly Detection dataset (MVTec-AD) (Bergmann et al., 2019), Caltech-UCSD Birds (CUB) (Wah et al., 2011), and Flowers (Nilsback and Zisserman, 2008). MVTec-AD contains 15 real industrial product categories; for each category, the training subset contains images of defect-free products, and the testing subset contains labeled images of both good and defective products. CUB and Flowers are multi-class datasets containing 200 bird species and 102 flower species respectively.
Anomaly detection tasks. Our benchmark contains 317 anomaly detection tasks: 15 from MVTec-AD, 200 from CUB, and 102 from Flowers, where each class or category is iteratively treated as normal. Besides considering the usual one-vs-rest anomaly detection setup for the multi-class datasets CUB and Flowers, we also adopt the one-vs-one setup used in Mirzaei et al. (2023) to simulate more difficult anomaly detection tasks. Specifically, we individually select each class as the inlier class and treat all other images as anomalies, both when considering each out-class individually (one-vs-one) and when considering all out-classes as a single class (one-vs-rest). For MVTec-AD, we distinguish images of defective products from good products for each product; each product contains multiple defect types, which can be used as different out-classes for one-vs-one measurements or represented as a single class for one-vs-rest. For all tasks, images from the in-class training subset are used as the support set, and images from the relevant in-class and out-class testing subsets are used for constructing the ground-truth validation set. In summary, for each task, we report three variants of the resulting AUROC:
* One-Vs-Rest: the AUROC when considering all other classes as a single anomaly class
* One-Vs-One Average: the average AUROC when considering each out-class individually
* One-Vs-One Closest: the worst AUROC when considering each out-class individually
Generating synthetic anomalies. For all three datasets and all 317 anomaly detection tasks, we generate synthetic anomalies with training examples from the in-class distribution only. For the CUB and MVTec-AD datasets, we sample 10 images for \(X_{\text{style}}\) and 10 images for \(X_{\text{content}}\) from the training set, generating 100 synthetic anomalies. For the Flowers dataset, only 10 images are included in the training set for each class, so we generate 25 synthetic anomalies, using five images for \(X_{\text{style}}\) and five images for \(X_{\text{content}}\). Figure 2 shows 15 examples of generated synthetic anomalies for a single CUB class (left) and a single MVTec-AD product (right).
Although prior results in Kwon et al. (2023) and Kim et al. (2022) suggest that using a diffusion model trained on the same dataset is required to generate high-quality images, we find that using a single, common diffusion model is sufficient to generate synthetic anomalies. We use the same pre-trained model (the 256x256 diffusion model trained on ImageNet without class conditioning from Dhariwal and Nichol (2021)) for all datasets and anomaly detection tasks.
### Model Selection on Synthetic Data
We first demonstrate the effectiveness of our synthetic validation framework for model selection. Given a set of candidate models, we show that one can select a suitable anomaly detection model with the help of our synthetic anomalies generated with DiffStyle.2
Footnote 2: We compare with using Tiny-ImageNet as synthetic anomalies for model selection in Appendix A.1. We found that Tiny-ImageNet images do not serve as effective anomalies for model selection.
Candidate anomaly detection models. We experiment across five pre-trained ResNet models (ResNet-152, ResNet-101, ResNet-50, ResNet-34, ResNet-18) and five pre-trained Vision Transformers (ViT-H-14, ViT-L-32, ViT-L-16, ViT-B-32, ViT-B-16). We use the ImageNet pre-trained model weights of Dosovitskiy et al. (2021) and He et al. (2016). To perform anomaly detection, we follow the methodology of Mirzaei et al. (2023). First, a small set of examples is used to establish a feature bank. Then, to perform detection, each input example is compared to the feature bank: the total Euclidean distance to the 3-nearest neighbors is used as the anomaly score.
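A minimal sketch of this anomaly scoring rule, assuming image features have already been extracted into NumPy arrays:

```python
import numpy as np

def knn_anomaly_scores(bank, queries, k=3):
    # bank: (n_bank, d) feature bank built from the support set.
    # queries: (n_query, d) features of the inputs to score.
    # Score = total Euclidean distance to the k nearest feature-bank entries.
    dists = np.linalg.norm(queries[:, None, :] - bank[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, :k].sum(axis=1)
```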
Evaluation setup. For each task, we vary the number of synthetic anomalies used in the synthetic validation set, from the full set of 100 synthetic anomalies to as few as three synthetic anomalies.
When reducing the number of synthetic anomalies, we always eliminate the easiest anomalies with the highest anomaly score at inference time until the desired number of synthetic anomalies remains. We compute the AUROC for each task and candidate model. For each dataset, the AUROCs from each task are averaged to be the overall "synthetic AUROC". We then repeat this protocol with the real validation dataset, again averaging over anomaly detection tasks to compute the "real AUROC". We finally compare the real and synthetic AUROCs to investigate if the candidate model rankings produced by these benchmarks are similar.
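To illustrate the protocol, the sketch below computes an AUROC on a synthetic validation set while keeping only the `n_keep` hardest synthetic anomalies, i.e., eliminating the easiest ones with the highest anomaly scores first; variable names are our own.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def synthetic_auroc(inlier_scores, synthetic_scores, n_keep):
    # Keep the n_keep hardest synthetic anomalies (lowest anomaly scores).
    kept = np.sort(synthetic_scores)[:n_keep]
    labels = np.concatenate([np.zeros(len(inlier_scores)), np.ones(len(kept))])
    scores = np.concatenate([inlier_scores, kept])
    return roc_auc_score(labels, scores)
```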
Evaluation results. Figure 3 shows the results for the CUB (left) and Flowers (right) datasets. An ideal method would produce model rankings with synthetic data (along the y-axis) that closely match the rankings of models with real data (along the x-axis). As expected, we find that our synthetic validation best approximates the one-vs-one average and one-vs-rest benchmarks, as they are more robust to variations between classes. Figures 3(c)-3(f) show that the rankings of models are consistent: the best-performing model is the same whether real or synthetic validation data is used, and the relative rankings of models are similar. We find that the one-vs-one closest benchmark, an approximation of worst-case performance, is more difficult to estimate with synthetic anomalies. Figures 3(a) and 3(b)
Figure 3: Comparing the real validation AUROC against the AUROC with our synthetic validation set, for the CUB (left) and Flowers (right) datasets and for all three benchmarks: one-vs-one closest (top), one-vs-one average (middle), and one-vs-rest (bottom). When comparing model rankings with real validation AUROC and model rankings based on synthetic AUROC, we find that our synthetic anomalies are effective estimates for the one-vs-one average and one-vs-rest benchmarks. In these four cases, the best-performing model is the same whether real or synthetic validation data is used.
show that, although the real and synthetic AUROCs are relatively aligned for the Flowers dataset, the performance is less consistent for CUB.
When performing model selection on the MVTec-AD dataset, we found that our results were noisy when approximating the real AUROC and selecting the true best models. The results for this experiment are shown in Appendix A.1 and Figure 5. Since our method relies on ImageNet-trained artifacts (the ViT and ResNet models, CLIP, and our pre-trained diffusion model were all trained with ImageNet), we expect model selection to improve with foundation models that are more representative of the close-up images of industrial products found in the MVTec-AD dataset. However, as shown in the next section, our synthetic validation is still highly effective for selecting CLIP prompts for all datasets, including MVTec-AD.
### CLIP Prompt Selection on Synthetic Data
Zero-shot anomaly detection methods (Jeong et al., 2023b; Liznerski et al., 2022) save the effort of collecting training samples. However, the performance of CLIP-based anomaly detection models depends on the choice of prompts. Prior works evaluate a variety of candidate CLIP prompts for zero-shot image anomaly detection on real labeled validation data; however, validation data may also be absent in the zero-shot setting. Next, we evaluate the efficacy of our generated synthetic anomalies for selecting CLIP prompts on our 317 anomaly detection benchmark tasks.
Zero-shot anomaly detection with CLIP. We perform zero-shot image anomaly detection with CLIP (with the ViT-B/16 backbone) as suggested by Liznerski et al. (2022) and Jeong et al. (2023b): given an input, we submit two text prompts and predict the class with the higher similarity. We assume that the name of the inlier class is known, and use "some" or "something" for anomalous classes. For example, for the CUB dataset, if "red cardinal" is the name of the inlier class, we compare the CLIP similarities of "a photo of a red cardinal" and "a photo of some bird". We select amongst a pool of prompt templates from Jeong et al. (2023b) for our anomaly detection tasks. A full list of candidate prompts used for each dataset can be found in Appendix A.4.
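A sketch of this two-prompt scoring scheme using the OpenAI CLIP package is shown below; the score is simply the similarity margin in favor of the anomaly prompt, which is one reasonable reading of the comparison described above.

```python
import clip
import torch

model, preprocess = clip.load("ViT-B/16")

@torch.no_grad()
def clip_anomaly_score(image, inlier_prompt, anomaly_prompt):
    # image: a PIL image. Returns a score that is higher for likely anomalies.
    image_input = preprocess(image).unsqueeze(0)
    text_input = clip.tokenize([inlier_prompt, anomaly_prompt])
    image_feat = model.encode_image(image_input)
    text_feat = model.encode_text(text_input)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    sim = (image_feat @ text_feat.T).squeeze(0)
    return (sim[1] - sim[0]).item()  # anomaly similarity minus inlier similarity

# e.g., clip_anomaly_score(img, "a photo of a red cardinal", "a photo of some bird")
```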
Evaluation setup. We consider a set of ten candidate prompt templates shown in Appendix A.4. We evaluate each candidate prompt for the one-vs-one closest and one-vs-one average task variants of the 317 anomaly detection tasks and select the prompt with the best AUROC computed on three
| Benchmark | Method | Flowers (102 tasks) | CUB (200 tasks) | MVTec-AD (15 tasks) |
| --- | --- | --- | --- | --- |
| One-Vs-One Closest | Random Choice | 10.0% | 10.0% | 10.0% |
| | Default Prompt | 0.0% (1) | 2.5% (5) | 6.7% (1) |
| | Tiny-ImageNet Choice | 17.6% (18) | 17.0% (34) | 6.7% (1) |
| | Global Best Prompt (Oracle) | 41.2% (42) | 18.5% (37) | 26.7% (4) |
| | Our Method | **37.2% (38)** | **23.0% (46)** | **40.0% (6)** |
| One-Vs-One Average | Random Choice | 10.0% | 10.0% | 10.0% |
| | Default Prompt | 0.0% (1) | 0.0% (0) | 6.7% (1) |
| | Tiny-ImageNet Choice | 17.6% (18) | 13.5% (27) | 6.7% (1) |
| | Global Best Prompt (Oracle) | 44.1% (45) | 32.5% (65) | 40.0% (6) |
| | Our Method | **40.2% (41)** | **32.0% (64)** | **46.7% (7)** |

Table 1: Synthetic outliers can effectively select CLIP prompts in the absence of real validation data. We compare five methods for selecting CLIP prompts: (i) our method (described in Sec. 4.3), (ii) a random choice of candidate prompt, (iii) always choosing the default prompt, (iv) our method with Tiny-ImageNet as the synthetic anomalies, and (v) the single, global best candidate prompt an oracle would provide (an oracle result that cannot be obtained in practice). For each method, we record the tasks where the selected prompt matches the best task-specific prompt, reporting the accuracy and raw count (in parentheses) for the one-vs-one closest and one-vs-one average benchmarks. Across three datasets and two benchmarks, our method outperforms all other strategies, and even outperforms the global best prompt in three out of six cases.
validation sets: our proposed synthetic validation set, an alternative validation set with anomalies sampled from Tiny-ImageNet, and the ground-truth validation dataset. We measure how often each strategy's selected prompt matches the prompt selected with real validation data. For comparison, we also include two baseline strategies: Random Choice, which randomly selects a candidate prompt, and Default Prompt, which always selects the default prompt template (e.g., "a photo of a [class name] bird" vs "a photo of some bird"). For reference, as an oracle result, we also report how often a global best prompt matches the task-specific best prompts.
Evaluation results. We report the prompt selection accuracy in Table 1 for all three datasets. We also report the averaged AUROCs for the different prompt-selection strategies in Table 2. In Table 1, compared to all baselines, our method performs best for all tasks and improves prompt selection accuracy by 6% to 40%. This suggests that our generated synthetic anomalies are helpful in selecting the best prompts for tuning CLIP-based anomaly detectors on various tasks. Furthermore, our method outperforms the single-best-prompt strategy in three out of six settings (two task variants in three datasets, shown in Table 1). In these cases, the prompt selection accuracy difference ranges from -4% to +16%. We emphasize that selecting the single best-performing prompt relies on access to the real validation data, whereas our method only uses synthetic anomalies generated from in-class training sources.
As shown in Table 2, by selecting better prompts with our generated synthetic anomalies, we improve the zero-shot anomaly detection results in the challenging one-vs-one closest setting over the popular default choice of prompt, i.e., "a photo of a [class name]" by 3.4% on Flowers, 2% on CUB, and 2.7% on MVTec-AD. Our proposed prompt selection also provides a consistent performance improvement over the averaged results of various prompts and the results of prompts selected with Tiny-ImageNet, showcasing the general effectiveness of our method.
## 5 Conclusion
In this work, we investigate an approach for anomaly detection model selection and validation _without any validation data_. To substitute for a validation dataset, we propose that a general-purpose diffusion model can be used to generate synthetic outliers, using only a small support set of in-class examples as input. Unlike prior methods for synthetic anomaly generation, we do not rely on any model training, fine-tuning, or any domain-specific architectures or techniques. Our empirical study
| Benchmark | Method | Flowers (102 tasks) | CUB (200 tasks) | MVTec-AD (15 tasks) |
| --- | --- | --- | --- | --- |
| One-Vs-One Closest | Default Prompt | 0.697 | 0.570 | 0.436 |
| | Tiny-ImageNet Choice | 0.718 | 0.582 | 0.438 |
| | Average of Prompts | 0.708 | 0.577 | 0.441 |
| | Worst Prompt (Oracle) | 0.658 | 0.527 | 0.367 |
| | Best Prompt (Oracle) | 0.759 | 0.626 | 0.511 |
| | Our Method | **0.729** | **0.590** | **0.463** |
| One-Vs-One Average | Default Prompt | 0.954 | 0.970 | 0.585 |
| | Tiny-ImageNet Choice | 0.964 | 0.971 | 0.592 |
| | Average of Prompts | 0.957 | 0.971 | 0.582 |
| | Worst Prompt (Oracle) | 0.948 | 0.964 | 0.522 |
| | Best Prompt (Oracle) | 0.968 | 0.976 | 0.646 |
| | Our Method | **0.964** | **0.973** | **0.598** |

Table 2: Using our synthetic outliers to select CLIP prompts always results in the best average AUROC, even in the absence of real validation data. We compare our method for selecting CLIP prompts (described in Sec. 4.3) to the default prompt, the prompt choice made with Tiny-ImageNet, the average of prompts, the hypothetical worst case (always picking the worst prompt), and the hypothetical best case (always picking the best prompt). Oracle results cannot be obtained in practice, but we include them for reference.
shows that synthetic validation datasets constructed with this approach can be used effectively in two ways: for selecting amongst a set of candidate anomaly detection models, and for selecting hyper-parameters for zero-shot anomaly detection models, specifically CLIP prompt templates.
|
2301.06542 | Data-Driven Encoding: A New Numerical Method for Computation of the
Koopman Operator | This paper presents a data-driven method for constructing a Koopman linear
model based on the Direct Encoding (DE) formula. The prevailing methods,
Dynamic Mode Decomposition (DMD) and its extensions are based on least squares
estimates that can be shown to be biased towards data that are densely
populated. The DE formula consisting of inner products of a nonlinear state
transition function with observable functions does not incur this biased
estimation problem and thus serves as a desirable alternative to DMD. However,
the original DE formula requires knowledge of the nonlinear state equation,
which is not available in many practical applications. In this paper, the DE
formula is extended to a data-driven method, Data-Driven Encoding (DDE) of
Koopman operator, in which the inner products are calculated from data taken
from a nonlinear dynamic system. An effective algorithm is presented for the
computation of the inner products, and their convergence to true values is
proven. Numerical experiments verify the effectiveness of DDE compared to
Extended DMD. The experiments demonstrate robustness to data distribution and
the convergent properties of DDE, guaranteeing accuracy improvements with
additional sample points. Furthermore, DDE is applied to deep learning of the
Koopman operator to further improve prediction accuracy. | Jerry Ng, Haruhiko Harry Asada | 2023-01-16T18:04:09Z | http://arxiv.org/abs/2301.06542v1 | # Data-Driven Encoding: A New Numerical Method for Computation of the Koopman Operator
###### Abstract
This paper presents a data-driven method for constructing a Koopman linear model based on the Direct Encoding (DE) formula. The prevailing methods, Dynamic Mode Decomposition (DMD) and its extensions are based on least squares estimates that can be shown to be biased towards data that are densely populated. The DE formula consisting of inner products of a nonlinear state transition function with observable functions does not incur this biased estimation problem and thus serves as a desirable alternative to DMD. However, the original DE formula requires knowledge of the nonlinear state equation, which is not available in many practical applications. In this paper, the DE formula is extended to a data-driven method, Data-Driven Encoding (DDE) of Koopman operator, in which the inner products are calculated from data taken from a nonlinear dynamic system. An effective algorithm is presented for the computation of the inner products, and their convergence to true values is proven. Numerical experiments verify the effectiveness of DDE compared to Extended DMD. The experiments demonstrate robustness to data distribution and the convergent properties of DDE, guaranteeing accuracy improvements with additional sample points. Furthermore, DDE is applied to deep learning of the Koopman operator to further improve prediction accuracy.
## I Introduction
Dynamic Mode Decomposition (DMD) was presented as a method to produce linear models from data generated through nonlinear dynamical processes by using Singular Value Decomposition (SVD) [1]. Later, this method was developed further to create Extended Dynamic Mode Decomposition (EDMD), which introduced the concept of using observable functions, nonlinear functions of the state variables, as a method of augmenting the state space [2]. EDMD references the Koopman Operator as justification and a theoretical underpinning for lifting the state space. In his seminal work, Bernard Koopman showed the existence of this operator that transforms nonlinear systems into linear systems [3]. Another extension of DMD has shown the viability of using DMD for control of non-autonomous systems [4]. This enabled complex nonlinear Model Predictive Control (MPC) to be converted to linear MPC [5], leading to numerous studies applying the methodology to real systems [6, 7, 8, 9].
To improve the accuracy of the models based on the Koopman Operator, two avenues of research have formed. The first avenue regards the selection of the observable functions used for constructing a lifted state space, as these functions are a key ingredient in creating an accurate linear model. Various methods have been developed, including deep neural networks for learning effective observable functions [10, 11, 12] and optimization [13]. However, an efficient selection of observables does not solve all the issues that arise when attempting to construct an accurate linear model. For example, it is known that unstable modes are involved in Koopman-based DMD models and their extensions although the underlying nonlinear systems are known to be stable. The second avenue of research involves the formulation of the linear transition matrix. Extensive studies have been done to create stable linear models to remedy the situations where an outright use of DMD would lead to the creation of an unstable linear model [14, 15, 16, 17]. Recently, an extension of DMD, called Robust Dynamic Mode Decomposition (RDMD), utilizes statistical measures to suppress the effect of outliers on modeling the linear Koopman matrix [18].
A fundamental difficulty in constructing a proper linear model is data dependency. Least Squares Estimation (LSE), involved in all DMD based methods, often produces a significant bias due to the distribution of the dataset. To eliminate this dependency on data distributions, the current work takes an alternative approach to LSE.
Recently, a new formulation of the Koopman Operator, termed Koopman Direct Encoding (DE), was produced [19]. This method directly encodes the nonlinear function of state transition using observables as basis functions to obtain a Koopman linear model. Inner products of observable functions in composition with the nonlinear state transition function are used to construct the state transition matrix without use of LSE. While DE theoretically guarantees the exact linear model, it requires access to the nonlinear state equations, which are often not available in practical applications. The current work aims to fill the gap between DE and data-driven approaches.
There are four significant contributions presented in this work. The first is the conversion of the DE formula of the Koopman Operator to a data-driven formula. The second is a computational algorithm and a proof of its convergence to the true inner products that constitute the DE formula. The third is numerical experiments that provide evidence that the proposed method, unlike EDMD, does not exhibit biases with respect to data distribution, and produces consistently higher accuracy compared to EDMD. Finally, the DDE algorithm is utilized in modeling a high-order nonlinear system in combination with deep learning.
## II Koopman Operator and the Direct Encoding Method
In this section, we give a brief overview of the Koopman Operator and dynamic mode decomposition, and introduce the direct encoding method for obtaining a Koopman operator directly from nonlinear dynamics.
### _Least Squares Estimation of the Koopman operator_
Consider a discrete-time dynamical system, given by
\[x_{t+1}=f(x_{t}) \tag{1}\]
where \(x\in X\subset\mathbb{R}^{n}\) is the independent state variable vector representing the dynamic state of the system, \(f\) is a self-map, nonlinear function \(f:X\to X\), and \(t\) is the current time step. Also consider a real-valued observable function of the state variables \(g:X\rightarrow\mathbb{R}\). The Koopman Operator \(\mathcal{K}\) is an infinite-dimensional linear operator acting on the observable function \(g\) :
\[\mathcal{K}g=g\circ f \tag{2}\]
where \(g\circ f\) is the composition of function \(g\) with function \(f\): \((g\circ f)(x)=g(f(x))\).
A common data-driven method for constructing the operator is Extended Dynamic Mode Decomposition (EDMD) [2], where data that are experimentally obtained or simulated from the governing equation of the system are augmented with real-valued observable functions of the independent state vector \(x_{t}\). Collectively, a high-dimensional state vector is formed.
\[z_{t}=\begin{bmatrix}g_{1}(x_{t})\\ g_{2}(x_{t})\\ \vdots\\ g_{m}(x_{t})\end{bmatrix} \tag{3}\]
where \(m\) is the order of the lifted state corresponding to the number of observable functions. Underpinned by the Koopman Operator theory, EDMD assumes the existence of a linear state transition matrix \(A\) relating \(z_{t+1}\) to \(z_{t}\), and determine \(A\) by solving a least squares regression that minimizes the Sum of Squared Error (SSE).
\[A=\arg\min_{A}\sum_{t}||z_{t+1}-Az_{t}||^{2} \tag{4}\]
Singular Value Decomposition (SVD) is used for the least squares optimization.
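For reference, the regression of Eq. (4) reduces to a single pseudo-inverse; a minimal NumPy sketch, with lifted snapshots stored column-wise (our notation):

```python
import numpy as np

def edmd(Z_t, Z_t1):
    # Z_t, Z_t1: (m, N) lifted snapshot matrices at times t and t+1.
    # Least-squares solution of Eq. (4): A = Z_{t+1} Z_t^+, pseudo-inverse via SVD.
    return Z_t1 @ np.linalg.pinv(Z_t)
```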
### _Direct Encoding of the Koopman Operator_
An alternative to the least squares estimate and EDMD is to obtain the exact \(A\) matrix by directly encoding the self-map, nonlinear state transition function \(f(x)\) with an independent and complete set of observable functions through inner product computations. This Direct Encoding method is introduced next, while the full proof can be found in [19].
Let us first consider the case where \(g_{1},g_{2},g_{3},...\) are orthonormal basis functions spanning a Hilbert space \(\mathcal{H}\). We assume that the self-map nonlinear function \(f(x)\) is continuous and that the composition of \(g_{j}\) with \(f\) is also involved in the Hilbert space.
\[g_{j}\circ f\in\mathcal{H}\quad\forall j \tag{5}\]
This implies that the function \(g_{j}\circ f\) can be expanded in \([g_{1},g_{2},g_{3},...]\).
\[g_{j}\circ f=\sum_{k=1}^{\infty}\langle g_{j}\circ f,g_{k}\rangle g_{k} \tag{6}\]
Concatenating \(g_{1},g_{2},g_{3},...\) and \(g_{1}\circ f,g_{2}\circ f,g_{3}\circ f...\) in infinite dimensional column vectors, respectively,
\[\bar{z}_{t}=\begin{bmatrix}g_{1}(x_{t})\\ g_{2}(x_{t})\\ \vdots\end{bmatrix},\bar{z}_{t+1}=\begin{bmatrix}g_{1}[f(x_{t})]\\ g_{2}[f(x_{t})]\\ \vdots\end{bmatrix} \tag{7}\]
eq. (6) can be written in matrix and vector form.
\[\bar{z}_{t+1}=\bar{A}\bar{z}_{t} \tag{8}\]
where \(\bar{A}\) is an infinite dimensional matrix consisting of the inner products involved in eq. (6),
\[\bar{A}=\begin{bmatrix}\langle g_{1}\circ f,g_{1}\rangle&\langle g_{1}\circ f,g_{2}\rangle&\cdots\\ \langle g_{2}\circ f,g_{1}\rangle&\langle g_{2}\circ f,g_{2}\rangle&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix} \tag{9}\]
Eq. (8) manifests that the state lifted to the infinite dimensional space \(\bar{z}_{t}\) makes linear state transition with matrix \(\bar{A}\).
The observables \(g_{1},g_{2},g_{3},...\) were assumed to be orthonormal basis functions in the above derivation. This assumption can be relaxed to an independent and complete set of basis functions spanning the Hilbert space. Hereafter, let \([g_{1},g_{2},g_{3},...]\) be an independent and complete set of basis functions spanning the Hilbert space.
It can be shown that the time evolution of lifted state \(z_{t}\) is given by a constant matrix \(A_{f}\) for the independent and complete set of basis functions \([g_{1},g_{2},g_{3},...]\).
\[z_{t+1}=A_{f}z_{t} \tag{10}\]
The matrix \(A_{f}\) can be computed directly from the self-map, state transition function \(f(x)\) and an independent and complete set of observables \([g_{1},g_{2},g_{3},...]\) through inner product computations. Post-multiplying the transpose of \(z_{t}\) to both sides of eq. (10) and integrating them over \(X\) yield:
\[\int_{X}z(f(x))z^{T}(x)dx=A_{f}\int_{X}z(x)z^{T}(x)dx \tag{11}\]
which can be written as
\[Q=A_{f}R \tag{12}\]
where
\[Q=\begin{bmatrix}\langle g_{1}\circ f,g_{1}\rangle&\langle g_{1}\circ f,g_{2 }\rangle&\cdots\\ \langle g_{2}\circ f,g_{1}\rangle&\langle g_{2}\circ f,g_{2}\rangle&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix} \tag{13}\]
\[R=\begin{bmatrix}\langle g_{1},g_{1}\rangle&\langle g_{1},g_{2}\rangle& \cdots\\ \langle g_{2},g_{1}\rangle&\langle g_{2},g_{2}\rangle&\cdots\\ \vdots&\vdots&\ddots\end{bmatrix} \tag{14}\]
Because the observables are independent, the matrix \(R\) is non-singular. Therefore, the matrix \(A_{f}\) is given by
\[A_{f}=QR^{-1} \tag{15}\]
This formula for obtaining the matrix \(A_{f}\) directly from the governing nonlinear state equation with the function \(f(x)\) and the independent observables through inner products, which are guaranteed to exist in Hilbert space \(\mathcal{H}\), is the Direct Encoding method.
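When \(f\) is known, Eq. (15) can be evaluated with any numerical quadrature. The sketch below approximates the integrals in Eqs. (13) and (14) with sample points and quadrature volumes supplied by the caller; it illustrates the formula and is not the authors' implementation.

```python
import numpy as np

def direct_encoding(f, observables, xs, weights):
    # f: known self-map x_{t+1} = f(x_t). observables: list of m functions g_i.
    # xs: list of N sample points. weights: (N,) quadrature volumes.
    Z = np.array([[g(x) for x in xs] for g in observables])      # (m, N)
    Zf = np.array([[g(f(x)) for x in xs] for g in observables])  # (m, N)
    R = (Z * weights) @ Z.T   # R_ij ~ sum_k g_i(x_k) g_j(x_k) dv_k, Eq. (14)
    Q = (Zf * weights) @ Z.T  # Q_ij ~ sum_k g_i(f(x_k)) g_j(x_k) dv_k, Eq. (13)
    return Q @ np.linalg.inv(R)
```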
## III Data-Driven Koopman Encoding
The prevailing method for construction of the Koopman Operator, EDMD, is based on LSE and SVD. This method, however, cannot provide a consistent estimate; the result is highly dependent on the distribution of data as the Koopman Operator is being approximated [20]. This dependency on distribution occurs because a core assumption of LSE is that the model structure is correct; when this assumption is violated, LSE is unable to create an unbiased estimator [21]. As the Koopman operator is truncated in practical use, this assumption does not hold.
Non-uniform data distributions inevitably occur in practical applications. For a nonlinear dynamical system with a stable equilibrium, for example, data collected from experiments and/or simulation of the system tend to be dense in the vicinity of the equilibrium, as all trajectories that begin within a region of attraction converge to the equilibrium. Because LSE applies equal weighting to all data points, the model is heavily tuned to the behavior of the densely populated region.
The Direct Encoding method described previously enables us to obtain the exact linear state transition matrix \(A\) through inner product computations. As the formulation is based on integration over the entire state space, there is no bias towards particular parts of the domain.
However, the original form of the Direct Encoding method utilizes the nonlinear state equation, i.e. the self-map \(f(x)\), to compute the inner products. In practical applications, such a nonlinear function is not always available; only data are available. The objective of this section is to establish a computational algorithm to obtain the \(A\) matrix by numerically computing the inner products, \(\langle g_{i},g_{j}\rangle,\langle g_{i}\circ f,g_{j}\rangle\), from a given set of data.
The method presented consists of three operations.
* The integral of the inner products is reduced in range from the entire state space to the dynamic range encapsulated by the data.
* The dynamic range is discretized with data points.
* The inner product integral is reduced to a weighted summation of the integrand evaluated at each data point multiplied by the volume \(\Delta v\) associated to each point.
Naturally, if data are densely populated in a small region, the discretized integral interval is small and thereby the volume also becomes small. Similarly, the volume tends to be larger where the data are sparse. In the summation, the integrand evaluated at each individual data point is "weighted" by the size of the volume. This numerical inner product calculation prevents overemphasis of clustered data.
### _Inner Product Computation_
We present the data-driven encoding method (DDE) as an alternative data-driven method to DMD for calculating a finite order approximation of the Koopman Operator. The objective of this method is to compute the matrices \(R\) and \(Q\) in (13) and (14) from data. This entails the computation of inner products:
\[\langle g_{i},g_{j}\rangle=\int_{X}G_{ij}(\xi)d\xi \tag{16}\] \[\langle g_{i}\circ f,g_{j}\rangle=\int_{X}F_{ij}(\xi)d\xi \tag{17}\]
where
\[G_{ij}(x)=g_{i}(x)\bar{g}_{j}(x),F_{ij}(x)=g_{i}[f(x)]\bar{g}_{j}(x) \tag{18}\]
are assumed to be Riemann Integrable; the functions are bounded and continuous [22].
There are two data sets used for the inner product computation. The first data set is
\[D_{N}=\{x_{i}\mid i=1,\cdots,N;x_{i}\in X\} \tag{19}\]
Note that all the data values are finite, \(|x_{i}|<\infty\). As such, the integral interval of the inner products is finite in computing them from the data. To define the integral interval, we consider the dynamic range of the system, \(X_{D}\), determined from the data set \(D_{N}\). See Fig.1. The dynamic range \(X_{D}\) is defined to be the minimum domain in the space \(X\) that includes all the data points in \(D_{N}\), \(X_{D}\supset D_{N}\), and that is convex. Namely, for any two states in \(X_{D}\), \(x,x^{\prime}\in X_{D}\),
\[\xi=\alpha x+(1-\alpha)x^{\prime}\in X_{D} \tag{20}\]
where \(0\leq\alpha\leq 1\).
Each data point \(x_{i}\) is mapped to \(f(x_{i})\), following the state transition law in eq.(1). We assume that the transferred state, too, stays within the same dynamic range \(X_{D}\). Collecting all the transferred states yields the second data set.
\[D_{N}^{f}=\{f(x_{i})\mid i=1,\cdots,N;x_{i}\in D_{N}\} \tag{21}\] \[D_{N}^{f}\subset X_{D} \tag{22}\]
Fig. 1: Illustration of the dynamic range of a dataset defined by the convex hull containing all points in the set, partitioned using a triangulation method. The data points are in black and the convex hull that encapsulates all data points is in grey.
This implies that the state space of the nonlinear system under consideration is closed within the dynamic range \(X_{D}\).
With this dynamic range, we redefine our objective to compute the inner products over \(X_{D}\).
\[R_{ij}=\int_{X_{D}}G_{ij}(x)dx \tag{23}\]
\[Q_{ij}=\int_{X_{D}}F_{ij}(x)dx \tag{24}\]
The integrals can be computed by partitioning the domain \(X_{D}\) into many segments \(X_{1},\cdots X_{P}\), as shown in Fig. 1.
\[X_{D}=\bigcup_{p=1}^{P}X_{p} \tag{25}\]
We generate these segments by applying a meshing technique to the data set \(D_{N}\), where the \(n\)-dimensional coordinates of individual data points are treated as nodes of a mesh. Delaunay Triangulation, for example, generates a triangular mesh structure with desirable properties [23]. As illustrated in Fig. 1, each triangular element is convex and has no internal node. The volume of the dynamic range \(V(X_{D})\) is the sum of the volumes of all the elements.
\[V(X_{D})=\sum_{p=1}^{P}\Delta v_{p} \tag{26}\]
Accordingly, the integral \(R_{ij}\) in eq.(23) can be segmented to
\[R_{ij}=\sum_{p=1}^{P}\int_{X_{p}}G_{ij}(x)dx \tag{27}\]
Suppose that the \(p\)-th element has \(K_{p}\) nodes, as shown in Fig.2. Renumbering these nodes 1 through \(K_{p}\),
\[\{x[k_{p}]\Big{|}x[k_{p}]\in X_{p};k_{p}=1,\cdots,K_{p}\},p=1,\cdots,P \tag{28}\]
The integrand \(G_{ij}\) within the \(p\)-th element can be approximated to the mean of the \(K_{p}\) nodes involved in the \(p\)-th element.
\[G_{ij}(x;p)\approx\bar{G}_{ij,p}=\frac{1}{K_{p}}\sum_{k_{p}=1}^{K_{p}}G_{ij}( x[k_{p}]),x[k_{p}]\in X_{p} \tag{29}\]
If Delaunay Triangulation is used, \(K_{p}=n+1\). See Fig. 2. Substituting this into (23) yields the approximate value of \(R_{ij}\).
\[\hat{R}_{ij}=\sum_{p=1}^{P}\frac{1}{K_{p}}\sum_{k_{p}=1}^{K_{p}}G_{ij}(x[k_{p} ])\Delta v_{p} \tag{30}\]
where
\[\Delta v_{p}=\int_{X_{p}}1\cdot dx \tag{31}\]
Similarly, each component of the matrix \(Q\) can be computed by using the same meshing.
\[\hat{Q}_{ij}=\sum_{p=1}^{P}\frac{1}{K_{p}}\sum_{k_{p}=1}^{K_{p}}F_{ij}(x[k_{p} ])\Delta v_{p} \tag{32}\]
Note that \(F_{ij}\) is evaluated by using the data points in both \(D_{N}^{f}\) and \(D_{N}\),
\[F_{ij}(x[k_{p}])=g_{i}[f(x[k_{p}])]\bar{g}_{j}(x[k_{p}]) \tag{33}\]
where \(f(x[k_{p}])\in D_{N}^{f}\).
### _Convergence_
Consider the center of each partition, \(\bar{x}_{p}=\int_{X_{p}}xdx/\Delta v_{p}\), and the distance between \(\bar{x}_{p}\) and each point, \(x[k_{p}]\):
\[\Delta x[k_{p}]=\bar{x}_{p}-x[k_{p}] \tag{34}\]
See Fig.2. The maximum distance from the center of the partition to each point that makes up the partition is
\[|\Delta x_{p}|=\max\{|\Delta x[1]|,\,\ldots,\,|\Delta x[K_{p}-1]|,\,|\Delta x[K_{p}]|\} \tag{35}\]
Consider a sequence of refinements of the approximate inner product integral \(\hat{R}_{ij}\) obtained by increasing the number of data points \(N\). We can show that, as the number of partitions \(P\) tends to infinity and the maximum subinterval \(|\Delta x_{p}|\) approaches zero, the approximate inner product integral \(\hat{R}_{ij}\) converges to its true integral.
\[R_{ij}=\lim_{\begin{subarray}{c}P\rightarrow\infty\\ |\Delta x_{p}|\to 0\end{subarray}}\ \sum_{p=1}^{P}\frac{1}{K_{p}}\sum_{k_{p}=1}^{K_{p}}G_{ij}(x[k_{p}]) \Delta v_{p} \tag{36}\]
This formulation takes the form of weighted sums, specifically Riemann sums. Given functions that are bounded and continuous over the subdomain of interest, sequences of this form are known to have a common limit and thus converge upon refinement to the Riemann integral value over that subdomain, according to Numerical Integration theory [22, Section 1.5].
Fig. 2: Visualization of the integrand calculation process. The volume of the partition encapsulated by the data points is denoted as \(\Delta v_{p}\). With this grouping, the value of \(G_{ij}\) is calculated for each point and the average within the group, \(\bar{G}_{ij,p}\), is computed. In turn, this value, weighted by the volume of the partition, is summed across the other partitions (not shown) to approximate the value of the element \(R_{ij}\).
### _Algorithm_
In the prior section, integrals (30) and (32) are presented as summations over partitions. This computation can be streamlined by converting the summations over partitions to ones over nodes. Consider node 3, associated with data point \(x_{3}\) in Fig.1, for example. This node is an apex of the 5 surrounding triangles. This implies that the integrand \(G_{ij}(x)\) is calculated or recalled 5 times in computing (30) and (32). This repetition can be eliminated by computing the volume \(\Delta v_{k}\) associated with node \(k\), rather than with partition \(p\): \(\Delta v_{p}\). Namely, we compute
\[\Delta v_{k}=\sum_{p=1}^{P}\frac{\Delta v_{p}}{K_{p}}I(k,p) \tag{37}\]
where \(I(k,p)\) is a membership function that takes value 1 when node \(k\) is an apex of triangle \(p\), that is, node \(k\) is involved in partition \(p\). Using this volume as a new weight we can rewrite (30) and (32) to be
\[\hat{R}_{ij} =\sum_{k=1}^{K}G_{ij}(x[k])\Delta v_{k} \tag{38}\] \[\hat{Q}_{ij} =\sum_{k=1}^{K}F_{ij}(x[k])\Delta v_{k} \tag{39}\]
Using this conversion, the computation can be streamlined and cleanly separated into three steps, as shown by the pseudocode in Algorithm 1. The steps are: (1) _Graph Creation:_ data are connected to create partitions of the domain using a mesh generator (lines 1 to 5), (2) _Weighting Calculation:_ calculation of the weights for each data point (lines 6 to 12), and (3) _Matrix Calculation:_ the calculation of the \(R\) and \(Q\) matrices to find the matrix \(A\) (lines 13 and 14).
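A compact Python rendering of Algorithm 1, using `scipy.spatial.Delaunay` for the mesh and the node-volume conversion of Eq. (37); this is a minimal sketch of the algorithm, not the authors' code.

```python
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

def dde(X_t, X_t1, observables):
    # X_t, X_t1: (N, n) arrays of states x_t and x_{t+1} = f(x_t).
    # observables: list of m functions g_i.
    N, n = X_t.shape
    dv = np.zeros(N)
    for simplex in Delaunay(X_t).simplices:  # each simplex has n + 1 nodes
        verts = X_t[simplex]
        # simplex volume = |det([v_1 - v_0, ..., v_n - v_0])| / n!
        vol = abs(np.linalg.det(verts[1:] - verts[0])) / factorial(n)
        dv[simplex] += vol / (n + 1)         # node-volume update, Eq. (37)
    Z = np.array([[g(x) for x in X_t] for g in observables])    # (m, N)
    Zf = np.array([[g(x) for x in X_t1] for g in observables])  # (m, N)
    R = (Z * dv) @ Z.T   # Eq. (38)
    Q = (Zf * dv) @ Z.T  # Eq. (39)
    return Q @ np.linalg.inv(R)
```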
## IV Experiments
In this section, the DDE algorithm is implemented for the sake of evaluating its validity and comparing its modeling accuracy to EDMD. Consider a 2nd order nonlinear system consisting of a pendulum with a nonlinear damper. See Fig. 3. The pendulum also bounces against walls with nonlinear compliance. The state variables for this system are \(x=[\theta,\dot{\theta}]^{T}\), and the equation of motion can be written as:
\[\ddot{\theta}=-\sin(\theta)+F_{k}+F_{c} \tag{40}\]
where \(F_{k}\) and \(F_{c}\) are wall reaction moment and damping moment, respectively,
\[F_{k} =\begin{cases}-\text{sign}(\theta)\:k(|\theta|-\frac{\pi}{4})^{2 }&\text{if }|\theta|\geq\frac{\pi}{4}\\ 0&\text{if }|\theta|<\frac{\pi}{4}\end{cases} \tag{41}\] \[F_{c} =-\text{sign}(\dot{\theta})\:c\:\dot{\theta}^{2} \tag{42}\]
where \(k=200\) and \(c=1\). We present a two-part numerical experiment for this system. The first experiment regards variations in dataset size and distribution, and the second experiment varies the number of observable functions used.
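For concreteness, a simple explicit-Euler integrator for Eqs. (40)-(42) might look as follows; the step size is our choice, not specified in the text.

```python
import numpy as np

def pendulum_step(theta, dtheta, dt=0.01, k=200.0, c=1.0):
    # One explicit Euler step of Eqs. (40)-(42): pendulum with compliant walls at +/- pi/4.
    F_k = -np.sign(theta) * k * (abs(theta) - np.pi / 4) ** 2 if abs(theta) >= np.pi / 4 else 0.0
    F_c = -np.sign(dtheta) * c * dtheta ** 2
    ddtheta = -np.sin(theta) + F_k + F_c
    return theta + dt * dtheta, dtheta + dt * ddtheta
```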
### _Dataset Variations_
The datasets tested are of three types:
1. **Uniform**: These datasets are composed of a rectangular dynamic range which is sampled uniformly, like an evenly divided grid. The range spans \(\theta\in[-0.8,0.8]\) and \(\dot{\theta}\in[-2,2]\), where the mass can hit the walls and the damping can vary from 0 to a significant value. See Fig. 3-(b), (c).
2. **Gaussian**: Data points are sampled with a finite-support Gaussian distribution. The data are distributed non-uniformly with their highest density at the peak
```
Algorithm 1: Data-Driven Encoding (DDE)

Input:  D_N, D_N^f
Output: A

Graph Creation:
 1: for each x_i in D_N and corresponding f(x_i) in D_N^f do
 2:     create a node k
 3:     assign node attributes k.x_t = x_i and k.x_{t+1} = f(x_i)
 4:     append k to the node list K
 5: end for

Weighting Calculation:
 6: create the list of triangles P by Delaunay Triangulation of the node list K, using k.x_t
 7: for each p in P do
 8:     calculate the volume V of p
 9:     for each node k among the K_p nodes of p do
10:         update the node volume: k.Δv_k ← k.Δv_k + V / (n + 1)
11:     end for
12: end for

Matrix Calculation:
13: find R and Q via (38) and (39)
14: find A = Q R^{-1}
```
**Algorithm 1** Algorithm for DDE in pseudocode
Fig. 3: Diagram of pendulum with with walls. (a) depicts the range of the pendulum where the walls are equally angularly displaced from the vertical. (b) depicts the forces exerted on the pendulum due to contact with walls. (c) depicts the damping force exerted onto the pendulum.
of the Gaussian placed at diverse locations. In addition, each dataset contains 100 data points uniformly distributed along the border of the dynamic range to guarantee the same dynamic range as the uniform datasets. Samples outside the dynamic range are excluded.
3. **Trajectories**: These datasets are composed of trajectories, beginning from 100 initial conditions that are simulated forward for the same number of time steps. The dynamic range of this dataset type differs from the two other dataset types.
The models constructed for DDE and EDMD use the same observable functions. The observable functions chosen are two-dimensional radial basis functions (RBFs), uniformly distributed between the maximum and minimum values of each state variable in their respective dataset, together with the state variables themselves. The total order of the lifted system is 27, with 25 RBFs and 2 state variables.
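A sketch of such a lifted state, with a 5x5 grid of Gaussian RBF centers over the dynamic range; the RBF width `sigma` is an assumption, as the paper does not specify it.

```python
import numpy as np

def make_lift(theta_range=(-0.8, 0.8), dtheta_range=(-2.0, 2.0), sigma=0.5):
    t_grid, dt_grid = np.meshgrid(np.linspace(*theta_range, 5),
                                  np.linspace(*dtheta_range, 5))
    centers = np.column_stack([t_grid.ravel(), dt_grid.ravel()])  # 25 RBF centers

    def lift(x):  # x: (2,) array [theta, dtheta] -> 27-dimensional lifted state
        rbf = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigma ** 2))
        return np.concatenate([rbf, x])

    return lift
```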
A trajectory dataset graph is generated using Delaunay Triangulation in DDE, as shown in Fig. 4.
The accuracy of the models is tested through calculating sum of squared errors (SSE) for one-step ahead predictions over the dynamic range of the datasets. These error values are calculated for a uniform grid of points, similar to that used in the uniform datasets. A visualization of the SSE is plotted in Fig. 5. The results of these calculations are shown in Table I and II. In the computation, the dynamic range is discretized, and the SSE value of each point is summed. For the Gaussian datasets, the test is run for eight iterations of each dataset to account for randomness and the average result is noted.
_Gaussian Datasets_ (Total SSE, EDMD / DDE)

| Dataset Size | Center: \([0,0]\) | Center: \([0.8,0]\) | Center: \([0,2]\) |
| --- | --- | --- | --- |
| 1000 | 25.565 / **24.377** | 25.161 / **22.98** | 24.338 / **22.445** |
| 2500 | 24.991 / **23.739** | 25.421 / **22.747** | 24.221 / **21.590** |
| 5000 | 24.662 / **23.518** | 26.805 / **22.415** | 23.914 / **21.195** |
| 10000 | 24.395 / **23.085** | 28.424 / **21.825** | 25.770 / **21.052** |
| 25000 | 25.219 / **22.167** | 30.788 / **21.687** | 26.941 / **20.704** |

TABLE II: Sum of Squared Errors over the dynamic range for Gaussian distributions with different centers. Centers of the distributions are noted above each column.
Fig. 4: Graph of connections for a trajectory dataset composed of 10000 points, formed when utilizing the data-driven encoding method. The lightly red shaded region denotes the dynamic range. In the zoomed-in image, the difference in volumes associated with different data points can be observed; note the difference in size of the purple triangle versus the green triangle.
_Uniform Datasets_

| Dataset Size | Total SSE (EDMD / DDE) | SSE (EDMD / DDE) |
| --- | --- | --- |
| 900 | 19.470 / **17.167** | **0.0094** / 0.0097 |
| 2500 | 17.995 / **16.471** | **0.0095** / 0.0103 |
| 10000 | 17.010 / **16.184** | **0.0099** / 0.0105 |
| 22500 | 16.698 / **16.133** | **0.0101** / 0.0105 |

_Trajectory Datasets_

| Dataset Size | Total SSE (EDMD / DDE) | SSE (EDMD / DDE) |
| --- | --- | --- |
| 1000 | 56.532 / **33.392** | 0.0349 / **0.0193** |
| 2500 | 33.330 / **25.064** | 0.0200 / **0.0130** |
| 5000 | 31.690 / **25.099** | 0.0195 / **0.0129** |
| 10000 | 30.184 / **25.101** | 0.0186 / **0.0129** |
| 25000 | 29.380 / **25.106** | 0.0181 / **0.0129** |

TABLE I: Sum of Squared Errors over the dynamic range with varying dataset sizes.
Fig. 5: Sum of Squared Error plots for various datasets. (a) and (b) are EDMD models for Gaussian datasets; (a) uses a dataset centered at \([0,0]\) and (b) uses a dataset centered at \([0,2]\). (c) DDE and (d) EDMD models using the trajectory dataset composed of 2500 data points and using 25 RBFs. Circled in red are the regions with the greatest variation in SSE between models. Only the dynamic range is shown in all plots.
### _Observable Function Variation_
The second experiment varies the number of observable functions selected, thus increasing the order of the lifted system. In this experiment, the number of RBFs is varied by uniformly increasing the density of the function centers over the range of the dataset. The results are reported in Table III, where the number of observables counts all functions, including the state variables.
### _Discussion_
From these results we can make the following observations.
* All the numerical experiments show that DDE outperforms EDMD in total SSE.
* For uniform datasets, both models are nearly equivalent, though DDE has slightly lower SSE in all cases, shown in Table I. This result is expected as all data points are weighted equally in a uniform distribution.
* EDMD models exhibit significantly different distributions of prediction error, depending on dataset distribution. In Fig. 5-(a), the EDMD model learned from a dataset with high density near the origin produces a prediction error distribution that is low in accuracy in the top-right and the bottom-left corners of the dynamic range. In contrast, Fig.5-(b) illustrates that when using EDMD to learn from data with high density at \(\theta=0,\dot{\theta}=2\), the model results in low accuracy in the lower half of the dynamic range. DDE does not exhibit this high distribution dependency and achieves lower total SSE, as shown in Table II.
* In the trajectory datasets in Table I, DDE is not only lower in total SSE than EDMD but is also smaller in variance. Fig. 5-(c) shows that DDE has a uniformly low error distribution across the dynamic range, while EDMD in Fig.5-(d) has two regions, as circled in the figure, with significantly larger error. These regions coincide with sparsity in the dataset, providing evidence of EDMD's bias towards regions of high data density and explaining the difference in performance between models for the non-uniform datasets.
* In the trajectory datasets, the total SSE converges for DDE beginning with small dataset sizes. This result implies that the elements in the \(R\) and \(Q\) matrices of DDE, that is, the inner product integral computations, converge as the data size and the data density increase. This convergence is confirmed in Fig. 6, where several elements of the \(Q\) matrix are plotted against the data size.
* The second experiment, regarding variations in the number of observable functions, demonstrates that the advantage of DDE persists even for significant increases in the number of observable functions, referring to Table III. In all cases tested, DDE significantly outperforms EDMD over the dynamic range, as expected for a non-uniform dataset.
## V Application to Neural Net Koopman Modeling
The use of deep neural networks for finding effective observable functions and constructing a Koopman linear model has been reported by several groups [10, 11, 12]. This method, sometimes referred to as Deep Koopman, is effective for approximating the Koopman operator to a low-order model, compared to the use of locally activated functions, such as RBFs, which scale poorly for high-order nonlinear systems. The proposed DDE method can be incorporated into Deep Koopman, further improving approximation accuracy.
Fig. 7 shows the architecture of the neural network, similar to the prior works [10, 11, 12]. The input layer receives training data of independent state variables. The successive hidden layers produce observable functions; these functions feed into the output layer consisting of linear activation functions. This linear output layer corresponds to the \(A\) matrix that maps the observables of the current time to those of the next time step, i.e. the state transition in the lifted space. In the Deep Koopman approach, the output layer, that is, the \(A\) matrix, is trained together with the observable functions in the hidden layers. Using the observable functions learned from deep learning, the output layer weights are then replaced with the \(A\) matrix obtained from DDE, as captioned in Fig. 7.
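Schematically, the replacement step can be as simple as overwriting the weights of the linear output layer; the attribute name `output_layer` below is hypothetical.

```python
import torch

def replace_output_with_dde(model, A_dde):
    # model.output_layer: the linear layer playing the role of A (hypothetical name).
    # A_dde: (m, m) matrix computed by DDE on the learned observables.
    with torch.no_grad():
        model.output_layer.weight.copy_(
            torch.as_tensor(A_dde, dtype=model.output_layer.weight.dtype))
```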
The effectiveness of this Deep Koopman-DDE method is demonstrated on a multi-cable manipulation system [6]. A simplified single-cable version is utilized, consisting of six independent state variables. The network is constructed using PyTorch and trained with an Adam optimizer to generate 40 observable functions. This model concatenates the state variables of the nonlinear system with the learned observables, resulting in a 46th-order model. Table IV shows the parameters used for training the Deep Koopman model. The training dataset is composed of 3000 data points drawn from trajectories. Table V compares the Deep Koopman model to the proposed model that uses DDE. Results are given in terms of the sum of squared errors over a set of test trajectories. A significant improvement is achieved by incorporating DDE into the deep learning method.
Practical concerns arose from the use of Delaunay
Fig. 6: Change in values of diagonal elements in the \(Q\) matrix of DDE with respect to trajectory dataset size. The specific elements shown correspond to a state variable, \(Q_{0,0}\), and three RBF functions of varying distances away from the equilibrium.
Triangulation in this experiment. Specifically, the method was found to be limiting and inconsistent for systems that are eighth order or higher, due to computational cost and numerical instability. Alternative numerical integration approaches or alternative partitioning methods may be necessary when applying the method to higher-order nonlinear systems.
## VI Conclusion
In this work, a new data-driven approach to generating a Koopman linear model based on the direct encoding of the Koopman Operator (DDE) is presented as an alternative to dynamic mode decomposition (DMD) and other related methods using the least squares estimate (LSE). The major contributions include: 1) the analytical formula of Direct Encoding is converted to a numerical formula for computing the inner product integrals from given data; 2) an efficient algorithm is developed and its conditions for convergence to the true results are analyzed; 3) numerical experiments demonstrate a) greater accuracy compared to EDMD, b) lower sensitivity to data distribution, and c) rapid convergence of the inner product computation. Furthermore, the DDE method is incorporated into Deep Koopman, i.e. neural network based methods for construction of the Koopman Operator, to improve prediction accuracy.
|
2302.00463 | Uncertain Quality-Diversity: Evaluation methodology and new methods for
Quality-Diversity in Uncertain Domains | Quality-Diversity optimisation (QD) has proven to yield promising results
across a broad set of applications. However, QD approaches struggle in the
presence of uncertainty in the environment, as it impacts their ability to
quantify the true performance and novelty of solutions. This problem has been
highlighted multiple times independently in previous literature. In this work,
we propose to uniformise the view on this problem through four main
contributions. First, we formalise a common framework for uncertain domains:
the Uncertain QD setting, a special case of QD in which fitness and descriptors
for each solution are no longer fixed values but distribution over possible
values. Second, we propose a new methodology to evaluate Uncertain QD
approaches, relying on a new per-generation sampling budget and a set of
existing and new metrics specifically designed for Uncertain QD. Third, we
propose three new Uncertain QD algorithms: Archive-sampling,
Parallel-Adaptive-sampling and Deep-Grid-sampling. We propose these approaches
taking into account recent advances in the QD community toward the use of
hardware acceleration that enable large numbers of parallel evaluations and
make sampling an affordable approach to uncertainty. Our final and fourth
contribution is to use this new framework and the associated comparison methods
to benchmark existing and novel approaches. We demonstrate once again the
limitation of MAP-Elites in uncertain domains and highlight the performance of
the existing Deep-Grid approach, and of our new algorithms. The goal of this
framework and methods is to become an instrumental benchmark for future works
considering Uncertain QD. | Manon Flageat, Antoine Cully | 2023-02-01T14:12:04Z | http://arxiv.org/abs/2302.00463v2 | # Uncertain Quality-Diversity:
###### Abstract
Quality-Diversity optimisation (QD) has proven to yield promising results across a broad set of applications. However, QD approaches struggle in the presence of uncertainty in the environment, as it impacts their ability to quantify the true performance and novelty of solutions. This problem has been highlighted multiple times independently in previous literature. In this work, we propose to uniformise the view on this problem through four main contributions. First, we formalise a common framework for uncertain domains: the Uncertain QD setting, a special case of QD in which fitness and descriptors for each solution are no longer fixed values but distribution over possible values. Second, we propose a new methodology to evaluate Uncertain QD approaches, relying on a new per-generation sampling budget and a set of existing and new metrics specifically designed for Uncertain QD. Third, we propose three new Uncertain QD algorithms: Archive-sampling, Parallel-Adaptive-sampling and Deep-Grid-sampling. We propose these approaches taking into account recent advances in the QD community toward the use of hardware acceleration that enable large numbers of parallel evaluations and make sampling an affordable approach to uncertainty. Our final and fourth contribution is to use this new framework and the associated comparison methods to benchmark existing and novel approaches. We demonstrate once again the limitation of MAP-Elites in uncertain domains and highlight the performance of the existing Deep-Grid approach, and of our new algorithms. The goal of this framework and methods is to become an instrumental benchmark for future works considering Uncertain QD.
Quality-Diversity optimisation, Uncertain domains, MAP-Elites, Neuroevolution, Behavioral diversity.
## I Introduction
Recent works have shown the competitiveness of diversity-seeking approaches to optimisation, such as Quality-Diversity (QD) [1, 2, 3]. Encouraging creative and diverse solutions have proven to help exploration [4, 5], foster stepping stones toward novel solutions [6] and allow adaptation to unseen situations [7]. QD in particular has been applied to a wide range of domains ranging from robotics [7, 8] to content generation [5, 9] or design [4]. Recently, many QD works have shifted focus toward Neuroevolution and the evolution of large closed-loop controllers for robotics [10, 11, 12, 13].
These new domains raise new challenges for QD optimisation, among which the issue of uncertainty [14, 15, 16, 17]. Many environments are subject to uncertainty, either arising in components such as actuators or sensors, or occurring in the dynamics of the environment itself. This uncertainty might lead the exact same solution to perform differently from one evaluation to another. Thus, evaluating a solution only once might lead to an inaccurate estimate of its novelty or quality: solutions might be lucky and appear more novel or better-performing than they truly are. We illustrate this issue in Fig. 1. Unfortunately, standard QD algorithms maintain lucky solutions due to their elitism, leading them to prefer lucky solutions to truly novel or high-performing ones [14, 15, 16, 17]. While this issue is well known in the larger Evolutionary Algorithm (EA) community [18, 19], to the best of our knowledge, it remains largely understudied in QD works.
A naive approach to address this uncertainty problem is to perform an extensive and expensive number of evaluations for each solution to obtain average or expected performance and novelty metrics. While this approach is effective, it is often prohibitively expensive computationally. Nonetheless, recent advances in computer systems enable the high parallelisation of evaluations. Recent libraries such as QDax [20] or EvoJax [21], based on the Brax simulator [22], have sped up computation by orders of magnitude thanks to highly parallelised evaluations. With such tools, we now have access to \(10\) or \(100\) times more evaluations per generation within the same amount of time. This completely
Fig. 1: Illustration of the Uncertain QD setting: (1) we represent the fitness and descriptors of \(3\) hypothetical solutions in the usual QD setup (left) versus in the Uncertain QD setup (right): their fitness and descriptors are now distributions over possible values, instead of fixed constant values. (2) We give the final archive of the Uncertain Arm task (left), we sample one of the solutions (red cross), replicate it \(64\) times, and plot the descriptor and fitness distributions, keeping the archive in the background for comparability (right).
changes the game for Uncertain QD, as simpler sample-heavy approaches could now be more interesting than more advanced model-driven approaches. However, this shift toward sample-heavy approaches to uncertainty should also lead us to reconsider how algorithms are compared. Most QD works compare algorithms based on batch-size, i.e. the number of offspring produced per generation. However, if every offspring requires multiple evaluations, these additional samples should also be taken into account. We propose to replace batch-size with sampling-size: the per-generation sampling budget, allowing a fair comparison of strategies for handling uncertainty.
Overall, our paper aims to answer one core question: in an uncertain domain, given a per-generation sampling budget, what would be the optimal Uncertain QD approach? Our contributions can be summarised as follows:
* A formalisation of the Uncertain QD setting.
* A new methodology to evaluate Uncertain QD approaches relying on the sampling-size introduced above, as well as a set of new and existing metrics.
* Three new Uncertain QD approaches: Archive-sampling, Parallel-Adaptive-sampling and Deep-Grid-sampling.
* A comparison of existing and novel Uncertain QD approaches using these new comparison methods.
To facilitate later advances in Uncertain QD we open-source our code at [https://github.com/adaptive-intelligent-robotics/Uncertain_Quality_Diversity](https://github.com/adaptive-intelligent-robotics/Uncertain_Quality_Diversity).
## II Background
### _Quality-Diversity and MAP-Elites algorithm_
Quality-Diversity (QD) methods propose to search for collections of high-performing and diverse solutions to an optimisation problem. In the QD framework, a solution is attributed not only a fitness \(f\) but also a descriptor \(d\). The descriptor \(d\) characterises the behaviour induced by the solution, according to a fixed number \(n\) of dimensions of interest \(d=(d^{j})_{1\leq j\leq n}\). It allows one to quantify the novelty of a solution as the average distance to its nearest neighbours.
The most widely used QD algorithm is MAP-Elites [23], the focus of this work. MAP-Elites discretises the descriptor-space into a grid, referred to as archive \(\mathcal{A}\). The goal of MAP-Elites is to fill all the cells of this grid with the best possible solutions. This grid constitutes the final collection returned at the end of the algorithm. The archive \(\mathcal{A}\) is first initialised with randomly-generated solutions. Then, an iteration of MAP-Elites consists of the following steps: 1. uniformly sample parent solutions from the archive \(\mathcal{A}\), 2. apply variation operators to these solutions to produce offspring solutions, 3. evaluate the offspring solutions to get their descriptor \(d\) and fitness \(f\), and finally 4. attempt to add the offspring back to the archive \(\mathcal{A}\). For a new solution to be added to the grid, it has to improve either the quality or the diversity of the overall grid. Thus, it has to either fill in an empty cell or perform better than the solution already in its cell. Overall, MAP-Elites aims to maximise the QD-Score of the archive \(\mathcal{A}\), defined as follows:
\[\max_{\mathcal{A}}\left(\text{QDScore}(\mathcal{A})\right)=\max_{ \mathcal{A}}\sum_{i\in\mathcal{A}}f_{i} \tag{1}\] \[\text{w.r.t}\quad\forall i\in\mathcal{A},d_{i}\in\text{cell}_{i}\]
where \(f_{i}\) is the fitness of solution \(i\) and \(d_{i}\) is its descriptor, which determines its cell \(\text{cell}_{i}\) in the archive.
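To make these steps concrete, the following is a minimal Python sketch of the MAP-Elites loop described above (not the authors' implementation); the `evaluate` function, the \(10\times 10\) grid resolution, and the Gaussian mutation are illustrative assumptions:

```
import random

def map_elites(evaluate, init_solutions, n_generations, batch_size, sigma=0.1):
    # evaluate(x) returns (fitness, descriptor); descriptors assumed in [0, 1]^2.
    archive = {}  # cell index -> (solution, fitness)

    def cell(d):
        # Discretise the descriptor space into a 10x10 grid (assumed resolution).
        return tuple(min(max(int(di * 10), 0), 9) for di in d)

    def try_add(x):
        f, d = evaluate(x)
        c = cell(d)
        if c not in archive or f > archive[c][1]:  # elitist addition rule
            archive[c] = (x, f)

    for x in init_solutions:                       # random initialisation
        try_add(x)
    for _ in range(n_generations):
        parents = random.choices(list(archive.values()), k=batch_size)  # 1. uniform selection
        for p, _ in parents:
            offspring = [xi + random.gauss(0, sigma) for xi in p]       # 2. variation
            try_add(offspring)                                          # 3.-4. evaluation and addition
    return archive

# QD-Score of the final archive, as in Eq. 1:
# qd_score = sum(f for _, f in archive.values())
```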
### _Hardware-accelerated Quality-Diversity_
Recent advances in hardware acceleration have led to new QD libraries such as QDax [20] or EvoJax [21]. These tools rely on highly-parallelised simulators like Brax [22] that can run on accelerators (e.g., GPUs and TPUs) and thus target simulated domains, for example, robotics control, where they drastically reduce the evaluation time. In addition, they have given us access to \(10\) or \(100\) times more evaluations per generation within the same amount of time. Lim et al. [20] show that the performance of MAP-Elites is robust to large increases in batch-size values (i.e. large increases in the number of solutions generated per generation). These results lead to drastic speed-ups in the run-time of QD algorithms, opening the door to promising future applications. These new frameworks completely change the game for Uncertain QD, as naive approaches using an extensive and expensive number of evaluations for each solution could now be more interesting than more advanced model-driven approaches.
## III Uncertain QD setting
### _Uncertain domains_
Usually, QD algorithms rely on the hypothesis that each solution can be awarded a fixed fitness \(f\) and descriptor \(d\), i.e. the reevaluation of a solution would give the same results as its initial evaluation. In this work, we consider domains where this assumption does not hold: a solution can be assigned different values of fitness and descriptor from one evaluation to another. Examples of such domains would be real-world applications where the sensors used to compute the fitness and the descriptor of a solution are slightly noisy or inaccurate. Similarly, in locomotion robotics, the initialisation of the robot might change from one evaluation to another, leading to different behaviours for the same controller. In other words, each solution is not assigned one value of fitness and descriptor anymore, but a distribution of potential fitness and descriptor values: \(f\sim\mathcal{D}_{f}\), \(d\sim\mathcal{D}_{d}\). Such domains have been extensively studied in the Evolutionary Algorithm literature where only the fitness is defined [18, 19], and referred to as uncertain domains. Inspired by these works, we propose the Uncertain QD setting, illustrated in Figure 1. We would like to emphasise that Uncertain QD with zero variance on both fitness and descriptor corresponds to the usual QD case.
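As a toy illustration of this setting, the snippet below wraps a deterministic objective with Gaussian noise on both fitness and descriptor; the noise model and magnitudes are assumptions made purely for illustration (the tasks in Sec. VII use other uncertainty sources, such as noisy initialisations):

```
import random

def uncertain_evaluate(x, fit_std=0.3, desc_std=0.05):
    # Each call draws one sample from the solution's distributions: f ~ D_f, d ~ D_d.
    true_f = -sum(xi ** 2 for xi in x)            # underlying expected fitness
    true_d = (abs(x[0]) % 1.0, abs(x[1]) % 1.0)   # underlying expected descriptor
    f = true_f + random.gauss(0, fit_std)
    d = tuple(di + random.gauss(0, desc_std) for di in true_d)
    return f, d

x = [0.4, 0.7]
print(uncertain_evaluate(x))  # two evaluations of the same solution...
print(uncertain_evaluate(x))  # ...return different fitness and descriptor values
```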
### _Impact of uncertainty on QD approaches_
In uncertain domains, evaluating a solution only once might lead to inaccurate estimates of its diversity or quality: this unique evaluation might be an outlier and not be representative of the fitness and descriptor distributions as a whole. An outlier evaluation might make the solution appear more novel or higher-performing than it actually is on average over multiple evaluations. As QD algorithms are elitist, they tend to maintain solutions that get such lucky outlier evaluations. This leads to final collections containing a majority of lucky sub-performing solutions instead of truly novel and high-performing ones [14, 15, 16, 17]. Instead, QD approaches designed for uncertain domains aim for high certainty in the descriptor and fitness estimates, i.e. estimates that are consistent across multiple evaluations of the same solution. Thus, Eq. 1 becomes:
\[\max_{\mathcal{A}}\left(\text{QDScore}(\mathcal{A})\right)=\max_{ \mathcal{A}}\sum_{i\in\mathcal{A}}\text{E}_{f_{i}\sim\mathcal{D}_{f}}\left[f_{ i}\right] \tag{2}\] \[\text{w.r.t}\quad\forall i\in\mathcal{A},\text{E}_{d_{i}\sim \mathcal{D}_{d}}\left[d_{i}\right]\in\text{cell}_{i}\]
### _Performance estimation and reproducibility_
To mitigate the impacts listed above, two problems should be addressed in uncertain domains:
* **Performance estimation:** getting a good estimate of the expected fitness \(\text{E}_{f_{i}\sim\mathcal{D}_{f}}\left[f_{i}\right]\) and expected descriptor \(\text{E}_{d_{i}\sim\mathcal{D}_{d}}\left[d_{i}\right]\) of solutions. Uncertain QD algorithms need specific mechanisms to estimate the fitness and descriptor distributions and infer meaningful expectations.
* **Reproducibility:** estimating for each solution the variance of the fitness \(\text{V}_{f_{i}\sim\mathcal{D}_{f}}\left[f_{i}\right]\) and the variance of each descriptor dimension \(\text{V}_{d_{i}^{j}\sim\mathcal{D}_{d^{j}}}\left[d_{i}^{j}\right]\). In uncertain domains, solutions with lower variance are more reproducible: they are more likely to get similar fitness and descriptor values from one evaluation to another. Reproducibility is a desirable property, and algorithms need specific mechanisms to, first, estimate solutions' reproducibility and, second, eventually favour it.
Not all uncertain domains present the reproducibility problem: all solutions might have the same variance. This depends on the properties of the source of uncertainty. For example, if the only uncertainty comes from a fitness sensor with Gaussian measurement noise, all solutions are identically impacted and there is no way to improve the reproducibility of solutions. On the contrary, if the uncertainty is in the initial joint positions of a locomotion robot, for example, not all controllers are equally impacted: hopping controllers are more likely to fall than controllers using the two legs equally if the wrong leg is initialised at a high position. Thus, most realistic scenarios suffer from this problem. Still, most Uncertain QD works consider domains with only the performance estimation problem.
### _Evaluation metrics in Uncertain QD setting_
Assessing the performance of QD algorithms usually relies on computing metrics such as QD-Score or Max-Fitness using fitness and descriptor values collected during training. In Uncertain QD, these training data cannot be trusted as they might be poor-quality estimates. Also, in most uncertain domains, the ground-truth values are not accessible, even to an expert. Thus, in the Uncertain QD setting, the evaluation metrics have to be approximated with a large number (hundreds) of samples. These samples do not count toward the sample efficiency of algorithms, as they are used for metric computation only.
Previous work in Uncertain QD [14, 15, 16, 17] uses as estimations the average fitness and descriptor of \(M=50\) reevaluations of each cell in the algorithm's archive. These estimations are then used to fill a new archive called the "Corrected Archive" that is used to compute metrics such as Coverage and QD-Score. For comparability, the \(M\) reevaluations of each cell are done by sampling each cell \(M\) times using its in-cell selector. In other words, for approaches that only return the best solution of each cell, the estimation is the average of \(M\) replications of the best solutions of each cell; and for approaches considering multiple solutions per-cell, the estimation is the average of \(M\) solutions sampled from the cell.
In this work, we also consider a Corrected Archive. However, relying on results from Doerr and Sutton in [24] as well as an experimental study provided in Appendix B, we choose to use the median over \(M\) replications instead of the average. We also performed a study to choose an appropriate value of \(M\) and decided to keep \(M=512\) (Appendix B).
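The metric-computation procedure can be sketched as follows, reusing the grid discretisation from the MAP-Elites sketch above; this covers the best-solution-per-cell case and assumes a noisy `evaluate` such as `uncertain_evaluate` (for approaches with several solutions per cell, the \(M\) samples would instead come from the in-cell selector):

```
import statistics

def corrected_archive(archive, evaluate, m=512):
    # Re-evaluate each stored solution m times and rebuild the archive
    # from the median fitness and per-dimension median descriptor.
    corrected = {}
    for x, _ in archive.values():
        samples = [evaluate(x) for _ in range(m)]
        f_med = statistics.median(f for f, _ in samples)
        n_dims = len(samples[0][1])
        d_med = tuple(statistics.median(d[j] for _, d in samples) for j in range(n_dims))
        c = tuple(min(max(int(dj * 10), 0), 9) for dj in d_med)
        if c not in corrected or f_med > corrected[c][1]:
            corrected[c] = (x, f_med)
    return corrected

# Corrected QD-Score:
# sum(f for _, f in corrected_archive(archive, uncertain_evaluate).values())
```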
## IV Previous work in Uncertain QD
### _Explicit-sampling and Implicit-sampling approaches in EA_
The literature in EA applied to uncertain domains distinguishes two main classes of algorithms: explicit sampling and implicit sampling approaches [18, 19]. Explicit-sampling approaches evaluate each solution multiple times to get a better estimate of their fitness distributions [25]. The number of evaluations might be fixed or chosen dynamically depending on the method [26, 27]. Implicit-sampling approaches rely on the common hypothesis that the fitness is continuous with respect to the search-space. Thanks to this property, neighbouring solutions can be used as samples to build distribution estimates, thus avoiding re-evaluations. Most of the time, implicit-sampling approaches consist of augmenting the size of the population, dynamically or not, to increase the density of solutions and get better distribution estimates [28]. Some implicit-sampling approaches also propose keeping archives of previously encountered solutions as estimates [29].
### _Previous QD approaches_
We apply the same classification to QD approaches, distinguishing explicit and implicit mechanisms to approximate the distribution of fitness and descriptor. To the best of our knowledge, three QD approaches based on MAP-Elites specifically target uncertain domains: two can be classified as explicit sampling and one as implicit sampling. We summarise the properties of these approaches as well as the new ones introduced in this paper in Table I.
#### IV-B1 MAP-Elites-sampling
The simplest way to use sampling in MAP-Elites is to set a fixed number of reevaluations \(N\) for every solution. Thus, any new offspring \(o\) is sampled \(N\) times and considered for addition to the grid using the average fitness \(\bar{f}_{o}=\frac{1}{N}\sum_{n=1}^{N}f_{o}^{n}\) and descriptor \(\bar{d}_{o}=\frac{1}{N}\sum_{n=1}^{N}d_{o}^{n}\). This approach has been used in previous works [30] and proven to greatly improve the performance of MAP-Elites in uncertain domains. However, it has three main disadvantages: first, it is highly sample-inefficient; second, it relies on a hyperparameter \(N\) that is critical to guarantee a meaningful estimate; and third, it improves the performance estimation but does not favour reproducible solutions (see Sec. III-C).
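In code, this fixed-\(N\) explicit sampling is a thin wrapper substituted for the single evaluation in the MAP-Elites sketch above; a minimal sketch (here \(N=8\) is an arbitrary choice):

```
def sampled_evaluate(evaluate, x, n=8):
    # Evaluate x n times and return the average fitness and descriptor.
    samples = [evaluate(x) for _ in range(n)]
    f_bar = sum(f for f, _ in samples) / n
    n_dims = len(samples[0][1])
    d_bar = tuple(sum(d[j] for _, d in samples) / n for j in range(n_dims))
    return f_bar, d_bar
```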
#### IV-B2 Adaptive sampling
To mitigate MAP-Elites-sampling's limitations, Justesen et al. [14] introduced Adaptive-Sampling. It uses new mechanisms to distribute samples more wisely across solutions, inspired by similar work in EAs [26]:
* First, solutions are considered for addition to the grid only if they are evaluated the same number of times as the elite already in their cell. If, before reaching this number of evaluations, they prove insufficiently promising, they are discarded. This improves sample efficiency.
* Second, if an elite in the grid is challenged by a new solution, it is immediately re-evaluated one more time, allowing the number of evaluations to be tuned dynamically. If, during this re-evaluation, the elite proves more likely to belong to another cell, it is moved to this new cell.
* Third, if an elite is evaluated too many times outside its main cell, it is discarded. This mechanism keeps solutions that prove reproducible in terms of descriptor (Sec. III-C).
* Fourth, to compensate for the loss of elites induced by the two last mechanisms, each cell keeps \(D\) elites. Thus, if one of the elites drifts to another cell or is discarded, it can be immediately replaced.
Adaptive-Sampling tackles both the performance estimation and the reproducibility problem, despite being mainly studied in domains where all solutions are equally reproducible. However, its main limitation is that it is sequential and thus not parallelisable. First, it requires evaluating each offspring the same number of times as the elite of its cell. This number differs between offspring and is likely to change during the process, as successive reevaluations might move the offspring to other cells. Second, it also requires reevaluating contested elites, modifying the repertoire during the reevaluation process. Thus, these mechanisms make Adaptive-sampling extremely slow in wall-clock time, regardless of its sample efficiency.
#### IV-B3 Deep-Grid
Deep-Grid MAP-Elites [15] is an alternative, implicit-sampling approach to Uncertain QD. It works by adding a depth to the MAP-Elites grid to keep \(D\) solutions per cell, so that each cell sub-population empirically estimates the distribution of the elite's fitness and descriptor. To this end, Deep-Grid modifies the grid-maintenance rules:
* Each cell contains \(D\) solutions, but the best-performing of these \(D\) solutions might have been lucky. Thus, the selection is modified to consider all depth-solutions as potential parents: first, a cell is selected uniformly, as in MAP-Elites; second, a solution is selected from the cell using the biased roulette wheel (fitness proportional).
* All \(D\) solutions in a cell are also assumed to be equally uncertain and should all be questionable, i.e. replaceable. While in MAP-Elites, a high-performing solution can only be replaced by an even higher-performing one; in Deep-Grid, it can be replaced by any solution. Thus, Deep-Grid adds all solutions to the grid. The convergence within the cell relies on the convergence of the population as a whole induced by the MAP-Elites loop.
These combined mechanisms allow the cells to be slowly populated with reproductions of reproducible solutions. This approach thus tackles both the performance estimation and the reproducibility problems. Its main limitation is its lack of elitism: due to its questioning mechanisms, Deep-Grid struggles to converge to the best-performing solutions.
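A minimal sketch of the two rules above, assuming cells are stored as lists of `(solution, fitness)` pairs of depth \(D\); shifting fitnesses before the roulette wheel (to handle negative values) and replacing a uniformly chosen incumbent when a cell is full are illustrative instantiations, not necessarily the exact rules of [15]:

```
import random

def deep_grid_select(grid):
    # First pick a non-empty cell uniformly, then a solution from that cell
    # with probability proportional to fitness (biased roulette wheel).
    cell = random.choice([c for c in grid.values() if c])
    fitnesses = [f for _, f in cell]
    shift = min(fitnesses)
    weights = [f - shift + 1e-9 for f in fitnesses]  # keep weights positive
    return random.choices(cell, weights=weights, k=1)[0][0]

def deep_grid_add(grid, cell_id, solution, fitness, depth=3):
    # Every offspring enters its cell; when the cell is full, any incumbent
    # can be replaced (here: a uniformly chosen one).
    cell = grid.setdefault(cell_id, [])
    if len(cell) < depth:
        cell.append((solution, fitness))
    else:
        cell[random.randrange(depth)] = (solution, fitness)
```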
#### IV-B4 Notes on gradient-augmented QD
Flageat et al. [17] have shown that gradient-augmented QD approaches such as MAP-Elites-ES [12] or PGA-MAP-Elites [11] are promising in the Uncertain QD setting. However, these gradient-augmented QD approaches do not specifically aim to tackle the Uncertain QD setting. Their performance in these domains only arises as a side effect of their strategy of modelling the environment. More importantly, approaches inspired by Reinforcement Learning, such as PGA-MAP-Elites, require additional assumptions on the domains as they rely on Markov Decision Processes (MDPs); and approaches based on approximated gradients such as MAP-Elites-ES require a very large number of evaluations to generate each offspring (MAP-Elites-ES uses \(10030\) evaluations per offspring). In this paper, we consider both MDP and non-MDP domains, as MDP tasks are only a small portion of the domains considered by QD. Additionally, we consider sample efficiency as a critical comparison criterion, and MAP-Elites-ES requires orders of magnitude more evaluations than the approaches compared in this paper. Thus, we chose to exclude these approaches from this study. As highlighted by Flageat et al. [17], these approaches remain promising for domains with large-dimensional search spaces where mutation-based QD approaches prove inefficient. We leave this research direction for future work.
## V New approaches for Uncertain QD in highly-parallelised context
Hardware-accelerated libraries require specific adaptation of existing algorithms to leverage their full parallelisation potential. Below we detail how MAP-Elites-sampling and Deep-Grid can be adapted to hardware-accelerated libraries. Furthermore, we introduce three novel approaches: Deep-Grid-sampling, Archive-sampling and Parallel-Adaptive-sampling. We also detail these algorithms in Table I.
### _MAP-Elites-sampling_
Using explicit sampling with MAP-Elites (Sec. IV-B1) does not require any adaptation to high-parallelisation: the samples required to evaluate any solution are fully independent and can be executed in parallel. Hardware-accelerated libraries make MAP-Elites-sampling a strong Uncertain QD competitor.
### _Deep-Grid_
Deep-Grid (Sec. IV-B3) mainly differs from MAP-Elites in that it adds depth to the grid. The addition and selection from this deep grid remain independent for each solution, making it fully parallelisable. However, the Deep-Grid addition mechanism, which adds all offspring to the archive, might be less efficient with a very large batch-size. First, if the batch-size is greater than the number of solutions in the archive (i.e. grid size times depth), a random subset of the offspring will systematically not be added to the grid, regardless of their fitness and descriptor, making Deep-Grid less sample-efficient. Second, Deep-Grid relies on the intuition that reproducible solutions slowly populate cells. A large batch-size might renew the full cell content of the grid at every generation, limiting the effect of this mechanism.
### _Deep-Grid-sampling (novel)_
As Deep-Grid's mechanisms do not scale well to large batch-size values, it seems that the available evaluations offered by hardware acceleration could be better employed. We thus propose Deep-Grid-sampling, a new algorithm that combines Deep-Grid with explicit sampling. In Deep-Grid-sampling, every offspring is evaluated \(N\) times to obtain a more accurate estimation of the descriptor \(d\) and the fitness \(f\), which are then used to place the offspring in the corresponding cell.
```
1: Initialise \(\mathcal{A}\)
2: repeat
3:   # Archive reevaluation
4:   \(x\leftarrow\) content of \(\mathcal{A}\)
5:   \(\mathcal{A}\leftarrow\) empty(\(\mathcal{A}\))
6:   \(f\), \(d\leftarrow\) reevaluations(\(x\), \(N=1\))
7:   \(\mathcal{A}\leftarrow\) add(\(\mathcal{A}\), \(x\), \(f\), \(d\))
8:   # Offspring generation
9:   \(o\leftarrow\) generate_offspring(\(\mathcal{A}\))
10:  \(f\), \(d\leftarrow\) reevaluations(\(o\), \(N=1\))
11:  \(\mathcal{A}\leftarrow\) add(\(\mathcal{A}\), \(o\), \(f\), \(d\))
12: until convergence of \(\mathcal{A}\)
```
**Algorithm 1** Archive-sampling (V-D)
### _Archive-sampling (novel)_
We propose a new approach for uncertain domains: Archive-sampling (Alg. 1). Archive-sampling aims to improve MAP-Elites-sampling by distributing samples more wisely: instead of sampling every offspring multiple times, Archive-sampling re-samples only solutions already in the archive, as they constitute the most promising solutions. Thus, at every generation of Archive-sampling, the archive is emptied, all solutions in it are reevaluated once, and then added back to the archive using their moving-averaged fitness and descriptor. This new archive then undergoes a usual MAP-Elites loop.
This periodic re-evaluation improves the knowledge of promising solutions; however, it might lead to drifting elites, i.e. elites moving to other cells. Archive-sampling thus stores the \(D\) best solutions per cell as a reservoir to avoid empty cells. For a given cell, offspring are added until the depth \(D\) is full; then, new offspring only replace lower-performing solutions. The selection mechanism within these cells stays the same: only the best solution can be selected as a parent. The depth \(D\) gives a second chance to unlucky solutions that have been evaluated with lower fitness than they could expect. Conversely, lucky solutions are rapidly re-evaluated and removed from the grid as necessary. Thus, Archive-sampling addresses the main limitation of MAP-Elites in uncertain domains: its elitism. While MAP-Elites never questions the content of the archive and thus favours lucky solutions, Archive-sampling incrementally improves its certainty about solutions' performance.
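A Python sketch of one Archive-sampling generation following Alg. 1; the dictionary-based data layout, the grid discretisation, and the `generate_offspring` helper are assumptions for illustration:

```
def archive_sampling_generation(archive, evaluate, generate_offspring, depth=3):
    # Archive reevaluation: empty the archive, re-sample every stored solution
    # once, and add it back using its moving-averaged fitness and descriptor.
    stored = [s for cell in archive.values() for s in cell]
    archive.clear()
    for s in stored:
        f, d = evaluate(s["x"])
        s["n"] += 1
        s["f"] += (f - s["f"]) / s["n"]                          # running mean
        s["d"] = [di + (dj - di) / s["n"] for di, dj in zip(s["d"], d)]
        add_to_depth_cell(archive, s, depth)
    # Offspring generation: a usual MAP-Elites step, one evaluation each.
    for x in generate_offspring(archive):
        f, d = evaluate(x)
        add_to_depth_cell(archive, {"x": x, "f": f, "d": list(d), "n": 1}, depth)

def add_to_depth_cell(archive, s, depth):
    # Keep the D best solutions per cell; only the best acts as a parent.
    c = tuple(min(max(int(di * 10), 0), 9) for di in s["d"])
    cell = archive.setdefault(c, [])
    cell.append(s)
    cell.sort(key=lambda t: t["f"], reverse=True)
    del cell[depth:]
```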
### _Parallel-Adaptive-sampling (novel)_
The Archive-sampling approach still has a major limitation: solutions in the archive will gradually become more certain as they get reevaluated; however, any newcomer offspring evaluated only once could replace them. In short, Archive-sampling compares solutions with different degrees of certainty. To overcome this, we propose a second algorithm: Parallel-Adaptive-sampling (Alg. 2). It relies on two mechanisms: (1) the archive-reevaluation mechanism (the same as in Archive-sampling) and (2) the offspring-reevaluation mechanism (introduced below). The offspring-reevaluation mechanism (2) proposes to evaluate all new offspring \(N\) times, with \(N\) chosen based on the number of reevaluations of solutions already in the repertoire. Based on initial experiments, we take \(N\) as the median number of evaluations of the solutions in the grid. Due to (1), solutions in the archive get more and more reevaluated, so (2) increases over time the number of samples spent on each offspring. Thus, compared solutions have the same order of magnitude of "certainty", addressing the main limitation of Archive-sampling.
Parallel-Adaptive-sampling can be seen as a version of Adaptive-sampling for hardware-accelerated libraries. (1) reevaluates all elites, approximating the way Adaptive-sampling reevaluates contested elites during the offspring evaluation process. (2) can be seen as a proxy for the sampling mechanism of Adaptive-sampling, the main difference being that the number of samples is not offspring-dependent.
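Reusing the archive layout of the Archive-sampling sketch above, the offspring-reevaluation mechanism reduces to choosing \(N\) from the archive before each batch (a sketch):

```
import statistics

def offspring_sample_count(archive):
    # N = median number of evaluations of the stored solutions, so that
    # offspring and elites are compared at similar levels of certainty.
    counts = [s["n"] for cell in archive.values() for s in cell]
    return max(1, round(statistics.median(counts))) if counts else 1
```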
## VI Sampling-size: an alternative for comparability
Lim et al. [20] studied the scalability of MAP-Elites to large batch-sizes (i.e. the number of offspring generated per generation). In MAP-Elites, each solution is evaluated once, so the batch-size is strictly equivalent to the number of evaluations per generation. However, this is not the case for the methods considered in this work, which tend to perform multiple evaluations per solution. We study in this paper how hardware-accelerated libraries can benefit QD in uncertain domains. As some algorithms perform multiple evaluations per offspring, we have to distinguish the total number of evaluations performed per generation from the number of evaluations performed for each offspring. For this reason, we introduce the **sampling-size**: the maximum allocated per-generation sampling budget. For example, for a sampling-size of \(1024\), MAP-Elites would spend one evaluation on each offspring, thus generating \(1024\) offspring solutions per generation. For the same sampling-size of \(1024\), a MAP-Elites-sampling approach spending \(8\) samples on each offspring would only generate \(128\) offspring per generation. Using the sampling-size, we can compare the performance of algorithms when given the same maximum number of samples per generation, making the comparison more meaningful.
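The resulting budget accounting is a one-line helper (matching the numerical example above; the `archive_size` term anticipates the deduction described in Sec. VII-B for archive-reevaluating approaches):

```
def offspring_per_generation(sampling_size, samples_per_offspring, archive_size=0):
    # Offspring count under a fixed per-generation sampling budget.
    return (sampling_size - archive_size) // samples_per_offspring

print(offspring_per_generation(1024, 1))  # plain MAP-Elites: 1024 offspring
print(offspring_per_generation(1024, 8))  # MAP-Elites-sampling with N=8: 128 offspring
```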
## VII Experimental setup
To facilitate later advances in Uncertain QD we open-source our comparison code at [https://github.com/adaptive-intelligent-robotics/Uncertain_Quality_Diversity](https://github.com/adaptive-intelligent-robotics/Uncertain_Quality_Diversity).
### _Tasks_
We perform our comparison on the task suite illustrated in Table II, taken from [15, 31, 10] and chosen for its diversity of controller architectures, noise structures, and complexity. Among these tasks, the Walker and Ant tasks are controlled in closed-loop and the Arm and the Hexapod in open-loop. Closed-loop controllers use Deep Neural Networks and get information on the current state, so they have the ability to output corrections to compensate for uncertainty. As a consequence, in the Walker and Ant tasks, where the noise is applied to the initialisation, controllers are not all equally reproducible. These tasks present both the issue of performance estimation and reproducibility (see Sec. III-C). On the contrary, open-loop controllers do not have access to the environment state, preventing them from compensating for uncertainty. Thus, the Arm and the Hexapod tasks are only subject to the performance estimation problem (see Sec. III-C).
### _Algorithms and baselines_
In the following, we compare the approaches from Sec. V as well as **MAP-Elites** and **MAP-Elites-Random**. MAP-Elites-Random generates random solutions at each generation and adds them to a MAP-Elites grid following the MAP-Elites addition condition. Among the approaches from Sec. V, Archive-sampling and Parallel-Adaptive-sampling reevaluate the entire content of the archive at each generation. For fairness, we count these evaluations as part of the sampling-size: we deduct the maximum number of solutions in the archive from their sampling-size. Thus, these two approaches are undefined for values of sampling-size lower than the maximum number of
solutions in the archive. We replicate each algorithm 10 times on each task for each sampling-size, for a total of 1120 runs. All approaches use CVT-shaped MAP-Elites archives [32].
### _Metrics_
Our experimental analysis is divided into two parts: first, we compare the performance of the considered approaches; second, we study their ability to address the two problems raised in Sec. III-C: Performance estimation and Reproducibility.
#### VII-C1 Comparison in uncertain domains
We quantify the performance of an algorithm using two metrics. First, the **Corrected QD-Score**, the QD-Score of the Corrected Archive described in Sec. III-D. Second, the **Time to convergence**, the wall-clock time, in seconds, needed to reach \(95\%\) of the final Corrected QD-Score value. We estimate this time on the same hardware for all algorithms and replications.
#### VII-C2 Performance estimation and Reproducibility
To quantify the ability of the algorithm to get good performance estimates, we use the **QD-Score loss** [17]: the difference between the QD-Score and the Corrected QD-Score, normalised by the QD-Score. In addition, to assess the algorithm's ability to keep solutions with reproducible descriptors, we propose a new metric: the **Reproducibility-Score**. To compute it, we first get the descriptor variance of the \(M\) reevaluations of each solution, normalised within each cell using the maximum observed variance. This normalisation accounts for the difference in descriptor variance across the descriptor space. The Reproducibility-Score is then computed as the sum over the archive of \(1-\text{normalised\_variance}\), to avoid penalising approaches that find more solutions. One can similarly consider the fitness and define the Fitness-Reproducibility-Score.
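A sketch of the Reproducibility-Score computation, assuming the \(M\) re-evaluated descriptors of each cell are available and that `max_var_per_cell` holds the per-cell normalisation constants (taken here to be the maximum observed variance for that cell, as described above; the exact normalisation protocol is an assumption):

```
import statistics

def reproducibility_score(desc_samples, max_var_per_cell):
    # desc_samples: {cell: [descriptor, ...]} with M re-evaluations per cell.
    # Returns the sum over the archive of 1 - normalised_variance.
    score = 0.0
    for cell, samples in desc_samples.items():
        n_dims = len(samples[0])
        var = sum(statistics.pvariance(d[j] for d in samples)
                  for j in range(n_dims)) / n_dims
        score += 1.0 - min(var / max_var_per_cell[cell], 1.0)
    return score
```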
### _Hyperparameters_
We run our experiments on Nvidia RTX 6000 GPUs with \(24GB\) of memory, each run being allocated to one device only. Most of the hyperparameters are common to all MAP-Elites variants, so we use the same values for all algorithms to guarantee comparability, provided in Appendix A. The algorithms from
Fig. 2: Pareto front for all approaches on all tasks. The x-axis is the Corrected QD-Score, quantifying the quality and diversity of the final collection; and the y-axis is the Time to convergence in minutes. For each approach, we represent increasing sampling-size as increasing marker-size. The dashed blue line gives the Pareto-front. As detailed in Sec. VII-B, Archive-sampling and Parallel-Adaptive-sampling are not defined for values of sampling-size lower than the number of solutions in the archive. Each approach is replicated 10 times for each sampling-size value; each point corresponds to the median over replications.
Sec. V require two categories of hyperparameters that are not defined for MAP-Elites: the number of samples \(N\) and the depth \(D\). For each algorithm, we chose their optimal values independently, based on a study provided in Appendix C. In this study, we test \(3\) possible values for each parameter, and choose the optimal values based on the comparison of \(5\) replications of each algorithm on the full task suite across every sampling-size value, for a total of \(1520\) runs.
## VIII Comparison in uncertain domains
To display the trade-off between performance and convergence speed, we represent the metrics from Sec. VII-C as a Pareto plot: the x-axis is the Corrected QD-Score, quantifying the quality and diversity of the final collection; and the y-axis is the Time to convergence. We also consider \(4\) sampling-size values: \(256,1024,4096,16384\) and represent increasing sampling-size as increasing marker-size. We display the plots in Fig. 2, with the dashed blue line giving the Pareto-front, and the corresponding archives in Fig. 3. We report p-values based on the Wilcoxon rank-sum test with Bonferroni correction.
Lim et al. [20] show that MAP-Elites' QD-Score scales with increasing batch-size. In the case of MAP-Elites, the sampling-size and batch-size are equivalent. Thus, we expect the Time to convergence (y-axis) to stay constant or decrease as the sampling-size (marker-size) increases. We observe that this is the case for sampling-sizes \(256\), \(1024\) and \(4096\). However, the Time to convergence increases again for \(16384\) in all tasks requiring a simulator: Hexapod, Walker and Ant. This indicates that for \(16384\), we reach the parallelisation plateau of our hardware and lose the time benefit of parallelisation. As we compare algorithms based on sampling-size, this limit is the same for all algorithms and so does not impact our conclusions.
### _Per-algorithm results_
As expected, the two baselines **MAP-Elites and MAP-Elites-Random** get lower QD-Score than Uncertain QD approaches (\(p<5.10^{-5}\) for Parallel-Adaptive-sampling and Archive-sampling and \(p<5.10^{-2}\) for Deep-Grid). Still, at least one of them appears on the Pareto front in all tasks, thanks to their convergence speed. Interestingly, MAP-Elites-Random gets better Corrected-QD-Score than MAP-Elites in the Walker task (\(p<1.10^{-2}\)). This illustrates the detrimental effect of uncertainty: when reevaluating the solutions kept by MAP-Elites, they prove to be sub-performing lucky solutions.
**MAP-Elites-sampling** surprisingly does not perform that well. It is even dominated by MAP-Elites for sampling-size \(256\) (\(p<5.10^{-2}\)), and equivalent in Hexapod and Ant. Its archives in these tasks (Fig. 3) are also smaller, indicating that it discovers less diverse solutions than other approaches. This may be explained by an early convergence due to a lack of exploration. MAP-Elites-sampling retains only solutions that are reproducible from the start. However, early solutions are likely not to have yet learned how to walk in a reproducible manner, but they still constitute important stepping stones. By preventing these new brittle solutions from being considered as parents, MAP-Elites-sampling hinders exploration. Previous work in EA applied to noisy domains already highlighted that static explicit-sampling might induce a lack of exploration [19].
Fig. 3: Corrected Archives of all approaches on all tasks for sampling-size \(16384\). We display one of the 10 replications, randomly chosen. Each cell represents one solution, with the colour representing the fitness (the brighter the better). The axes are the descriptor dimensions, defined in Table II.
Thus, in Uncertain QD, it seems more promising to add diverse solutions to the archive and later question them, rather than only retaining highly reproducible solutions from the start.
**Deep-Grid** is on the Pareto-front in three tasks out of four for almost all sampling-sizes, making it the best-performing approach. These results highlight the effectiveness, in the considered tasks, of using past solutions to approximate distributions. Deep-Grid only struggles in the Ant task, which appears to be the most challenging of the suite, as it is \(3D\) and closed-loop controlled while having the largest search-space. Flageat et al. [17] already highlighted that Deep-Grid mechanisms struggle more than sampling-based ones when applied to large-dimension search-spaces.
**Deep-Grid-sampling** is equivalent to Deep-Grid for large sampling-size values in Arm and Walker but gets a lower Corrected QD-Score for small ones (\(p<1.10^{-3}\)). As could be expected from MAP-Elites-sampling's results, Deep-Grid-sampling also worsens Deep-Grid's Corrected QD-Score in Hexapod (\(p<5.10^{-4}\)) and Ant (\(p<5.10^{-2}\)), where it likely prevents exploration, as can also be observed in the archives in Fig. 3. Interestingly, Deep-Grid-sampling converges to its final sub-optimal archive faster than every other approach in Ant with \(16384\) (\(p<5.10^{-4}\)), which corroborates the hypothesis of stuck exploration. Overall, Deep-Grid relies on filling cells with reproductions of similar solutions. Introducing sampling reduces the total number of solutions added to the cells at each generation, slowing down these mechanisms. Thus, Deep-Grid-sampling does not improve performance in tasks where Deep-Grid proves efficient, such as Arm or Hexapod.
**Archive-sampling and Parallel-Adaptive-sampling** get good results in Hexapod and Walker, although they are dominated by Deep-Grid. However, they outperform every other approach in Ant (\(p<5.10^{-4}\) for Corrected QD-Score), which, as highlighted earlier, constitutes the most challenging task. Archive-sampling and Parallel-Adaptive-sampling take the best of both worlds: they keep early, promising solutions, but also question solutions over time to ensure their performance. Thus, they leverage the available per-generation sampling budget by distributing samples wisely while allowing exploration and constantly questioning solutions, which proves promising in the Ant task. The additional offspring-reevaluation mechanism in Parallel-Adaptive-sampling allows it to get slightly higher Corrected QD-Score values than Archive-sampling, but at a higher time cost. Thus, neither of the two approaches dominates the other.
### _Conclusion_
Overall, from these results, Deep-Grid appears to be the optimal Uncertain QD approach in Arm, Hexapod and Walker, regardless of the sampling-size. It also proves optimal for small sampling-sizes in Ant. However, as soon as the sampling-size becomes large enough to afford Archive-sampling or Parallel-Adaptive-sampling in Ant, they become by far the most interesting approaches. Ant is the most challenging task of our benchmark: it is closed-loop, unlike the Hexapod, and \(3\)-dimensional, unlike the Walker. Thus, the good performance of Archive-sampling and Parallel-Adaptive-sampling on this task compared to all other approaches makes them promising, especially as they also get good results in Hexapod and Walker.
This study also demonstrates once again the limitations of MAP-Elites in Uncertain QD domains. It also highlights that the simple MAP-Elites-sampling algorithm is not necessarily a good approach in uncertain domains, as it seems to prevent exploration, corroborating previous results in EA [19].
## IX Analysis of performance estimation and Reproducibility
We display the metrics from Sec. VII-C in Fig. 4 and Fig. 5. We also give the Fitness-Reproducibility-Score results in Appendix D. All solutions in the Arm and Hexapod tasks are equally reproducible, as the uncertainty is Gaussian noise on the fitness and descriptor that is the same for all solutions. Thus, we do not display the Reproducibility-Score for these two tasks. We report p-values based on the Wilcoxon rank-sum test with Bonferroni correction.
### _Performance-estimation capability (QD-Score loss)_
The QD-Score loss quantifies the inaccuracy of the estimations of fitness and descriptor made by the algorithms. As it is a loss, algorithms should seek to minimise it.
Fig. 4: Comparison with respect to sampling-size values of (top) QD-Score Loss, quantifying the estimation capability of the algorithm, and (bottom) Reproducibility-Score, quantifying the reproducibility of solutions kept by the algorithm. All solutions in Arm and Hexapod are equally reproducible, so we do not display the Reproducibility-Score for these tasks. As detailed in Sec. VII-B, Archive-sampling and Parallel-Adaptive-sampling are not defined for values of sampling-size lower than the number of solutions in the archive. Each algorithm is replicated 10 times for each sampling-size value.
According to this metric, **MAP-Elites** is consistently the worst of the considered approaches. Even **MAP-Elites-Random** is significantly better (\(p<5.10^{-4}\)), except on the Ant task. MAP-Elites relies on improving existing solutions, which leads to repeatedly evaluating similar solutions until they get lucky evaluations. This is not the case with MAP-Elites-Random, which always considers new random solutions and thus, on average, better estimates their performance. All explicit-sampling approaches, **MAP-Elites-sampling, Archive-sampling** and **Parallel-Adaptive-sampling**, get equivalent loss values, which is consistent with the fact that they all rely on using averages over samples as estimates. However, the implicit-sampling approaches (i.e. the Deep-Grid approaches) get better approximations than explicit-sampling ones on open-loop tasks (\(p<1.10^{-3}\)), while their approximations are equivalent in Walker and Ant. **Deep-Grid** approximates the distribution within a cell using multiple solutions, while explicit-sampling approaches use the average of their samples. It seems that such an approximation works well for simple Gaussian noise distributions, as considered in the Arm and Hexapod, but quickly becomes intractable for more complex distributions, as in the Walker and Ant. Interestingly, **Deep-Grid-sampling**, which combines these two mechanisms, manages to get the lowest loss of all approaches (\(p<5.10^{-3}\), except for MAP-Elites-sampling in Ant where \(p<2.10^{-2}\)).
### _Reproducibility (Reproducibility-Score)_
We propose the Reproducibility-Score to quantify the reproducibility of solutions selected by each algorithm. Algorithms should seek to maximise it.
According to this metric, **MAP-Elites-Random**, and **MAP-Elites** find less reproducible solutions than every other approach. Engebraaten et al. [16] highlighted that MAP-Elites has a bias favouring solutions with large Descriptor-variance, as they tend to easily populate multiple cells and thus to better survive in the population. MAP-Elites-Random optimises solutions less than MAP-Elites and thus suffers less from this bias in the Walker task (\(p<5.10^{-2}\)), corroborating the results from the QD-Score Loss. However, in Ant, solutions with large variance have a higher chance of populating hard-to-reach cells, and would hardly be questioned as MAP-Elites-Random does not optimise solutions any further. Thus, MAP-Elites-Random selects less reproducible solutions than MAP-Elites in Ant (\(p<5.10^{-4}\)).
Sampling is an efficient way to counteract this bias, as it reduces the area that these large-descriptor-variance solutions can cover. This is what we observe in the results of **Archive-sampling** and **Parallel-Adaptive-sampling**, which find significantly more reproducible solutions than MAP-Elites and MAP-Elites-Random on both tasks (\(p<1.10^{-3}\)). However, MAP-Elites-sampling, which also relies on sampling, finds solutions that are only as reproducible as those of MAP-Elites and MAP-Elites-Random. Again, this seems to be due to its lack of exploration: MAP-Elites-sampling likely finds more-reproducible solutions, but fewer of them, giving an equivalent Reproducibility-Score.
Alternatively, the questioning mechanism of **Deep-Grid** also proves promising, as it finds more reproducible solutions than MAP-Elites and MAP-Elites-Random (\(p<2.10^{-2}\), except for Walker with high sampling-sizes). This mechanism replaces every solution in a cell with equal probability, slowly getting rid of solutions with large variance, as they do not end up repeatedly in the same cell. As observed with the QD-Score loss, **Deep-Grid-sampling** combines these two ideas and manages to find the most reproducible solutions in the Walker task (\(p<5.10^{-2}\) for sampling-size 256 and all approaches except Deep-Grid, and \(p<1.10^{-3}\) for all other sampling-sizes). However, it proves slightly worse in the more complex Ant, where it is outperformed by Archive-sampling and Parallel-Adaptive-sampling (\(p<1.10^{-2}\)).
### _Conclusion_
This section aims to better understand the underlying dynamics of the Uncertain QD approaches studied in this work. We show that Deep-Grid approximates the expected fitnesses and descriptors of solutions well in domains with simple noise structure. However, its distribution approximation seems to struggle with more complex noise structures. In addition, Deep-Grid tends to find less reproducible solutions than
Fig. 5: Per-cell Reproducibility-Score of all approaches for sampling-size \(16384\). We display one of the 10 replications, randomly chosen. We give the archives in Walker and Ant, but not in Arm and Hexapod as all solutions have the same variance. Each cell represents one solution, with the colour representing its Reproducibility-Score, which corresponds to \(1-\text{normalised\_variance}\) (the brighter the higher). The axes are the descriptor dimensions, defined in Table II.
Deep-Grid-sampling, Archive-sampling and Parallel-Adaptive-sampling. On the contrary, these three approaches succeed in finding highly reproducible solutions while getting consistently reliable fitness and descriptor estimates across tasks. However, the previous sections show that Deep-Grid-sampling underperforms in terms of QD-Score in the complex Ant and Walker tasks. Thus, this last point, combined with their good QD-Score, makes Archive-sampling and Parallel-Adaptive-sampling strong competitors in complex closed-loop Uncertain QD domains.
## X Conclusion and Discussion
In this work, we propose Uncertain QD, a framework that aims to formalise the problem of QD in uncertain domains, encountered multiple times across the QD literature [14, 15, 16, 17]. Uncertain QD is a special case of QD optimisation in which fitness and descriptors for each solution are no longer fixed values but rather distributions over possible values. Building on this first contribution, we also propose a new methodology to evaluate Uncertain QD approaches, composed of a set of tasks and metrics, as well as a metrics-computation procedure. As third and fourth contributions, we propose three new approaches: Archive-sampling, Parallel-Adaptive-sampling and Deep-Grid-sampling, and compare them to existing Uncertain QD approaches. We empirically show that the best-performing Uncertain QD approach for simple uncertain domains is Deep-Grid, as it proves fast and high-performing. However, we also demonstrate the benefit of Archive-sampling and Parallel-Adaptive-sampling for more complex applications. These approaches slightly underperform Deep-Grid on simple domains but exhibit competitive results on complex tasks. More importantly, they have the interesting property of selecting reproducible solutions, making them very promising for more realistic tasks. Our study displays once again the limitations of MAP-Elites in Uncertain QD tasks, but also demonstrates that simple sampling-based approaches are not that interesting in such settings, as they lead to a lack of exploration. We hope our framework and methods will constitute a meaningful benchmark for later works considering Uncertain QD.
## Acknowledgments
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/V006673/1 project REcoVER. We also want to thank the members of the Adaptive and Intelligent Robotics Lab at Imperial College London for their useful comments and in particular Jenny Zhang for her work on the Deep-Grid algorithm.
|
2308.15753 | GlassMessaging: Supporting Messaging Needs During Daily Activities Using
OST-HMDs | The act of communicating with others during routine daily tasks is both
common and intuitive for individuals. However, the hands- and eyes-engaged
nature of present digital messaging applications makes it difficult to message
someone amidst such activities. We introduce GlassMessaging, a messaging
application designed for Optical See-Through Head-Mounted Displays (OST-HMDs).
It facilitates messaging through both voice and manual inputs, catering to
situations where hands and eyes are preoccupied. GlassMessaging was iteratively
developed through a formative study identifying current messaging behaviors and
challenges in common multitasking with messaging scenarios | Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, Yanch Ong | 2023-08-30T04:32:50Z | http://arxiv.org/abs/2308.15753v1 | # _GlassMessaging_: Supporting Messaging Needs During Daily Activities Using OST-HMDs
###### Abstract
The act of communicating with others during routine daily tasks is both common and intuitive for individuals. However, the hands- and eyes-engaged nature of present digital messaging applications makes it difficult to message someone amidst such activities. We introduce _GlassMessaging_, a messaging application designed for Optical See-Through Head-Mounted Displays (OST-HMDs). It facilitates messaging through both voice and manual inputs, catering to situations where hands and eyes are preoccupied. _GlassMessaging_ was iteratively developed through a formative study identifying current messaging behaviors and challenges in common multitasking with messaging scenarios.
Keywords: Human-centered computing; Mixed / augmented reality; Empirical studies in ubiquitous and mobile computing.
obstructed the view, the color schemes were either too bright or too dark, and some elements were too small. These factors led to usability issues. To cater to hands-busy scenarios, we introduced **voice dictation** for text entry and **voice commands** for hands-free UI navigation. We also implemented a **ring mouse interaction** to allow for faster and more precise scrolling and selection (Kumar et al., 2018), while retaining **mid-air gestures** due to their "intuitive" touch-like content manipulation paradigm.
### Apparatus
We selected Microsoft HoloLens 2 (HL2), an OST-HMD with hand-tracking, voice commands, and world-scale positioning (2k resolution, 52° diagonal FoV), to develop _GlassMessaging_, our messaging app designed for OST-HMDs. A wireless ring mouse (Sanwa Supply 400-MA077) facilitated easy directional UI element selection (Figure 1). We developed _GlassMessaging_ using Unity 3D and the Mixed Reality Toolkit (MRTK 2.8), leveraging MRTK's built-in functions for mid-air gestures, voice inputs, the virtual keyboard, and content stabilization. To simulate a realistic messaging experience, we implemented a virtual chat server using Python, running on a tablet computer connected to the HL2 via Wi-Fi, enabling bi-directional communication through a socket connection between the client and the server (see [https://github.com/NUS-HCILab/GlassMessaging](https://github.com/NUS-HCILab/GlassMessaging)).
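The full implementation is available at the repository linked above; purely as a rough illustration of the relay pattern described (a Python server exchanging messages with HMD clients over a socket), the sketch below is ours. The port number, newline framing, and threading model are assumptions, not details taken from the paper.

```python
import socket
import threading

HOST, PORT = "0.0.0.0", 9000  # hypothetical address/port; the paper does not specify them

clients = []              # currently connected client sockets
lock = threading.Lock()

def handle(conn: socket.socket) -> None:
    """Relay newline-delimited messages from one client to all others."""
    try:
        buffer = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                with lock:
                    for other in clients:
                        if other is not conn:
                            other.sendall(line + b"\n")
    finally:
        with lock:
            clients.remove(conn)
        conn.close()

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        while True:
            conn, _ = server.accept()
            with lock:
                clients.append(conn)
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```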
### Interface Design
To enhance learnability and maintain consistency (Kumar et al., 2018) with familiar interfaces, we chose to modify the UIs of existing mobile _messaging_ apps and tailor them to OST-HMDs, instead of entirely redeveloping them. The final interface is shown in **Figure 1** after two iterations (see details at (Kumar et al., 2018)).
Figure 1. Steps for sending a message after receiving a notification. The user wears the OST-HMD and a ring mouse and sees the environment. The user **(1) says ‘SHOW CHAT’**, the interface is displayed, and a notification including the name of the sender and the sending time appears at the top of the view with a beep sound; **(2) says the name of the contact (e.g., ‘PETER’)**, and the system automatically navigates to the chat interface of the respective contact; **(3) says ‘VOICE MESSAGE’ and dictates the message via voice**. The system transcribes the user’s utterances to text in real-time, displayed in the text entry box. Once the user stops speaking for a measured amount of time (silence gap), dictation turns off automatically; **(4) says ‘SEND’**, and the system sends the message; **(5) says ‘HIDE CHAT’**, and the full interface is hidden, restoring the full vision of the environment.
#### 2.2.1. Visual interface (output)
The visual interface of _GlassMessaging_ (Figure 1) consists of four main UI panels, namely, notifications, contacts, chat messages, and voice/keyboard input panels. This allows users to receive notifications, select contacts, and compose/send messages using voice and keyboard input.
#### 2.2.2. Audio interface (Input-Output)
As depicted in Figure 1, users can interact with _GlassMessaging_ via voice commands (Table 1) to navigate the UI (e.g., 'SCROLL UP', 'SCROLL TO TOP') and dictate text (using 'VOICE MESSAGE'). Audio feedback (e.g., beeps) accompanies some input interactions.
When the app is not in dictation mode, voice commands can directly activate various functionalities, such as opening notifications ('OPEN NOTIFICATION'), selecting contacts ('<NAME>'), sending the message ('SEND'), and hiding the interface ('HIDE CHAT'). Voice shortcuts such as 'TEXT <NAME>' are also available, which combine '<NAME>' and 'VOICE MESSAGE' for direct text entry. Similarly, the 'REPLY' command opens the notification and begins dictation for a reply immediately.
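As a loose illustration of how such a command grammar can be routed, the sketch below maps transcribed utterances to actions. The command phrases follow the description above, but the handler names, stub `App` class, and contact list are hypothetical:

```python
class App:
    """Stub standing in for the GlassMessaging UI layer (illustrative only)."""
    def __getattr__(self, name):
        return lambda *args: print(name, *args)

CONTACTS = {"PETER", "MARY"}  # hypothetical contact names

def dispatch(utterance: str, app: App) -> None:
    """Map a transcribed utterance to a UI action when not in dictation mode."""
    u = utterance.strip().upper()
    if u == "SHOW CHAT":
        app.show_interface()
    elif u == "HIDE CHAT":
        app.hide_interface()
    elif u == "SEND":
        app.send_current_message()
    elif u == "VOICE MESSAGE":
        app.start_dictation()
    elif u == "REPLY":                       # open latest notification, then dictate
        app.open_latest_notification()
        app.start_dictation()
    elif u.startswith("TEXT ") and u[5:] in CONTACTS:
        app.open_chat(u[5:])                 # 'TEXT <NAME>' = '<NAME>' + 'VOICE MESSAGE'
        app.start_dictation()
    elif u in CONTACTS:
        app.open_chat(u)

dispatch("TEXT PETER", App())
```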
#### 2.2.3. Manual-input interface (Input)
_GlassMessaging_ supports two manual input methods: a wearable ring-mouse and mid-air hand gestures as shown in Table 1.
_Ring mouse:_ The user can scroll through the contact list using the ring mouse's 'up' and 'down' buttons. The 'right' button toggles between input modalities and selects the send button. The 'center' button activates the selected virtual button and serves as a long-press toggle to hide/reveal the entire interface.
_Mid-air interaction:_ The visual interface can be interacted with through mid-air gestures. The contact list can be scrolled by swiping, and a contact's chat can be opened by pressing their virtual icon. The input modality is chosen by selecting the corresponding virtual buttons (voice or keyboard). Pressing on a notification opens the chat with the sender.
## 3. Evaluation
To assess the effectiveness of _GlassMessaging_, we compared it to the Telegram application on mobile phones in a controlled study set in daily multitasking situations. Our findings (Han et al., 2017) indicate that, even with the present technological constraints of the OST-HMD platform, _GlassMessaging_ provided enhanced voice input access and enabled smoother interactions than phones. This resulted in a 33.1% reduction in response time and a 40.3% increase in texting speed. These findings underscore the significant potential of OST-HMDs as a meaningful complement to mobile phone-based messaging in multitasking scenarios.
However, there are several challenges to overcome before fully harnessing this platform's potential. For example, the use of _GlassMessaging_ resulted in a 2.5% drop in texting accuracy, especially with complex texts. Moreover, current OST-HMDs have some inherent downsides (e.g., rudimentary hardware capabilities, unfamiliarity, limited interactions (Boward et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017)) when contrasted with the mature and extensively tested mobile phones currently available.
## 4. Conclusion and Future Work
While multitasking with messaging is a frequent real-life activity, current mobile applications and platforms fall short in providing adequate support. We pinpointed two primary situational impediments (i.e., hands-busy and eyes-busy) arising from existing mobile platforms, which drove us to iteratively develop _GlassMessaging_, a messaging application tailored for OST-HMDs to address these shortcomings. We envision messaging on OST-HMDs as the forthcoming communication frontier, acting as a valuable adjunct to mobile phones during multitasking and driven forward by technological progress. To realize this vision, it is essential to re-conceptualize communication interfaces that align
with OST-HMD affordances and to devise strategies to overcome potential situational challenges (e.g., privacy and social concerns with voice).
## Acknowledgments
We thank the volunteers who participated in our studies.
This research is supported by the National Research Foundation, Singapore, under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-016). It is also supported in part by the Ministry of Education, Singapore, under its MOE Academic Research Fund Tier 2 programme (MOE-T2EP20221-0010), and by a research grant #22-5913-A0001 from the Ministry of Education of Singapore. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the Ministry of Education, Singapore.
|
2305.15977 | Electronic structure and X-ray magnetic circular dichroism in the MAX
phases T$_2$AlC (T=Ti and Cr) from first principles | We study the electronic and magnetic properties of T$_2$AlC (T=Ti and Cr)
compounds in the density-functional theory using the generalized gradient
approximation (GGA) with consideration of strong Coulomb correlations (GGA+$U$)
in the framework of the fully relativistic spin-polarized Dirac linear
muffin-tin orbital (LMTO) band-structure method. The X-ray absorption spectra
and X-ray magnetic circular dichroism (XMCD) at the Cr $L_{2,3}$ and Cr, Ti,
and C $K$ edges were investigated theoretically. The calculated results are in
good agreement with experimental data. The effect of the electric quadrupole
$E_2$ and magnetic dipole $M_1$ transitions at the Cr $K$ edge has been
investigated. | L. V. Bekenov, S. V. Moklyak, B. F. Zhuravlev, Yu. N. Kucherenko, V. N. Antonov | 2023-05-25T12:14:51Z | http://arxiv.org/abs/2305.15977v1 | [
###### Abstract
We study the electronic and magnetic properties of T\({}_{2}\)AlC (T=Ti and Cr) compounds within density-functional theory using the generalized gradient approximation (GGA) with consideration of strong Coulomb correlations (GGA+\(U\)) in the framework of the fully relativistic spin-polarized Dirac linear muffin-tin orbital (LMTO) band-structure method. The X-ray absorption spectra and X-ray magnetic circular dichroism (XMCD) at the Cr \(L_{2,3}\) and Cr, Ti, and C \(K\) edges were investigated theoretically. The calculated results are in good agreement with experimental data. The effect of the electric quadrupole \(E_{2}\) and magnetic dipole \(M_{1}\) transitions at the Cr \(K\) edge has been investigated.
Keywords: electronic structure, X-ray absorption, X-ray magnetic circular dichroism, MAX phases
DOI: 10.5488/CMP.26.23706, vol. 26, 23706, pp. 1-16
L. V. Bekenov, S. V. Moklyak, B. F. Zhuravlev, Yu. N. Kucherenko, V. N. Antonov +
G. V. Kurdyumov Institute for Metal Physics of the NAS of Ukraine, 36 Academician Vernadsky Boulevard, UA-03142 Kyiv, Ukraine
Footnote †: This work is licensed under a Creative Commons Attribution 4.0 International License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
## 1 Introduction
The M\({}_{n+1}\)AX\({}_{n}\) (MAX) phases are layered hexagonal compounds, in which close-packed layers of M (early transition metals) are interleaved with layers of a group A element (mostly IIIA and IVA), with the X-atoms (C and/or N) filling the octahedral sites between the M layers. Depending on the stoichiometry, one gets three different kinds of crystal structures classified as 211 (\(n=1\)), 312 (\(n=2\)) and 413 (\(n=3\)), all being described by the \(P6_{3}/mmc\) space group [1]. These carbides and nitrides have received increasing interest in recent years due to their important properties, such as low density, high elastic stiffness, thermal-shock and high-temperature oxidation resistance, good thermal and electrical conductivities, damage tolerance, and easy machinability [2, 3, 4, 5, 6, 7]. These unusual properties make these compounds highly promising candidates for diverse applications in high-temperature oxidizing environments. Potential applications of the nanolaminated compounds mentioned to date range from spintronics to refrigeration, even though the research efforts have so far been focused solely on the discovery of new magnetic phases and compositions, and fundamentals of magnetic properties. However, individual properties vary from phase to phase and no systematic study of MAX phases is available in the literature. Up to now, more than 70 MAX compounds have been synthesized [8, 9], and the search is still going on. Moreover, an increasing number of scientists attempt to synthesize magnetic MAX phases [10] because magnetic MAX phases exhibit a combination of magnetic, metallic, and ceramic-like properties [11, 12], which are fascinating from the standpoints of both fundamental science and technological applications. For example, substantial progress could be made in magnetic multilayers for magnetic recording and data storage if strong magnetism can be found in MAX phases, which are inherently nanolaminated compounds, instead of the presently used artificially grown sandwich materials.
The ternary carbides have the properties of both metals and ceramics. Like metals, they are good thermal and electrical conductors. They are relatively soft with Vickers hardness of about 2-5 GPa. Like ceramics, they are elastically stiff. Some of them (e.g., Ti\({}_{3}\)SiC\({}_{2}\)) also exhibit excellent high temperature mechanical properties. They are resistant to thermal shock and show unusually good damage
tolerance and exhibit excellent corrosion resistance. Above all, unlike the traditional carbides, they can be machined by conventional tools without lubricant, which makes them more technologically important for applications [6, 13]. These excellent properties make the MAX phases another class of technically important materials. Brushes in electric motors are another application of these materials. The numerous applications of this class of materials motivate us to seek a better understanding of their electronic and mechanical properties.
Of special interest to this work are the Ti\({}_{2}\)AlC and Cr\({}_{2}\)AlC carbides. The Ti\({}_{2}\)AlC phase was first synthesized by Nowotny and co-workers [14], but its physical properties were not characterized until quite recently [2, 14, 15, 16]. The electronic structure of Ti\({}_{2}\)AlC was calculated a number of times [17, 18, 19]. The qualitative agreement between the first and last studies is good. The ternary nanolaminated compound Cr\({}_{2}\)AlC was discovered in 1963 by Jeitschko et al. [20]. Since that time it has been the subject of numerous studies, both experimentally [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] and theoretically [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45].
In spite of many experimental and theoretical efforts to investigate the electronic structure and physical properties of the ternary MAX phases, there are still many contradictions and discrepancies among them. Many questions still remain with respect to magnetism in the ternary nanolaminated compounds (see the review paper by Ingason et al. [1] and references therein). Schneider et al. [22] calculated the difference in cohesive energy per formula unit (f.u.) of the ferromagnetic (FM) and antiferromagnetic (AFM) configurations with reference to the nonmagnetic (NM) configuration. Based on the small total energy difference between the AFM and NM configurations as well as the comparatively small local magnetic moment (0.7 \(\mu_{\rm B}\)/Cr atom), the authors speculated that magnetism may be suppressed in Cr\({}_{2}\)AlC. Sun et al. [33] also performed total energy calculations for Cr\({}_{2}\)AlC both at AFM and NM polarized states. They found that the energy difference between the two states was around 0.3 meV. Therefore, they claimed a nonmagnetic ground state in Cr\({}_{2}\)AlC. This is in agreement with the first experimental result attempting to address this point based on nuclear magnetic resonance (NMR) spectroscopy [25]. In this study, Lue et al. show, from the temperature dependence of the \({}^{27}\)Al central lines between room temperature and 77 K, that Cr\({}_{2}\)AlC is NM in this temperature range. Ramzan et al. [43] presented the electronic and structural properties of Cr\({}_{2}\)AlC calculated by first-principles GGA and GGA+\(U\) calculations. They show that for the GGA exchange-correlation potential, the ground state of Cr\({}_{2}\)AlC is not spin-polarized, with vanishing magnetic moments on the Cr atoms. In this case, the agreement with experiments for the equilibrium structure and the bulk modulus is not satisfactory. On the other hand, using the GGA+\(U\) approximation (with Hubbard \(U=1.95\) eV and exchange Hund coupling \(J=0.95\) eV), the calculated ground state corresponds to a FM ordering. In this case, the agreement with experiments was found to be excellent for the lattice constants and bulk modulus. However, for a larger value of \(U\) (2.95 eV), the ground state magnetic order is modified, from FM (for \(U=1.95\) eV) to AFM (for \(U=2.95\) eV), and the calculated equilibrium volume was in strong disagreement with experiments. The magnitude of the magnetic moments strongly depends on the Hubbard \(U\). Dahlqvist et al. [44, 11] state that the magnetic ground state of Cr\({}_{2}\)AlC is in-plane AFM. By comparing GGA and GGA+\(U\) results with experimental data, they found that using a \(U_{\rm eff}=U-J\) value larger than 1 eV results in structural parameters deviating strongly from experimentally observed values. The spin magnetic moments at the Cr site \(M_{s}^{\rm Cr}\) were found to be equal to 0.7 \(\mu_{\rm B}\)[11] and 1.37 \(\mu_{\rm B}\)[44] for the GGA and GGA+\(U\) (for \(U_{\rm eff}=1\) eV), respectively. The authors concluded that this class of Cr-based carbide MAX phases cannot be considered as strongly correlated systems since both GGA and GGA+\(U\) with \(U_{\rm eff}\leqslant 1\) eV give calculated lattice parameters and bulk modulus close to experimentally reported values, if low-energy in-plane AFM magnetic states are considered. For larger values of the \(U\) parameter (\(U_{\rm eff}>1\) eV), the structural parameters deviate strongly from experimentally observed values.
Soft X-ray absorption and magnetic circular dichroism in T\({}_{2}\)AlC were measured by several authors [32, 37, 27]. Recent X-ray magnetic circular dichroism experiments performed by Jaouen et al. [32] demonstrate that Cr atoms carry a net magnetic moment in Cr\({}_{2}\)AlC ternary phase along the \(c\) axis. The spin magnetic moment at the Cr site was found to be extremely small \(M_{s}^{\rm Cr}=0.05\)\(\mu_{\rm B}\), which is much smaller than the predicted values for the FM [22, 43] as well as the AFM [11, 44] solutions. SQUID experiments by Jaouen et al. [45] also produce an extremely small Cr spin magnetic moment of 0.002 \(\mu_{\rm B}\) in Cr\({}_{2}\)AlC.
The aim of this paper is the theoretical study from the "first principles" of the electronic and magnetic structures and the XAS and XMCD in T\({}_{2}\)AlC (T=Ti and Cr) carbides. The energy band structure of T\({}_{2}\)AlC
(T=Ti and Cr) carbides is calculated within the _ab initio_ approach taking into account strong electron correlations by applying a local spin-density approximation to the density functional theory supplemented by a Hubbard \(U\) term (GGA+\(U\)) [46]. The paper is organized as follows. The computational details are presented in section 2. Section 3 presents the electronic structure, XAS and XMCD spectra of T\({}_{2}\)AlC (T=Ti and Cr) carbides at the Cr \(L_{2,3}\) and Cr, Ti, and C \(K\) edges calculated in the GGA+\(U\) approximation. Theoretical results are compared to the experimental measurements. Finally, the results are summarized in section 5.
## 2 Crystal structure and computational details
### X-ray magnetic circular dichroism
Magneto-optical (MO) effects refer to various changes in the polarization state of light upon interaction with materials possessing a net magnetic moment, including rotation of the plane of linearly polarized light (Faraday, Kerr rotation), and the complementary differential absorption of the left-hand and right-hand circularly polarized light (circular dichroism). In the near visible spectral range, these effects result from excitation of electrons in the conduction band. Near X-ray absorption edges, or resonances, magneto-optical effects can be enhanced by transitions from well-defined atomic core levels to transition symmetry selected valence states.
Within the one-particle approximation, the absorption coefficient \(\mu_{j}^{\lambda}(\omega)\) for incident X-ray polarization \(\lambda\) and photon energy \(\hbar\omega\) can be determined as the probability of electronic transitions from initial core states with the total angular momentum \(j\) to final unoccupied Bloch states
\[\mu_{j}^{\lambda}(\omega) = \sum_{m_{j}}\sum_{m\mathbf{k}}|\langle\Psi_{n\mathbf{k}}|\Pi_{ \lambda}|\Psi_{jm_{j}}\rangle|^{2}\delta(E_{n\mathbf{k}}-E_{jm_{j}}-\hbar \omega)\theta(E_{n\mathbf{k}}-E_{\mathrm{F}}), \tag{2.1}\]
where \(\Psi_{jm_{j}}\) and \(E_{jm_{j}}\) are the wave function and the energy of a core state with the projection of the total angular momentum \(m_{j}\); \(\Psi_{n\mathbf{k}}\) and \(E_{n\mathbf{k}}\) are the wave function and the energy of a valence state in the \(n\)-th band with the wave vector \(\mathbf{k}\); \(E_{\mathrm{F}}\) is the Fermi energy.
\(\Pi_{\lambda}\) is the electron-photon interaction operator in the dipole approximation
\[\Pi_{\lambda}=-e\boldsymbol{\alpha}\mathbf{a}_{\lambda}, \tag{2.2}\]
where \(\boldsymbol{\alpha}\) are the Dirac matrices and \(\mathbf{a}_{\lambda}\) is the \(\lambda\) polarization unit vector of the photon vector potential, with \(a_{\pm}=1/\sqrt{2}(1,\pm\mathrm{i},0)\), \(a_{\parallel}=(0,0,1)\). Here, "+" and "-" denote, respectively, left-hand and right-hand circular photon polarizations with respect to the magnetization direction in the solid. Then, X-ray magnetic circular and linear dichroism are given by \(\mu_{+}-\mu_{-}\) and \(\mu_{\parallel}-(\mu_{+}+\mu_{-})/2\), respectively. More detailed expressions of the matrix elements in the electric dipole approximation may be found in references [47, 48, 49]. The matrix elements due to magnetic dipole and electric quadrupole corrections are presented in reference [49].
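In code form, the two dichroic signals just defined reduce to simple differences of the polarization-resolved absorption coefficients; the sketch below is ours, and the toy arrays are purely illustrative:

```python
import numpy as np

def dichroism(mu_plus, mu_minus, mu_par):
    """XMCD = mu(+) - mu(-); XMLD = mu(par) - (mu(+) + mu(-)) / 2,
    following the definitions in the text above."""
    xmcd = mu_plus - mu_minus
    xmld = mu_par - 0.5 * (mu_plus + mu_minus)
    return xmcd, xmld

# Toy polarization-resolved spectra on an arbitrary energy grid (assumed values).
mu_p = np.array([0.10, 0.55, 0.90, 0.40])
mu_m = np.array([0.12, 0.50, 0.80, 0.42])
mu_0 = np.array([0.11, 0.54, 0.83, 0.41])
print(dichroism(mu_p, mu_m, mu_0))
```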
### General properties of spin density waves
The magnetic configuration of an incommensurate spin spiral shows the magnetic moments of certain atomic planes varying in direction. The variation has a well-defined period determined by a wave vector \(\boldsymbol{q}\). When the magnetic moment is confined to the lattice sites, the magnetization \(\boldsymbol{M}\) varies as [50]
\[\boldsymbol{M}(\boldsymbol{r}_{n})=m_{n}\left[\begin{array}{c}\cos( \boldsymbol{qr}_{n}+\phi_{n})\sin(\theta_{n})\\ \sin(\boldsymbol{qr}_{n}+\phi_{n})\sin(\theta_{n})\\ \cos(\theta_{n})\end{array}\right], \tag{2.3}\]
where the polar coordinates are used and \(m_{n}\) is the magnetic moment of atom \(n\) with a phase \(\phi_{n}\) at the position \(\boldsymbol{r}_{n}\). Here, we consider only planar spirals, that is, \(\theta_{n}=\pi/2\) which also give the minimum of the total energy. The magnetization of equation (2.3) is not translationally invariant but transforms as
\[\boldsymbol{M}(\boldsymbol{r+R})=D(\boldsymbol{q}\boldsymbol{R})\boldsymbol{M }(\boldsymbol{r}), \tag{2.4}\]
where \(\mathbf{R}\) is a lattice translation and \(D\) is a rotation around the \(z\) axis. A spin spiral with a magnetization at a general point \(\mathbf{r}\) in space can be defined as a magnetic configuration which transforms according to equation (2.4). Since the spin spiral describes a spatially rotating magnetization, it can be correlated with a frozen magnon.
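As a small numerical illustration of equation (2.3) for a planar spiral (\(\theta_{n}=\pi/2\)), the following sketch evaluates the moment directions along a chain; the wave vector, chain length, and phase are arbitrary choices of ours, not parameters from the paper:

```python
import numpy as np

# Illustrative parameters: a 10-site chain along x with an assumed spiral wave vector q.
q = np.array([0.25 * 2.0 * np.pi, 0.0, 0.0])
positions = np.array([[float(n), 0.0, 0.0] for n in range(10)])
m, phi, theta = 1.0, 0.0, np.pi / 2.0  # planar spiral: theta = pi/2

# Equation (2.3): the moment rotates in the xy plane with period set by q.
moments = np.array([
    m * np.array([np.cos(q @ r + phi) * np.sin(theta),
                  np.sin(q @ r + phi) * np.sin(theta),
                  np.cos(theta)])
    for r in positions
])
print(np.round(moments, 3))
```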
Since the spin spiral breaks the translational symmetry, the Bloch theorem is no longer valid. Computationally, one should use large super-cells to obtain the total energy of spin spirals. However, when the spin-orbit interaction is neglected, spins are decoupled from the lattice and only the relative orientation of the magnetic moments is important. Then, one can define the generalized translations which contain translations in the real space and rotations in the spin space [51]. These generalized translations leave the magnetic structure invariant and lead to a generalized Bloch theorem. Therefore, the Bloch spinors can still be characterized by a \(\mathbf{k}\) vector in the Brillouin zone, and can be written as
\[\psi_{k}(\mathbf{r})=\mathrm{e}^{\mathrm{i}\mathbf{k}\mathbf{r}}\left(\begin{array}{c}\mathrm{e}^{-\mathrm{i}\mathbf{q}\mathbf{r}/2}u_{k}(\mathbf{r})\\ \mathrm{e}^{+\mathrm{i}\mathbf{q}\mathbf{r}/2}d_{k}(\mathbf{r})\end{array}\right). \tag{2.5}\]
The functions \(u_{k}(\mathbf{r})\) and \(d_{k}(\mathbf{r})\) are invariant with respect to lattice translations having the same role as for normal Bloch functions. Due to this generalized Bloch theorem, the spin spirals can be studied within the chemical unit cell and no large super-cells are needed. Though the chemical unit cell can be used, the presence of the spin spiral lowers the symmetry of the system. There remain only the space-group operations that leave invariant the wave vector of the spiral. Considering the general spin space groups, i.e., taking the spin rotations into account, the space-group operations which reverse the spiral vector together with a spin rotation of \(\pi\) around the \(x\) axis are symmetry operations [51].
Though the original formulation of the local-spin-density approximation of the density-functional theory permitted the noncollinear magnetic order, first-principles calculations for this aspect have begun only recently (for a review, see reference [52]). One application is the study of noncollinear ground states, for example, in \(\gamma\)-Fe (references [53, 54, 55]) or in frustrated antiferromagnets [56, 57]. In addition, the noncollinear formulation enables the studies of finite-temperature properties of magnetic materials. Since the dominant magnetic excitations at low temperatures are spin waves which are noncollinear by nature, it is possible to determine the magnon spectra and ultimately the Curie temperature from first principles [58, 59, 60, 61, 62]. The noncollinear magnetic configurations were investigated in the Heusler alloys Ni\({}_{2}\)MnGa, Ni\({}_{2}\)MnAl [50], IrMnAl [63], Mn\({}_{3}\)ZnC [64] and Mn\({}_{3}\)CuN [65]. The total energies for different spin spirals were calculated and the ground-state magnetic structures were identified.
### Crystal structure
The M\({}_{2}\)AX unit cell (the space group is \(P6_{3}/mmc\) No. 194) with two formula units per unit cell is shown in figure 1 with Wyckoff positions: M (\(4f\)), A (\(2d\)), and X (\(2a\)). The coordination of the M is trigonal prismatic, while that of X is octahedral. The structure possesses the layer stacking sequence of M and A atoms along the [0001] direction consisting of M\({}_{2}\)X slabs and intercalation of planar packed A-ions. The layered stacking characteristics can be clearly illustrated in the (\(1\bar{2}10\)) plane, as displayed in figure 1 (b). The X atoms occupy the interstitial sites of M octahedra [see figure 1 (c)]. The crystal structure has one free internal parameter, \(Z_{M}\), which defines the height of the M atoms above the X sheets. The dimensionless crystallographic coordinate is \(z_{M}=Z_{M}/c\). At the ideal value of \(z_{M}=1/12\sim 0.0833\), the M and A planes are evenly spaced. The parameter \(z_{M}\) equals \(0.086\) and \(0.083\) for Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC, respectively.
### Calculation details
The details of the computational method are described in our previous papers [66, 67, 68, 69] and here we only mention several aspects. Band-structure calculations were performed using the fully relativistic linear muffin-tin orbital (LMTO) method [48, 70]. This implementation of the LMTO method uses four-component basis functions constructed by solving the Dirac equation inside an atomic sphere [71]. The exchange-correlation functional of a GGA-type was used in the version of Perdew, Burke and Ernzerhof (PBE) [72]. Brillouin zone (BZ) integration was performed using the improved tetrahedron method [73]. The basis consisted of Cr and Ti \(s\), \(p\), \(d\), and \(f\) and Al and C \(s\), \(p\), and \(d\) LMTO's.
We found that the agreement between the theoretically calculated and experimentally measured XAS and XMCD spectra became much better with taking into account strong Coulomb correlations. To include the electron-electron correlations into the consideration, we used the "relativistic" generalization of the rotationally invariant version of the LSDA+\(U\) method [74] which takes into account SO coupling so that the occupation matrix of localized electrons becomes non-diagonal in spin indexes. This method is described in detail in our previous paper [74] including the procedure to calculate the screened Coulomb \(U\) and exchange \(J\) integrals, as well as the Slater integrals \(F^{2}\), \(F^{4}\), and \(F^{6}\).
The screened Coulomb \(U\) and exchange Hund coupling \(J_{H}\) integrals enter the LSDA+\(U\) energy functional as external parameters and should be determined independently. These parameters can be determined from supercell LSDA calculations using Slater's transition state technique [75, 76], from constrained LSDA calculations (cLSDA) [77, 78, 79, 80] or from the constrained random-phase approximation (cRPA) scheme [81]. Subsequently, a combined cLSDA and cRPA method was also proposed [82]. The cLSDA calculations produce \(J_{H}=0.95\) eV for the Cr in Cr\({}_{2}\)AlC. It is known that the cRPA method underestimates the values of \(U\) in some cases [83]. On the other hand, the cLSDA method produces too large values of \(U\)[84]. Therefore, in our calculations we treated the Hubbard \(U\) as an external parameter and varied the effective Hubbard \(U_{\rm eff}=U-J_{H}\) from 0 to 3.0 eV. In the case of \(U_{\rm eff}=U-J=0\), the effect of the GGA+\(U\) comes from non-spherical terms which are determined by the \(F^{2}\) and \(F^{4}\) Slater integrals. This approach is similar to the orbital polarization (OP) corrections [85, 86, 87, 88]. Therefore, we use the notation GGA+OP throughout the paper for the \(U_{\rm eff}=U-J=0\) eV approach.
The X-ray absorption and dichroism spectra were calculated taking into account the exchange splitting of the core levels. The finite lifetime of a core hole was taken into account by folding the spectra with a Lorentzian. The widths of the core levels \(\Gamma_{L_{2,3}}\) for Cr and \(\Gamma_{K}\) for Cr, Ti, and C were taken from reference [89]. The finite experimental resolution of the spectrometer was taken into account by a Gaussian of width 0.6 eV.
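Schematically, the two broadening steps amount to successive convolutions; the sketch below is our illustration only. The energy grid, peak positions, the truncation of the Lorentzian tails, and the reading of the 0.6 eV Gaussian width as a FWHM are assumptions, while the actual calculations use the tabulated core-level widths of reference [89].

```python
import numpy as np

def lorentzian_broaden(energy, spectrum, gamma):
    """Fold a raw spectrum with a Lorentzian of HWHM gamma (core-hole lifetime).
    The kernel is truncated at +/- 10*gamma, which is adequate for a sketch."""
    de = energy[1] - energy[0]
    kernel_e = np.arange(-10 * gamma, 10 * gamma + de, de)
    kernel = (gamma / np.pi) / (kernel_e**2 + gamma**2)
    return np.convolve(spectrum, kernel, mode="same") * de

def gaussian_broaden(energy, spectrum, sigma):
    """Fold with a Gaussian modelling the spectrometer resolution."""
    de = energy[1] - energy[0]
    kernel_e = np.arange(-6 * sigma, 6 * sigma + de, de)
    kernel = np.exp(-kernel_e**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.convolve(spectrum, kernel, mode="same") * de

# Toy raw spectrum: two delta-like peaks on a 0.01 eV grid (assumed values).
E = np.arange(570.0, 600.0, 0.01)
raw = np.zeros_like(E)
raw[np.searchsorted(E, 577.0)] = 1.0
raw[np.searchsorted(E, 586.0)] = 0.5

# gamma is a placeholder; sigma converts a 0.6 eV FWHM via FWHM = 2.355 * sigma.
broadened = gaussian_broaden(E, lorentzian_broaden(E, raw, gamma=0.3), sigma=0.6 / 2.355)
```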
Figure 1: (Colour online) (a) Crystal structure of the Cr\({}_{2}\)AlC (the space group is \(P6_{3}/mmc\) No. 194). Blue spheres represent Cr atoms, green and blue spheres show Al and C atoms, respectively; (b) and (c) present the arrangement of atoms on the (\(1\bar{2}10\)) and (0001) planes of Cr\({}_{2}\)AlC, respectively.
## 3 Electronic and magnetic structures
The theoretically calculated electronic band structure of nanolaminated ternary carbides such as Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC demonstrates that the valence bands can be divided into several regions. The lowest lying group of valence states originates predominantly from the C \(2s\) states. The states located at higher energy range are hybridization states of Al \(3s-3p\) orbitals. Strong \(pd\) covalent bonding states derived from Ti-C interactions dominate the adjacent higher energy range. The states just below the Fermi level contain a relatively weaker \(pd\) covalent bonding between Ti and Al. Moreover, the states near and above the Fermi level are attributed to metal-to-metal \(dd\) interactions and antibonding states.
Characteristics of atomic bonding can be vividly illustrated by the projected density of states (PDOS). Figures 2 and 3 show the PDOS of Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC for comparison. The C \(2s\) states are located from \(-14.7\) to \(-12.1\) eV and from \(-12.2\) to \(-10.4\) eV below the Fermi level in Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC, respectively. The states, which are approximately located between \(-5.8\) and \(-2.8\) eV below the Fermi level (\(E_{\rm F}\)) in Ti\({}_{2}\)AlC, are C \(2p\) states. They are well hybridized with Al \(sp\) and Ti \(3d\) states. The
Figure 2: (Colour online) Partial density of states [in states/(atom eV)] of Cr\({}_{2}\)AlC in the GGA approximation.
corresponding C 2\(p\) states in Cr\({}_{2}\)AlC are situated lower in energy, in the \(-8.0\) to \(-4.3\) eV energy interval. Al 3\(p\) states occupy a wide energy interval from \(-9.0\) eV to 16.0 eV and from \(-7.7\) eV to 16.0 eV in Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC, respectively. The major peak of the occupied Al 3\(p\) states associated with the \(pd\) covalent bond is situated at around \(-2.5\) eV in Cr\({}_{2}\)AlC and \(-1.0\) eV in Ti\({}_{2}\)AlC below \(E_{\rm F}\). The 3\(d\) orbitals of Cr atoms dominate the states near the \(E_{\rm F}\), and the contribution from Al \(p\)-derived orbitals is negligible. In comparison, the states near \(E_{\rm F}\) are dominated by Ti 3\(d\) orbitals in Ti\({}_{2}\)AlC, with some contribution from Al \(p\) orbitals.
The magnitude of the Cr spin and orbital magnetic moments strongly depends on the Hubbard \(U\). The GGA approach produces spin and orbital moments equal to 0.707 \(\mu_{\rm B}\) and 0.007 \(\mu_{\rm B}\), respectively (see table 1). The GGA+OP approach gives slightly larger moments: \(M_{s}^{\rm Cr}=0.757\)\(\mu_{\rm B}\) and \(M_{l}^{\rm Cr}=0.009\)\(\mu_{\rm B}\). For \(U_{\rm eff}=1\) eV, 2 eV and 3 eV, the spin magnetic moments are equal to 0.973 \(\mu_{\rm B}\), 2.097 \(\mu_{\rm B}\), and 2.799 \(\mu_{\rm B}\), respectively. These values are in good agreement with previous band structure calculations [11, 43, 44]. The orbital magnetic moments at the Cr site also strongly increase with increasing Hubbard \(U\) and even change sign for \(U_{\rm eff}=2\) eV and 3 eV (table 1). We found a similar dependence of the spin and orbital magnetic moments for the AFM solution as well, although the absolute values of the moments are slightly different from the FM ordering. The induced spin magnetic moments at the Al and C sites
Figure 3: (Colour online) Partial density of states [in states/(atom eV)] of Ti\({}_{2}\)AlC in the GGA approximation.
are much smaller than at the Cr site and have an opposite direction (table 1). It is interesting to note that the spin magnetic moment at the C site is larger than the corresponding moment at the Al site. Orbital moments on both the Al and C sites are quite small.
The canted AFM configuration yields a ferrimagnetic ordering in the \((a,b)\) plane with a small net magnetization of 0.005 \(\mu_{\rm B}\) in the plane.
We found that Ti\({}_{2}\)AlC is very close to a NM ground state. Though titanium is in the Ti\({}^{2+}\) state (\(d^{2}\)), the spin magnetic moment at the Ti as well as at the Al sites is less than 10\({}^{-4}\)\(\mu_{\rm B}\).
## 4 X-ray absorption and XMCD spectra
Figure 4 presents the X-ray absorption spectra (open circles) at the Cr \(L_{2,3}\) edges (top panel) in Cr\({}_{2}\)AlC measured at 4.2 K [32] with a 6 T external magnetic field compared with the theoretically calculated ones (full blue curve). Since the pure Cr \(L_{2,3}\) edges are structureless, the existence of these fine structures shows that the chromium is not in a pure metallic state in Cr\({}_{2}\)AlC, a well-known fact from previous band structure calculations related to MAX phases [90]. They all have a mixture of covalent, metallic and ionic bonds.
One can observe a well separated double structure at the top of both \(L_{3}\) and \(L_{2}\) edges. These fine
Figure 4: (Colour online) Top panel: the X-ray absorption spectra (open circles) at the Cr \(L_{2,3}\) edges in Cr\({}_{2}\)AlC measured at 4.2 K [32] with a 6 T magnetic field compared with the theoretically calculated ones (full blue curve) obtained in the GGA approach; Middle panel: the theoretically calculated (full blue curve) and experimentally measured (open circles) [32] X-ray linear dichroism spectra; lower panel: the XMCD experimental spectra (open circles) of Cr\({}_{2}\)AlC at the Cr \(L_{2,3}\) edges and the theoretically calculated one (full blue line); the inset shows the XMCD spectra at the Cr \(L_{2,3}\) edges separately from the Cr\({}_{1}\) and Cr\({}_{2}\) sites.
structures are well reproduced by a band structure calculation, although the theory produces inverse relative intensities between low and high energy peaks in comparison with the experimentally observed ones in the \(L_{3}\) XAS spectrum. It is interesting to note that the model multiple scattering calculations performed with the FEFF [91, 92] code by Jaouen et al. [32] show the same inversion in the relative intensities as obtained by us.
Figure 4 (middle panel) shows the theoretically calculated (full blue curve) and experimentally measured (open circles) [32] X-ray linear dichroism spectra. The theory reproduces the energy position of all the fine structures quite well, although the negative peaks at 571 eV, 578 eV, and 587 eV are much lower in theory than in the experiment.
The XMCD experimental spectra (open circles) of Cr\({}_{2}\)AlC at the Cr \(L_{2,3}\) edges and the theoretically calculated spectrum (full blue line) are presented in the lower panel of figure 4. For the experimental geometry used in reference [32] it follows that the electric field, \(\mathbf{E}\), of the incident X-ray beam is parallel to the \((a,b)\) plane of the MAX phase. In that case, XAS measurements mainly probe the unoccupied Cr \(3d_{xy}\), \(3d_{xz}\), \(3d_{yz}\) and \(3d_{x^{2}-y^{2}}\) in-plane orbitals, as well as a much smaller \(p\to s\) contribution. The X-ray dichroism at the Cr \(L_{2,3}\) edges is quite small due to the cancellation of signals from the Cr\({}_{1}\) and Cr\({}_{2}\) sites (see the inset in the lower panel of figure 4). The agreement between the theory and experiment is quite good although the intensity of the major positive peak at 578 eV is slightly overestimated in the theory.
We found that both the GGA and GGA+\(U\) with \(U_{\mathrm{eff}}\leqslant 1\) eV give the calculated XAS and XMCD spectra close to experimentally measured ones. For larger values of the \(U\)-parameter (\(U_{\mathrm{eff}}>1\) eV), the theoretically calculated XMCD spectra deviate strongly from experimentally observed spectra. Therefore, this class of Cr-based carbide MAX phases cannot be considered as strongly correlated systems. A similar
Figure 5: (Colour online) Top panel: the X-ray absorption spectrum (open circles) at the Cr \(K\) edge in Cr\({}_{2}\)AlC measured at 2.2 K [32] in an external magnetic field of 10 T, compared with the theoretically calculated ones in the GGA approach taking into account only dipole \(1s\to 4p\) transitions (full blue curve) and quadrupole \(1s\to 3d\) transitions (dashed red curve) multiplied by a factor of 10; lower panel: the XMCD experimental spectrum (open circles) of Cr\({}_{2}\)AlC at the Cr \(K\) edge and the theoretically calculated ones in the dipole approximation (full blue line) and for quadrupole transitions (dashed red curve) multiplied by a factor of 10.
conclusion was also drawn by Dahlqvist et al. [11, 44] based on the calculations of lattice parameters and bulk modulus in the Cr\({}_{2}\)AlC, Cr\({}_{2}\)GaC and Cr\({}_{2}\)GeC.
Figure 5 presents the X-ray absorption spectrum (open circles) at the Cr \(K\) edge (top panel) in Cr\({}_{2}\)AlC measured at 2.2 K [32] with a magnetic field of 10 T compared with the theoretically calculated ones (full blue curve). The XAS spectrum consists of a major peak at 5993.4 eV followed by a local minimum at 5996.0 eV and a low energy shoulder at 5989.3 eV. The theory well reproduces the energy position and shape of the fine structures. It is worth noticing that the low energy shoulder at 5989.3 eV is usually attributed to the quadrupolar \(E_{2}\) (\(1s\to 3d\)) transitions [32, 92]. We investigated the effect of the electric quadrupole \(E_{2}\) and magnetic dipole \(M_{1}\) transitions. We found that the \(M_{1}\) transitions are extremely small in comparison with the \(E_{2}\) transitions and can be neglected. The \(E_{2}\) transitions indeed contribute to the low energy shoulder at 5989.3 eV as well as to the major peak at 5993.4 eV, although the quadrupolar \(E_{2}\) transitions are two orders of magnitude smaller than the electric dipole transitions \(E_{1}\) (see the red dashed curve in the upper panel of figure 5). Therefore, the low energy shoulder mainly reflects the energy distribution of the Cr \(p\) partial DOS (figure 2).
The lower panel shows the XMCD experimental spectrum (open circles) of Cr\({}_{2}\)AlC at the Cr \(K\) edge and the theoretically calculated ones in the dipole approximation (full blue line). The dashed red curve shows the contribution from the quadrupole \(E_{2}\) (\(1s\to 3d\)) transitions multiplied by a factor of 10. Again, the contribution of the quadrupolar \(E_{2}\) (\(1s\to 3d\)) transitions to the Cr \(K\) XMCD spectrum is very small. The theory well reproduces the energy position and the intensity of the major positive peak at 5989.3 eV, while other fine structures are reproduced with less accuracy, although it is hard to achieve an ideal agreement with the experimental measurements for such a weak detected XMCD signal.
Figure 6 presents the X-ray absorption spectrum (open circles) at the C \(K\) edge (top panel) in Cr\({}_{2}\)AlC measured by Lin et al. [27] compared with the theoretically calculated ones (full blue curve). Our
Figure 6: (Colour online) Top panel: the X-ray absorption spectrum (open circles) at the C \(K\) edge in Cr\({}_{2}\)AlC measured at 4.2 K [27] with a 6 T magnetic field compared with the theoretically calculated one (full blue curve) in the GGA approach; lower panel: the theoretically calculated XMCD spectrum of Cr\({}_{2}\)AlC at the C \(K\) edge.
band structure calculations well reproduce the energy position of all fine structures of the experimental C \(K\) XAS spectrum. Shindo and Oikawa [93] investigated the XAS spectra of diamond, graphite, and amorphous carbon. They reported that an XAS peak at 291 eV indicated a strong \(\sigma\) bonding state for C. On the other hand, a \(\pi\) bonding state of carbon was distinguished by a \(\pi^{*}\) peak at \(\sim\) 284 eV in the XAS spectra [27, 93]. In the experimentally measured C \(K\) edge for Cr\({}_{2}\)AlC, the peak was located at about 290 eV, which implies that the Cr-C bond in Cr\({}_{2}\)AlC has a strong \(\sigma\) bonding character [27].
Figure 6 (bottom panel) presents the theoretically calculated XMCD spectrum at the C \(K\) edge in Cr\({}_{2}\)AlC. Due to the very small spin and orbital magnetic moments at the C site (see table 1), one would expect a quite small dichroism at this edge with a major negative peak at 291 eV. Experimental measurements of the XMCD spectrum at the C \(K\) edge are highly desirable.
Spin and orbital magnetic moments in Ti\({}_{2}\)AlC are very small at the Ti and C sites. Therefore, the XMCD spectra at these edges have not been detected yet. The theoretically calculated XMCD spectra (not shown) are three orders of magnitude smaller than their XAS spectra.
Figure 7 shows the X-ray absorption spectra (open circles) at the Ti \(K\) (upper panel) [37] and C \(K\) (lower panel) [37] edges in Ti\({}_{2}\)AlC compared to the theoretically calculated spectra (full blue curves). The theory quite well reproduces the experimental spectra.
The energy position of a major low energy peak at the C \(K\) edge coincides with the low energy shoulder of the Ti \(K\) XAS spectrum at 4 eV above the edge, indicating strong Ti \(d\)-C \(p\) \(\sigma\) bonding in Ti\({}_{2}\)AlC.
Figure 7: (Colour online) top panel: the experimentally measured [37] X-ray absorption spectrum at the Ti \(K\) edge in Ti\({}_{2}\)AlC (magenta circles) and the theoretically calculated one (full blue curve) in the GGA approach; lower panel: the experimentally measured [37] X-ray absorption spectrum at the C \(K\) edge in Ti\({}_{2}\)AlC (magenta circles) and the theoretically calculated one (full blue curve) in the GGA approach.
## 5 Summary
The electronic and magnetic structures and X-ray magnetic circular dichroism of the MAX 211 compounds Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC were investigated theoretically within GGA and GGA+\(U\) approaches in the framework of the fully relativistic spin-polarized Dirac LMTO band-structure method.
We found the non-collinear magnetic state as the ground state in Cr\({}_{2}\)AlC, which is characterized by a canted AFM spin configuration (spin magnetic moment \(M_{S}^{\rm Cr}=0.768\)\(\mu_{\rm B}\) in the GGA approach with the polar angles equal to \(\theta^{\rm Cr_{1}}=91.7^{\circ}\), \(\phi^{\rm Cr_{1}}=0^{\circ}\) and \(\theta^{\rm Cr_{2}}=91.4^{\circ}\), \(\phi^{\rm Cr_{2}}=180^{\circ}\)). Such an AFM configuration, with spins slightly canted out of the \((a,b)\) plane, produces a small projection of the Cr spin magnetic moment along the \(c\) axis of around 0.047 \(\mu_{\rm B}\), which is in excellent agreement with the estimation by Jaouen et al. [32] of 0.05 \(\mu_{\rm B}\) using the XMCD measurements and sum rules. There is a ferrimagnetic ordering in the \((a,b)\) plane with a net magnetization of 0.005 \(\mu_{\rm B}\).
We have studied the X-ray magnetic circular dichroism at the Cr \(L_{2,3}\) and Cr, C, and Ti \(K\) edges in Cr\({}_{2}\)AlC and Ti\({}_{2}\)AlC. The calculations show a good agreement with the experimental measurements. We cannot validate the significance of using the LSDA(GGA)+\(U\) methods for the study of magnetic MAX phases since both GGA and GGA+\(U\) with \(U_{\rm eff}\leqslant 1\) eV give calculated XAS and XMCD spectra close to the experimentally measured ones. Therefore, this class of Cr-based carbide MAX phases cannot be considered as strongly correlated systems.
|
2309.02660 | A Bi-level Globalization Strategy for Non-convex Consensus ADMM and
ALADIN | In this paper, we formally analyze global convergence in the realm of
distributed consensus optimization. Current solutions have explored such
analysis, particularly focusing on consensus alternating direction method of
multipliers (CADMM), including convex and non-convex cases. While such efforts
on non-convexity offer elegant theory guaranteeing global convergence, they
entail strong assumptions and complicated proof techniques that are
increasingly pose challenges when adopted to real-world applications. To
resolve such tension, we propose a novel bi-level globalization strategy that
not only guarantees global convergence but also provides succinct proofs, all
while requiring mild assumptions. We begin by adopting such a strategy to
perform global convergence analysis for the non-convex cases in C-ADMM. Then,
we employ our proposed strategy in consensus augmented Lagrangian based
alternating direction inexact Newton method (C-ALADIN), a more recent and
generalization of C-ADMM. Surprisingly, our analysis shows that C-ALADIN
globally converges to local optimizer, complementary to the prior work on
C-ALADIN, which had primarily focused on analyzing local convergence for
non-convex cases. | Xu Du, Jingzhe Wang, Xiaohua Zhou, Yijie Mao | 2023-09-06T02:09:20Z | http://arxiv.org/abs/2309.02660v1 | # A Bi-level Globalization Strategy for Non-convex Consensus ADMM and ALADIN
###### Abstract
In this paper, we formally analyze global convergence in the realm of distributed consensus optimization. Current solutions have explored such analysis, particularly focusing on the consensus alternating direction method of multipliers (C-ADMM), including convex and non-convex cases. While such efforts on non-convexity offer elegant theory guaranteeing global convergence, they entail strong assumptions and complicated proof techniques that increasingly pose challenges when adopted to real-world applications. To resolve such tension, we propose a novel bi-level globalization strategy that not only guarantees global convergence but also provides succinct proofs, all while requiring mild assumptions. We begin by adopting such a strategy to perform global convergence analysis for the non-convex cases in C-ADMM. Then, we employ our proposed strategy in the consensus augmented Lagrangian based alternating direction inexact Newton method (C-ALADIN), a more recent method and a generalization of C-ADMM. Surprisingly, our analysis shows that C-ALADIN globally converges to a local optimizer, complementing the prior work on C-ALADIN, which had primarily focused on analyzing local convergence for non-convex cases.
Distributed Consensus Optimization, Non-convex, Globalization, C-ADMM, C-ALADIN
## I Introduction
The alternating direction method of multipliers (ADMM) is the most well-known algorithm within the realm of distributed optimization. The primary idea of ADMM is to decompose a complex optimization problem into simpler sub-problems, each of which can be solved in parallel by a different agent. Since its first proposal in [3, 4], there has been much effort to develop ADMM. Such efforts have mainly focused on the following two lines of research. The first line of work, including but not limited to [1, 5], has explored potential applications that could harness the advantages of ADMM, such as machine learning [6], signal processing [7], and wireless communication [8]. To enrich the theoretical ground of ADMM, the second line of work, represented by [9, 10, 11, 12], has turned its attention to revisiting the structure of ADMM. In particular, the most influential work is that of He and Yuan [10], which first analyzed the convergence rate of ADMM for general convex cases and established a sub-linear convergence result. Later, Shi et al. extended this trajectory by introducing the assumption of strong convexity [11], leading to a linear convergence rate of consensus ADMM (C-ADMM). The aforementioned advancements, however, have dealt exclusively with convex problems. The application of ADMM to non-convex consensus problems remains limited.
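To make the decomposition concrete, the following is a rough, self-contained sketch of consensus ADMM; the quadratic local objectives, penalty value, and iteration count are illustrative assumptions of ours, not taken from this paper or its references.

```python
import numpy as np

# Toy consensus problem: minimize sum_i 0.5*||A_i x - b_i||^2 over a shared x,
# by splitting into local copies x_i constrained to equal a consensus variable z.
rng = np.random.default_rng(0)
N, d = 4, 3
A = [rng.standard_normal((5, d)) for _ in range(N)]
b = [rng.standard_normal(5) for _ in range(N)]
rho = 1.0  # penalty parameter (illustrative choice)

x = [np.zeros(d) for _ in range(N)]    # local primal variables
lam = [np.zeros(d) for _ in range(N)]  # dual variables for x_i = z
z = np.zeros(d)                        # consensus variable

for k in range(100):
    # Local x-updates (parallelizable): argmin f_i(x) + lam_i^T x + rho/2 ||x - z||^2
    for i in range(N):
        H = A[i].T @ A[i] + rho * np.eye(d)
        g = A[i].T @ b[i] - lam[i] + rho * z
        x[i] = np.linalg.solve(H, g)
    # Consensus z-update: average of x_i + lam_i / rho
    z = sum(x[i] + lam[i] / rho for i in range(N)) / N
    # Dual ascent on the consensus constraints
    for i in range(N):
        lam[i] = lam[i] + rho * (x[i] - z)

print(z)
```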
In tackling the convergence challenge [13, 14] caused by non-convexity, several authors, including Hong and others [9, 15, 16, 17], have suggested enforcing global convergence of ADMM with a sufficiently large penalty parameter on the augmented term. As a special case, Wang et al. proposed a variant of ADMM, known as Bregman ADMM [16] (an extension of the proximal point method). Later, Wang et al. [18] explored some non-convex non-smooth special cases that can be effectively managed by ADMM. All the aforementioned methods are special instances of the _proximal ADMM_ algorithm proposed in [17]. We refer interested readers to [19] for a comprehensive summary of ADMM variations. We are aware that most of the existing works on non-convex ADMM rely on one of the following two methods: a) showing the monotonic decrease of a critical Lyapunov function [17, 18]; b) showing that a corresponding (augmented) Lagrangian function [9] converges to a limit point. Though applicable to C-ADMM, these two approaches inevitably entail quite complicated proofs. Specifically, the former method requires positive definiteness of the Lyapunov function, which is hard to construct, while the latter requires smoothness of the objective function, thereby narrowing the scope of applicability of C-ADMM.
Fortunately, in pursuing concise proofs, Houska et al. proposed the notion of T-ALADIN (typical augmented Lagrangian based alternating direction inexact Newton method) [13], a solution that provides theoretical local convergence guarantees when tackling general distributed non-convex optimization problems, assuming a good initial point is available. It was not until seven years later that Du and Wang realized that T-ALADIN might have a more efficient form of implementation [2] (C-ALADIN, consensus ALADIN) for distributed consensus problems within the ALADIN framework. While these approaches showcased promising results, their guarantees were limited to local convergence.
In this paper, departing from the conventional _Gauss-Seidel decomposition_, we instead delve into the parallelizable implementation of C-ADMM and C-ALADIN based on _Jacobian decomposition_. Inspired by the globalization approach proposed in [13, Section 6], we initiate the study of bi-level globalization, which ensures that C-ADMM and C-ALADIN globally converge to a local optimizer. In summary, this paper makes the following key contributions: |
2302.00305 | Constructions of Urysohn universal ultrametric spaces | In this paper, we give new constructions of Urysohn universal ultrametric
spaces. We first characterize a Urysohn universal ultrametric subspace of the
space of all continuous functions whose images contain the zero, from a
zero-dimensional compact Hausdorff space without isolated points into the space
of non-negative real numbers equipped with the nearly discrete topology. As a
consequence, the whole function space is Urysohn universal, which can be
considered as a non-Archimedean analog of Banach--Mazur theorem. As a more
application, we prove that the space of all continuous pseudo-ultrametrics on a
zero-dimensional compact Hausdorff space with an accumulation point is a
Urysohn universal ultrametric space. This result can be considered as a variant
of Wan's construction of Urysohn universal ultrametric space via the
Gromov--Hausdorff ultrametric space. | Yoshito Ishiki | 2023-02-01T08:12:57Z | http://arxiv.org/abs/2302.00305v3 | # Constructions of Urysohn universal ultrametric spaces
###### Abstract.
In this paper, we give new constructions of Urysohn universal ultrametric spaces. We first characterize a Urysohn universal ultrametric subspace of the space of all continuous functions whose images contain the zero, from a zero-dimensional compact Hausdorff space without isolated points into the space of nonnegative real numbers equipped with the nearly discrete topology. As a consequence, the whole function space is Urysohn universal, which can be considered as a non-Archimedean analog of the Banach-Mazur theorem. As a further application, we prove that the space of all continuous pseudo-ultrametrics on a zero-dimensional compact Hausdorff space with an accumulation point is a Urysohn universal ultrametric space. This result can be considered as a variant of Wan's construction of the Urysohn universal ultrametric space via the Gromov-Hausdorff ultrametric space.
Key words and phrases: Urysohn universal ultrametric space. 2020 Mathematics Subject Classification: Primary 54E35, Secondary 51F99.
## 1. Introduction
For a class \(\mathcal{C}\) of metric spaces, a metric space \((X,d)\) is said to be \(\mathcal{C}\)_-injective_ if for all \((A,a)\) and \((B,b)\) in \(\mathcal{C}\) and for all isometric embeddings \(\phi\colon(A,a)\to(B,b)\) and \(\psi\colon(A,a)\to(X,d)\), there exists an isometric embedding \(\theta\colon(B,b)\to(X,d)\) such that \(\theta\circ\phi=\psi\). This definition roughly coincides with the notion of injective objects in category theory. We denote by \(\mathcal{F}\) the class of all finite metric spaces. Urysohn [35] constructed a complete separable \(\mathcal{F}\)-injective metric space \(\mathbb{U}\), which is nowadays called the _Urysohn universal metric space_. In this paper, we investigate an ultrametric analogue of \(\mathbb{U}\). For more information of \(\mathbb{U}\) and related spaces, we refer the readers to, for instance, [28], [12], [21], [25], [36], [30] and [31]. In the recent years, the theory of Urysohn type metric spaces has applications to logic and model theory (see for example, [3] and [4]). Indeed, the space \(\mathbb{U}\) can be regarded as a generalization of the random graph in model theory (see also [28, Subsection 3.3]).
A metric \(d\) on a set \(X\) is said to be an _ultrametric_ if for all \(x,y,z\in X\), it satisfies the _strong triangle inequality_\(d(x,y)\leq d(x,z)\lor d(z,y)\), where
\(\vee\) stands for the maximum operator on \(\mathbb{R}\). If a pseudo-metric \(d\) satisfies the strong triangle inequality, then \(d\) is called a _pseudo-ultrametric_.
A set \(R\) is said to be a _range set_ if it is a subset of \([0,\infty)\) and \(0\in R\). If \(R\) is a range set, and an ultrametric (resp. pseudo-ultrametric) \(d\) on a set \(X\) satisfies \(d(x,y)\in R\) for all \(x,y\in X\), then we call \(d\) an \(R\)_-ultrametric_ or an \(R\)_-valued ultrametric_ (resp. an \(R\)_-pseudo-ultrametric_ or an \(R\)_-valued pseudo-ultrametric_).
For a range set \(R\), we denote by \(\mathcal{N}(R)\) the class of all finite \(R\)-valued ultrametric spaces. The main subject of this paper is to provide new constructions of complete \(\mathcal{N}(R)\)-injective \(R\)-ultrametric spaces for every range set \(R\).
In the title and the abstract of this paper, the term "the Urysohn universal ultrametric spaces" means injective ultrametric spaces. However, from now on, for a range set \(R\), we use the term "the \(R\)-Urysohn universal ultrametric space" for the separable \(\mathcal{N}(R)\)-injective complete \(R\)-ultrametric space. Note that such a space is unique up to isometry, and note that if \(R\) is uncountable, a \(\mathcal{N}(R)\)-injective \(R\)-ultrametric space is non-separable (see for example, [11, Section 2] and [2, (12) in Theorem 1.6]).
There are several constructions of \(\mathcal{N}(R)\)-injective \(R\)-valued ultrametric spaces. Similarly to the ordinary Urysohn universal metric space (see [35] and [21]), \(\mathcal{N}(R)\)-injective \(R\)-ultrametric spaces can be obtained by the Urysohn amalgamation method (the Fraisse limit) (see [1]), and by the method using Katetov function spaces (see [11]). It is also known that we can construct \(\mathcal{N}(R)\)-injective \(R\)-ultrametric spaces as the spaces of branches (rays) of trees (see [11] and [28]). Wan [38] proved the \(\mathcal{N}(R)\)-injectivity of the non-Archimedean Gromov-Hausdorff space; namely, the space of all isometry classes of compact \(R\)-valued ultrametric spaces equipped with the Gromov-Hausdorff distance.
Before explaining our constructions, we prepare some concepts. For a range set \(R\), we define an ultrametric \(M_{R}\) on \(R\) by
\[M_{R}(x,y)=\begin{cases}x\lor y&\text{if }x\neq y\\ 0&\text{if }x=y.\end{cases}\]
Then the function \(M_{R}\) is an ultrametric on \(R\). This construction was given by Delhomme-Laflamme-Pouzet-Sauer [6, Proposition 2], and it can also be found in [7], [13], and [17]. Based on the notion of [32], we call \(M_{R}\) (resp. the topology generated by \(M_{R}\)) the _nearly discrete (ultra)metric on \(R\)_ (resp. the _nearly discrete topology_). The space \((R,M_{R})\) is as significant for ultrametric spaces as the space \([0,\infty)\) or \(\mathbb{R}\) with the Euclidean topology is in the theory of usual metric spaces. There are two key points in this paper. The first is the recognition of the importance of \((R,M_{R})\). The second is the discovery of the tenuous sets as the compact subsets of \((R,M_{R})\) (for the definition, see Subsection 2.2).
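For instance, take the range set \(R=\{0,1,2,3\}\) (an illustrative choice). Then
\[M_{R}(1,2)=2,\qquad M_{R}(2,3)=3,\qquad M_{R}(1,3)=3,\]
so among any three distinct points, the two largest of the three mutual distances coincide; this is the isosceles property forced by the strong triangle inequality, e.g., \(M_{R}(1,2)=2\leq M_{R}(1,3)\lor M_{R}(3,2)=3\). Note also that \(M_{R}(0,x)=x\) for all \(x\in R\setminus\{0\}\), so the points of \(R\) that are close to \(0\) in the Euclidean sense are exactly the points close to \(0\) with respect to \(M_{R}\).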
For a topological space \(X\) and a range set \(R\), we denote by \(\mathrm{C}(X,R)\) the set of all continuous maps from \(X\) to the nearly discrete space \(R\). We also denote by \(\mathrm{C}_{0}(X,R)\) the set of all \(f\in\mathrm{C}(X,R)\) such that \(0\in f(X)\). For \(f,g\in\mathrm{C}(X,R)\), we define \(\triangledown(f,g)\) as the infimum of all \(\epsilon\in(0,\infty)\) such that \(f(x)\leq g(x)\vee\epsilon\) and \(g(x)\leq f(x)\vee\epsilon\) for all \(x\in X\). By abuse of notation, we use the same symbol \(\triangledown\) for the restricted function \(\triangledown|_{\mathrm{C}_{0}(X,R)^{2}}\). In general, the function \(\triangledown\) is an ultrametric on \(\mathrm{C}_{0}(X,R)\) taking values in \([0,\infty]\). In this paper, since we only consider the case where \(X\) is compact, the function \(\triangledown\) is actually an ultrametric in the ordinary sense.
In Theorem 3.1 in this paper, for a subset \(E\) of \(\mathrm{C}_{0}(X,R)\), we shall give a necessary and sufficient condition for the \(\mathcal{N}(R)\)-injectivity of \(E\). As a consequence, we conclude that the space \((\mathrm{C}_{0}(X,R),\triangledown)\) itself is \(\mathcal{N}(R)\)-injective (see Theorem 3.2), which is our first construction. Since all separable \(R\)-valued ultrametric spaces are isometrically embeddable into a complete \(\mathcal{N}(R)\)-injective \(R\)-ultrametric space (see for example, [11, Proposition 2.7]), Theorem 3.2 can be regarded as an analogue of the Banach-Mazur theorem stating that all separable metric spaces can be isometrically embedded into the space of all continuous functions on \([0,1]\) equipped with the supremum metric. The construction of \((\mathrm{C}_{0}(X,R),\triangledown)\) can also be considered as a generalization of Vestfrid's construction in [37], which shows that the space of all decreasing sequences in \([0,\infty)\) convergent to \(0\) is \(\mathcal{N}(R)\)-injective.
For a topological space \(X\) and a range set \(R\), we denote by \(\mathrm{Cpu}(X,R)\) the set of all continuous \(R\)-valued pseudo-ultrametrics \(d\colon X\times X\to R\) on \(X\), where \(X\times X\) and \(R\) are equipped with the product topology and the Euclidean topology, respectively (see Proposition 2.27). For \(d,e\in\mathrm{Cpu}(X,R)\), we define \(\mathcal{UD}_{X}^{R}(d,e)\) as the infimum of all \(\epsilon\in R\) such that \(d(x,y)\leq e(x,y)\vee\epsilon\) and \(e(x,y)\leq d(x,y)\vee\epsilon\) for all \(x,y\in X\). The metric \(\mathcal{UD}_{X}^{R}\) is a restriction of \(\triangledown\) and is an extension of the ultrametrics on spaces of ultrametrics defined in the author's papers [13] and [17]. As in the case of \(\mathrm{C}_{0}(X,R)\), since we focus only on the case where \(X\) is compact, the value \(\mathcal{UD}_{X}^{R}(d,e)\) is always finite.
As an application of Theorem 3.1, we will prove Theorem 4.2 asserting that if \(X\) is a compact Hausdorff space possessing an accumulation point, and \(R\) is a range set, then the space \((\mathrm{Cpu}(X,R),\mathcal{UD}_{X}^{R})\) is \(\mathcal{N}(R)\)-injective and complete. This is our second construction and it can be considered as a variant of Wan's construction of Urysohn universal ultrametric space via the Gromov-Hausdorff ultrametric space [38]. We also show that several subsets of \(\mathrm{Cpu}(X,R)\) are \(\mathcal{N}(R)\)-injective (see Theorem 4.4). For example, the set of all doubling ultrametrics is injective. To obtain this result, we use the author's results on dense subsets of spaces of ultrametrics in [13] and [15], and use the extension theorem of ultrametrics in [16].
The organization of this paper is as follows: Section 2 presents some preliminaries. We introduce tenuous subsets of \([0,\infty)\), which play an important role in this paper. We also investigate basic properties of \((\mathrm{C}_{0}(X,R),\triangledown)\), \((\mathrm{Cpu}(X,R),\mathcal{UD}_{X}^{R})\), and \(\mathcal{N}(R)\)-injective ultrametric spaces. In Section 3, we explain our first construction (see Theorem 3.2). We also give a necessary and sufficient condition for the property that \(X\) has no isolated points in terms of the \(\mathcal{N}(R)\)-injectivity of \(\mathrm{C}_{0}(X,R)\). Section 4 is intended to show that if \(X\) is a compact Hausdorff space possessing an accumulation point, then \((\mathrm{Cpu}(X,R),\mathcal{UD}_{X}^{R})\) is \(\mathcal{N}(R)\)-injective (see Theorem 4.2). This is our second construction. In Section 5, we discuss the topological types of \(\mathrm{C}_{0}(X,R)\) and \(\mathrm{Cpu}(X,R)\).
## 2. Preliminaries
In this section, we prepare some basic statements on ultrametrics, spaces of ultrametrics, and injective metric spaces.
### Generalities
For a set \(E\), the symbol \(\mathrm{Card}(E)\) stands for the cardinality of \(E\). For a metric space \((X,d)\), \(a\in X\), and \(r\in(0,\infty)\), we denote by \(B(a,r;d)\) (resp. \(U(a,r;d)\)) the closed ball (resp. open ball) centered at \(a\) with radius \(r\). We also define \(\mathbb{S}(a,r;d)=\{\,x\in X\mid d(x,a)=r\,\}\). We often simply represent them as \(B(a,r)\), \(U(a,r)\), and \(\mathbb{S}(a,r)\). In this paper, for a metric space \((X,d)\) and a subset \(A\) of \(X\), we sometimes denote the restricted metric \(d|_{A^{2}}\) by the same symbol as the ambient metric \(d\) when no confusion can arise.
#### 2.1.1. Dimension
A subset of a topological space is _clopen_ if it is closed and open in the ambient space. We define the zero-dimensionality of topological spaces in three ways. Let \(X\) be a non-empty topological space. We write \(\mathrm{ind}(X)=0\) if \(X\) has an open base consisting of clopen subsets of \(X\). We also write \(\mathrm{Ind}(X)=0\) if for all two disjoint closed subsets \(A\) and \(B\) of \(X\), there exists a clopen subset \(C\) such that \(A\subseteq C\) and \(B\cap C=\emptyset\). The notation \(\mathrm{dim}(X)=0\) means that every finite open covering of \(X\) has a refinement covering of \(X\) consisting of finitely many mutually disjoint open subsets of \(X\). The dimensions \(\mathrm{ind}(X)\), \(\mathrm{Ind}(X)\), and \(\mathrm{dim}(X)\) are called the _small inductive dimension_, the _large inductive dimension_, and the _covering dimension_, respectively. Of course, we can define higher-dimensional versions of these notions; however, we omit them since we focus only on \(0\)-dimensionality in this paper. In general, these three dimensions can differ from each other. However, as stated in the next proposition, they are all equal to \(0\) if \(X\) is compact Hausdorff and any one of the three is \(0\). The proof is deduced from [29, Corollary 2.2, Proposition 2.3, pp. 156-157].
**Proposition 2.1**.: _Let \(X\) be a non-empty compact Hausdorff space. If any one of the three \(\mathrm{ind}(X)\), \(\mathrm{Ind}(X)\), and \(\mathrm{dim}(X)\) is equal to \(0\), then \(\mathrm{ind}(X)=\mathrm{Ind}(X)=\mathrm{dim}(X)=0\)._
Based on Proposition 2.1, in what follows, we will say that a compact Hausdorff space \(X\) is \(0\)_-dimensional_ if any one of the three dimensions is \(0\) (in this case, all the three dimensions are \(0\)).
#### 2.1.2. Topological weights
For a topological space \(X\), we denote by \(w(X)\) the topological weight of \(X\). Namely, \(w(X)\) is equal to the minimum of cardinals of all open bases of \(X\) (remark that some authors call the cardinal \(\max\{w(X),\aleph_{0}\}\) the topological weight). Hence a metrizable space \(X\) is separable if and only if \(w(X)\leq\aleph_{0}\). Using compactness, we obtain the following lemma:
**Lemma 2.2**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite. If \(\mathcal{C}\) is the set of all clopen subsets of \(X\), then \(\operatorname{Card}(\mathcal{C})=w(X)\)._
For a metric space \((X,d)\), we say a subset \(A\) of \(X\) is \(r\)_-separated_ if we have \(r\leq d(x,y)\) for all distinct \(x,y\in A\). The following lemma can be proven by the definition of the topological weights.
**Lemma 2.3**.: _Let \((X,d)\) be a metric space. If a subset \(E\) of \(X\) is \(r\)-separated, then we have \(\operatorname{Card}(E)\leq w(X)\)._
The next lemma can be proven by taking a maximal \((2^{-n})\)-separated set from a dense subset for each \(n\in\mathbb{Z}_{\geq 0}\), or by the fact that every metric space has a \(\sigma\)-discrete open base.
**Lemma 2.4**.: _Let \((X,d)\) be a metric space. If \(F\) is a dense subset of \(X\), then \(w(X)\leq\operatorname{Card}(F)\)._
Remark that Lemmas 2.3 and 2.4 are consequences of the fact that the weight of a metric space is equal to the minimum of cardinals of all dense subsets of the space.
#### 2.1.3. Properties of ultrametrics
For a range set \(R\) and for a topological space \(X\), we denote by \(\operatorname{UMet}(X;R)\) the set of all \(R\)-valued ultrametrics on \(X\) that generate the topology of \(X\). Remark that \(\operatorname{UMet}(X;R)\subseteq\operatorname{Cpu}(X,R)\). A topological space \(X\) is said to be _\(R\)-valued ultrametrizable_ or _\(R\)-ultrametrizable_ if \(\operatorname{UMet}(X;R)\neq\emptyset\). We simply say that \(X\) is _ultrametrizable_ if it is \([0,\infty)\)-valued ultrametrizable. A range set \(R\) is said to be _characteristic_ if for all \(t\in(0,\infty)\), there exists \(r\in R\setminus\{0\}\) such that \(r\leq t\). The next proposition gives a characterization of ultrametrizability, and indicates the importance of characteristic range sets. The proof follows from [13, Proposition 2.14] and [5, Theorem I] (note that the property that \(R\) is characteristic is equivalent to saying that a range set \(R\) has the countable coinitiality in the sense of [13]).
**Proposition 2.5**.: _Let \(X\) be a topological space, and \(R\) be a characteristic range set. Then the following are equivalent to each other:_
1. _The space_ \(X\) _is metrizable and_ \(\operatorname{Ind}(X)=0\)_;_
2. _The space_ \(X\) _is ultrametrizable;_
3. _The space_ \(X\) _is_ \(R\)_-ultrametrizable._
The proofs of the next three lemmas are presented in Propositions 18.2, 18.4, and 18.5 in [33], respectively.
**Lemma 2.6**.: _Let \(X\) be a set, and \(d\) be a pseudo-metric on \(X\). Then \(d\) satisfies the strong triangle inequality if and only if for all \(x,y,z\in X\), the inequality \(d(x,z)<d(z,y)\) implies \(d(z,y)=d(x,y)\)._
**Lemma 2.7**.: _Let \((X,d)\) be a pseudo-ultrametric space, \(a\in X\), and \(r\in[0,\infty)\). Then, the following statements are true:_
1. _The sets_ \(B(a,r)\) _and_ \(U(a,r)\) _are clopen in_ \(X\)_._
2. _For all_ \(q\in B(a,r)\) _(resp._ \(q\in U(a,r)\)_), we have_ \(B(a,r)=B(q,r)\) _(resp._ \(U(a,r)=U(q,r)\)_)._
**Lemma 2.8**.: _Let \((X,d)\) be an ultrametric space, \(r\in(0,\infty)\), and \(a,b\in X\). Then for all \(x\in B(a,r)\) and \(y\in B(b,r)\), we have \(d(x,y)=d(a,b)\)._
#### 2.1.4. Nearly discrete spaces
In this subsection, we verify basic properties of the nearly discrete space \((R,M_{R})\).
**Lemma 2.9**.: _Let \(R\) be a range set. Then the next statements are true._
1. _The set_ \(R\setminus\{0\}\) _is discrete with respect to the nearly discrete topology, i.e., all of its singletons are open._
2. _If_ \(R\) _is characteristic, then_ \(0\) _is the unique accumulation point of_ \((R,M_{R})\)_._
Proof.: The lemma follows from the fact that \(U(x,x;M_{R})=\{x\}\) and \(U(0,x;M_{R})=[0,x)\cap R\) for all \(x\in R\setminus\{0\}\).
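To illustrate, consider the characteristic range set \(R=\{0\}\cup\{\,2^{-n}\mid n\in\mathbb{Z}_{\geq 0}\,\}\) (an illustrative choice). The sequence \(\{2^{-n}\}_{n\in\mathbb{Z}_{\geq 0}}\) converges to \(0\) in \((R,M_{R})\) since \(M_{R}(2^{-n},0)=2^{-n}\to 0\), whereas a sequence in \(R\) converges to a point \(x\in R\setminus\{0\}\) only if it is eventually equal to \(x\), because \(U(x,x;M_{R})=\{x\}\).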
From the definition of the nearly discrete metrics, we deduce the next two lemmas.
**Lemma 2.10**.: _Let \(R\) be a range set. Then for all \(x,y\in R\), the value \(M_{R}(x,y)\) coincides with the infimum of \(\epsilon\in[0,\infty)\) such that \(x\leq y\vee\epsilon\) and \(y\leq x\vee\epsilon\)._
**Lemma 2.11**.: _For every range set \(R\), the space \((R,M_{R})\) is complete._
### Tenuous sets
A subset \(E\) of \([0,\infty)\) is said to be _semi-sporadic_ if there exists a strictly decreasing sequence \(\{a_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) in \((0,\infty)\) such that \(\lim_{i\to\infty}a_{i}=0\) and \(E=\{0\}\cup\{\,a_{i}\mid i\in\mathbb{Z}_{\geq 0}\,\}\). This concept is a one-sided version of the sporadic sets defined in [18]. A subset of \([0,\infty)\) is _tenuous_ if it is finite or semi-sporadic. Despite its simplicity, this concept plays a central role in the present paper.
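For example, the sets \(\{1,2,3\}\) and \(\{0\}\cup\{\,2^{-n}\mid n\in\mathbb{Z}_{\geq 0}\,\}\) are tenuous, whereas the interval \([0,1]\) and the set \(\{0\}\cup\{\,1-2^{-n}\mid n\in\mathbb{Z}_{\geq 0}\,\}\) are not, since each of the latter two meets \([1/2,\infty)\) in an infinite set (compare Lemma 2.12 below).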
#### 2.2.1. Properties of tenuous sets
**Lemma 2.12**.: _Let \(K\) be a subset of \([0,\infty)\). Then \(K\) is tenuous if and only if \(K\) is a closed subset of \([0,\infty)\) with respect to the Euclidean topology satisfying that \(K\cap[r,\infty)\) is finite for all \(r\in(0,\infty)\)._
Proof.: If \(K\) is tenuous, then \(K\) satisfies the conditions stated in the lemma. Next assume that \(K\) is closed and \(K\cap[r,\infty)\) is finite for all \(r\in(0,\infty)\). We only need to consider the case where \(K\) is infinite. We first prove the following claim:
* (Cl) For all \(l\in(0,\infty)\), the set \(K\cap[0,l)\) has a maximum.
Indeed, due to the fact that \(K\) is infinite and \(K\cap[l,\infty)\) is finite, we can take \(s\in K\cap[0,l)\). Since \(K\cap[s,\infty)\) is finite, the set \(K\cap[0,l)\) has the maximum.
Using induction and the claim (Cl), we define a sequence \(\{a_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) as follows: The value \(a_{0}\) is defined as the maximum of \(K\). If \(a_{0},\ldots,a_{n}\) have been already determined, we define \(a_{n+1}\) as the maximum of the set \(K\cap[0,a_{n})\). From the definition of \(\{a_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\), it follows that \(a_{i+1}<a_{i}\) for all \(i\in\mathbb{Z}_{\geq 0}\). Put \(h=\inf\{\,a_{i}\mid i\in\mathbb{Z}_{\geq 0}\,\}\). If \(h\) were not \(0\), then the set \(K\cap[h,\infty)\) would be infinite. This is a contradiction to the assumption. Thus \(h=0\), and hence \(\lim_{i\to\infty}a_{i}=0\). Since \(K\) is closed, we conclude that \(K=\{0\}\cup\{\,a_{i}\mid i\in\mathbb{Z}_{\geq 0}\,\}\). This means that \(K\) is semi-sporadic, and hence it is tenuous.
In the nearly discrete space \((R,M_{R})\), compact subsets coincide with the tenuous sets.
**Lemma 2.13**.: _Let \(R\) be a range set. Then a subset \(K\) of \((R,M_{R})\) is compact if and only if \(K\) is tenuous._
Proof.: If \(K\) is tenuous, then it is compact in \((R,M_{R})\). Next assume that \(K\) is compact. We only need to consider the case where \(K\) is infinite. By (1) in Lemma 2.9, for all \(r\in(0,\infty)\), we see that the set \(K\cap[r,\infty)\) is discrete and compact, and hence it is finite. According to Lemma 2.12, the set \(K\) is tenuous. This leads to the lemma.
**Corollary 2.14**.: _Let \(R\) be a range set, and \(X\) be a compact space. If \(f\colon X\to R\) is a continuous map with respect to \(M_{R}\), then the image of \(f\) is a tenuous subset of \(R\)._
### Function spaces
In this subsection, we discuss some properties of \((\mathrm{C}(X,R),\triangledown)\) and \((\mathrm{C}_{0}(X,R),\triangledown)\).
#### 2.3.1. Metric properties
We first provide two expressions of the ultrametric \(\triangledown\) on \(\mathrm{C}(X,R)\) and \(\mathrm{C}_{0}(X,R)\).
**Proposition 2.15**.: _Let \(X\) be a compact space, \(R\) be a range set, and \(f,g\in\mathrm{C}(X,R)\). Then \(\triangledown(f,g)=\sup_{x\in X}M_{R}(f(x),g(x))\)._
Proof.: Put \(a=\triangledown(f,g)\) and \(b=\sup_{x\in X}M_{R}(f(x),g(x))\). Take an arbitrary value \(u\in(0,\infty)\) satisfying that \(f(x)\leq g(x)\lor u\) and \(g(x)\leq f(x)\lor u\) for all \(x\in X\). Then, due to Lemma 2.10, for each \(x\in X\), we have \(M_{R}(f(x),g(x))\leq u\). This implies that \(b\leq u\), and hence \(b\leq a\).
We next show the converse inequality. Since \(M_{R}(f(x),g(x))\leq b\) for all \(x\in X\), Lemma 2.10 implies \(f(x)\leq g(x)\lor b\) and \(g(x)\leq f(x)\lor b\). Thus \(a\leq b\). This finishes the proof.
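As a simple illustration of Proposition 2.15, let \(X=\{a,b\}\) be the two-point discrete (hence compact) space, \(R=[0,\infty)\), and take \(f,g\in\mathrm{C}_{0}(X,R)\) with \(f(a)=g(a)=0\), \(f(b)=3\), and \(g(b)=1\). Then \(M_{R}(f(a),g(a))=0\) and \(M_{R}(f(b),g(b))=3\), so \(\triangledown(f,g)=3\); indeed, the condition \(f(b)\leq g(b)\lor\epsilon\) forces \(\epsilon\geq 3\). Observe that \(\triangledown(f,g)\in\operatorname{Im}(f)\cup\operatorname{Im}(g)\cup\{0\}\), as the next proposition guarantees in general.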
For a map \(f\colon X\to Y\), we write \(\operatorname{Im}(f)=f(X)\). We can describe \(\triangledown(f,g)\) more precisely.
**Proposition 2.16**.: _Let \(X\) be a compact space, \(R\) be a range set, and \(f,g\in\operatorname{C}(X,R)\). Then the value \(\triangledown(f,g)\) is equal to the minimum of all \(t\in[0,\infty)\) such that for all \(x\in X\) with \(t<(f\lor g)(x)\), we have \(f(x)=g(x)\). Moreover, the value \(\triangledown(f,g)\) belongs to \(\operatorname{Im}(f)\cup\operatorname{Im}(g)\cup\{0\}\). Consequently, the ultrametric \(\triangledown\) is \(R\)-valued._
Proof.: Let \(v\) be the infimum of all \(t\in[0,\infty)\) such that for all \(x\in X\) with \(t<(f\lor g)(x)\), we have \(f(x)=g(x)\). Put \(A=\operatorname{Im}(f)\cup\operatorname{Im}(g)\cup\{0\}\). Notice that \(A\) is tenuous (see Corollary 2.14). Suppose, contrary to our statement, that \(v\not\in A\). Since \(A\) is tenuous, we can take \(u\in[0,\infty)\) such that \(u<v\) and \((u,v)\cap A=\emptyset\). Then, for all \(x\in X\) with \(u<(f\lor g)(x)\), we have \(f(x)=g(x)\). This implies that \(v\) is not the infimum, which is a contradiction. Thus \(v\in A\).
We now prove \(\triangledown(f,g)\leq v\). Take an arbitrary point \(x\in X\). If \(v<(f\lor g)(x)\), then the definition of \(v\) implies \(f(x)=g(x)\), and hence we have \(f(x)\leq g(x)\lor v\) and \(g(x)\leq f(x)\lor v\). If \((f\lor g)(x)\leq v\), then \(f(x)\leq v\) and \(g(x)\leq v\), which yield \(f(x)\leq g(x)\lor v\) and \(g(x)\leq f(x)\lor v\). These two cases show that \(\triangledown(f,g)\leq v\). To prove \(v\leq\triangledown(f,g)\), take an arbitrary value \(t\in R\) for which \(f(x)\leq g(x)\lor t\) and \(g(x)\leq f(x)\lor t\) for all \(x\in X\). Take \(x\in X\) satisfying that \(t<(f\lor g)(x)\). In this setting, we may assume that \(t<f(x)\). Then \(g(x)\leq f(x)\lor t=f(x)\). From \(f(x)\leq g(x)\lor t\) and \(t<f(x)\), it follows that \(f(x)\leq g(x)\). Thus \(f(x)=g(x)\). Hence \(v\leq\triangledown(f,g)\). This completes the proof.
**Corollary 2.17**.: _Let \(X\) be a compact space, and \(R\) be a range set. Let \(r\in R\setminus\{0\}\), and \(f,g\in\operatorname{C}(X,R)\). Then \(r=\triangledown(f,g)\) if and only if \(f^{-1}(r)\neq g^{-1}(r)\) and \(f(x)=g(x)\) whenever \(r<(f\lor g)(x)\)._
Now we notice that all members of the difference \(\operatorname{C}(X,R)\setminus\operatorname{C}_{0}(X,R)\) are isolated by Proposition 2.16:
**Lemma 2.18**.: _Let \(X\) be a compact space, and \(R\) be a range set. If \(f\in\operatorname{C}(X,R)\) satisfies \(0\not\in\operatorname{Im}(f)\) and we put \(l=\min\operatorname{Im}(f)\), then \(U(f,l)=\{f\}\). In particular, the point \(f\) is isolated in the space \(\operatorname{C}(X,R)\)._
For a topological space \(T\) and for a metric space \((X,d)\), we denote by \(\operatorname{BMap}(T,X)\) the set of all bounded continuous maps from \(T\) to \(X\). Let \(\mathcal{SM}_{d}\) denote the supremum metric on \(\operatorname{BMap}(T,X)\), i.e., \(\mathcal{SM}_{d}(f,g)=\sup_{t\in T}d(f(t),g(t))\).
The proof of the following is presented in [27, Theorem 43.6].
**Proposition 2.19**.: _Let \(T\) be a topological space and \((X,d)\) be a complete metric space. Then the space \((\operatorname{BMap}(T,X),\mathcal{SM}_{d})\) is complete._
Since Proposition 2.15 asserts that \((\operatorname{C}(X,R),\triangledown)\) is nothing but the space \((\operatorname{BMap}(X,(R,M_{R})),\mathcal{SM}_{M_{R}})\) (recall that the image of every \(f\in\operatorname{C}(X,R)\) is tenuous and hence bounded, see Corollary 2.14), Lemma 2.11 and Proposition 2.19 yield the next proposition:
**Proposition 2.20**.: _Let \(X\) be a compact space, and \(R\) be a range set. Then \((\mathrm{C}(X,R),\triangledown)\) is complete._
**Corollary 2.21**.: _Let \(X\) be a compact space, and \(R\) be a range set. Then \((\mathrm{C}_{0}(X,R),\triangledown)\) is complete._
Proof.: We denote by \(I\) the set of all \(f\in\mathrm{C}(X,R)\) such that \(0\not\in\mathrm{Im}(f)\). Then we have \(\mathrm{C}_{0}(X,R)=\mathrm{C}(X,R)\setminus I\). According to Lemma 2.18, the set \(I\) is open, and hence the set \(\mathrm{C}_{0}(X,R)\) is closed in \(\mathrm{C}(X,R)\). Since \(\mathrm{C}(X,R)\) is complete (see Proposition 2.20), so is \(\mathrm{C}_{0}(X,R)\).
#### 2.3.2. Topological weights of function spaces
In this subsection, we determine the topological weight of \((\mathrm{C}_{0}(X,R),\triangledown)\).
By the definition of \(\mathrm{C}_{0}(X,R)\), we conclude:
**Lemma 2.22**.: _If \(X\) is a topological space and range sets \(R\) and \(S\) satisfy \(S\subseteq R\), then \(\mathrm{C}_{0}(X,S)\) is a metric subspace of \(\mathrm{C}_{0}(X,R)\)._
For a set \(E\), we denote by \(\mathrm{Seq}(E)\) the set of all finite sequences in \(E\). Notice that if \(E\) is infinite, then we have \(\mathrm{Card}(\mathrm{Seq}(E))=\mathrm{Card}(E)\).
**Proposition 2.23**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and \(R\) be a non-characteristic range set with \(2\leq\mathrm{Card}(R)\). Then \(\mathrm{Card}(\mathrm{C}_{0}(X,R))=\max\{w(X),\mathrm{Card}(R)\}\)._
Proof.: Notice that since \(R\) is non-characteristic, for all \(f\in\mathrm{C}_{0}(X,R)\) the image \(\mathrm{Im}(f)\) is a finite subset of \(R\). We denote by \(\mathcal{O}\) the set of all clopen subsets of \(X\). Remark that \(\mathrm{Card}(\mathcal{O})=w(X)\) (see Lemma 2.2). Put \(\tau=\max\{w(X),\mathrm{Card}(R)\}\). We first prove \(\mathrm{Card}(\mathrm{C}_{0}(X,R))\leq\tau\). For each \(f\in\mathrm{C}_{0}(X,R)\), put \(f(X)=\{0\}\cup\{\,a_{f,i}\mid i\in A_{f}\,\}\), where \(A_{f}=\{0,\ldots,n\}\) for some \(n\in\mathbb{Z}_{\geq 0}\). We define \(K\colon\mathrm{C}_{0}(X,R)\to\mathrm{Seq}(\mathcal{O})\times\mathrm{Seq}(R)\) by \(K(f)=(\{f^{-1}(a_{f,i})\}_{i\in A_{f}},\{a_{f,i}\}_{i\in A_{f}})\). Then \(K\) is injective, and hence \(\mathrm{Card}(\mathrm{C}_{0}(X,R))\leq\max\{w(X),\mathrm{Card}(\mathrm{Seq}(R))\}=\tau\). We next show the converse inequality. For each \(O\in\mathcal{O}\) and \(r\in R\setminus\{0\}\), we define \(h_{O,r}\in\mathrm{C}_{0}(X,R)\) by
\[h_{O,r}(x)=\begin{cases}r&\text{if $x\in O$;}\\ 0&\text{if $x\not\in O$.}\end{cases}\]
In this situation, if \(O,O^{\prime}\in\mathcal{O}\) and \(r,r^{\prime}\in R\setminus\{0\}\) satisfy \((O,r)\neq(O^{\prime},r^{\prime})\), then we have \(h_{O,r}\neq h_{O^{\prime},r^{\prime}}\). Thus we obtain \(w(X)\times\mathrm{Card}(R)\leq\mathrm{Card}(\mathrm{C}_{0}(X,R))\). Since \(w(X)\times\mathrm{Card}(R)=\max\{w(X),\mathrm{Card}(R)\}\), the proof is finished.
**Lemma 2.24**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and let \(R\) be a range set with \(2\leq\mathrm{Card}(R)\). Then the set \(F\) of all \(f\in\mathrm{C}_{0}(X,R)\) whose images are finite is dense in \((\mathrm{C}_{0}(X,R),\triangledown)\) and satisfies \(\mathrm{Card}(F)\leq\max\{w(X),\mathrm{Card}(R)\}\)._
Proof.: The denseness of \(F\) follows from Proposition 2.16. The inequality \(\mathrm{Card}(F)\leq\max\{w(X),\mathrm{Card}(R)\}\) follows from the same counting argument as in the proof of Proposition 2.23.
**Proposition 2.25**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and let \(R\) be a range set with \(2\leq\operatorname{Card}(R)\). Then we have \(w(\operatorname{C}_{0}(X,R))=\max\{w(X),\operatorname{Card}(R)\}\)._
Proof.: Put \(\tau=\max\{w(X),\operatorname{Card}(R)\}\). Lemmas 2.4 and 2.24 yield the inequality \(w(\operatorname{C}_{0}(X,R))\leq\tau\). To prove the converse inequality, we divide the proof into two cases.
Case 1. \([\operatorname{Card}(R)\leq\aleph_{0}]\): Take \(r\in R\setminus\{0\}\). The set \(\operatorname{C}_{0}(X,\{0,r\})\) is an \(r\)-separated subset of \(\operatorname{C}_{0}(X,R)\) (see Lemma 2.22). Owing to the fact that \(\operatorname{Card}(\operatorname{C}_{0}(X,\{0,r\}))=w(X)=\tau\) (see Proposition 2.23) and Lemma 2.3, we see that \(\tau\leq w(\operatorname{C}_{0}(X,R))\).
Case 2. \([\aleph_{0}<\operatorname{Card}(R)]\): Using a partition of \((0,\infty)\) into countably many mutually disjoint intervals, the assumption that \(\aleph_{0}<\operatorname{Card}(R)\) guarantees the existence of \(s,t\in(0,\infty)\) such that \(\operatorname{Card}(R\cap(s,t))=\operatorname{Card}(R)\). Considering the subset \(\operatorname{C}_{0}(X,\{0\}\cup(R\cap(s,t)))\), as done in Case 1, we obtain \(\tau\leq w(\operatorname{C}_{0}(X,R))\).
Similarly to Proposition 2.25, we have:
**Proposition 2.26**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and \(R\) be a range set and \(r\in R\setminus\{0\}\). If every clopen subset \(G\) of \(X\) satisfies \(w(G)=w(X)\), and either of \(\operatorname{Card}(R)\leq w(X)\) or \(\operatorname{Card}(R\cap[0,r])=\operatorname{Card}(R)\) is satisfied, then for all \(f\in\operatorname{C}_{0}(X,R)\), we have \(w(B(f,r))=\max\{w(X),\operatorname{Card}(R)\}\)._
### Spaces of ultrametrics
We next consider the ultrametric space \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\).
#### 2.4.1. Metric properties
First we observe that ultrametrics are naturally continuous with respect to the nearly discrete topology.
**Proposition 2.27**.: _Let \(X\) be a topological space, \(R\) be a range set, and \(d\in\operatorname{Cpu}(X,R)\). Then \(d\colon X^{2}\to R\) is continuous with respect to \(M_{R}\). In particular, we have \(\operatorname{Cpu}(X,R)\subseteq\operatorname{C}_{0}(X\times X,R)\) and \(\triangledown|_{\operatorname{Cpu}(X,R)}=\mathcal{UD}_{X}^{R}\)._
Proof.: In this proof, we consider that \(X^{2}\) is equipped with the metric \(d\times d\) defined by \((d\times d)((x,y),(a,b))=d(x,a)\lor d(y,b)\). Fix \(p=(a,b)\in X^{2}\) and \(\epsilon\in(0,\infty)\). For all \((x,y)\in U(p,\epsilon)\), we obtain
\[d(a,b) \leq d(a,x)\lor d(x,y)\lor d(y,b)\] \[=d(x,y)\vee(d(a,x)\lor d(y,b))\leq d(x,y)\vee\epsilon.\]
Replacing the role of \((a,b)\) with that of \((x,y)\), we also obtain \(d(x,y)\leq d(a,b)\vee\epsilon\). From Lemma 2.10, it follows that \(M_{R}(d(x,y),d(a,b))\leq\epsilon\). Therefore \(d\colon X^{2}\to R\) is continuous with respect to \(M_{R}\).
**Corollary 2.28**.: _Let \(R\) be a range set, \(X\) be a compact space, and \(d\in\operatorname{Cpu}(X,R)\). Then the image of \(d\) is tenuous._
_Remark 2.1_.: Corollary 2.28 is already known; see for example, [2, The statement (1) in Theorem 1.7] and [33, (v) in Proposition 19.2]. Proposition 2.27 gives another explanation of this phenomenon on images of ultrametrics on compact spaces. Namely, that property can be reduced to the fact that ultrametrics are continuous with respect to not only the Euclidean topology but also the nearly discrete topology.
Similarly to Proposition 2.20, we obtain:
**Proposition 2.29**.: _Let \(X\) be a compact space, and \(R\) be a range set. Then the space \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\) is complete._
Proof.: Take a Cauchy sequence \(\{d_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) in \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\). By Proposition 2.20, we obtain a limit \(d\in\operatorname{C}_{0}(X^{2},R)\) with respect to \(\triangledown\). Since \(d\) is also a point-wise limit of \(\{d_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) with respect to the Euclidean topology, we conclude that \(d\) satisfies the strong triangle inequality and \(d(x,y)=d(y,x)\) for all \(x,y\in X\). This implies that \(d\in\operatorname{Cpu}(X,R)\).
**Definition 2.1**.: Let \((X,d)\) be an ultrametric space, and \(r\in(0,\infty)\). For a point \(p\in X\), we denote by \([p]_{\mathfrak{c}(d,r)}\) the closed ball \(B(p,r;d)\). We also denote by \([X]_{\mathfrak{c}(d,r)}\) the set \(\{\,[p]_{\mathfrak{c}(d,r)}\mid p\in X\,\}\). We define a metric \([d]_{\mathfrak{c}(r)}\) on \([X]_{\mathfrak{c}(d,r)}\) by \([d]_{\mathfrak{c}(r)}([p]_{\mathfrak{c}(d,r)},[q]_{\mathfrak{c}(d,r)})=d(p,q)\). Lemma 2.8 guarantees the well-definedness of \([d]_{\mathfrak{c}(r)}\). The space \(([X]_{\mathfrak{c}(d,r)},[d]_{\mathfrak{c}(r)})\) is an analogue of the \(t\)_-closed quotient_ of a metric space defined in [26]. The symbol "\(\mathfrak{c}\)" in these notations stands for "closed".
As a consequence of Propositions 2.16 and 2.27, we obtain the next corollary, which is an analogue of [26, Theorem 5.1] for spaces of ultrametrics.
**Corollary 2.30**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space. For all \(d,e\in\operatorname{Cpu}(X,R)\), the value \(\mathcal{UD}_{X}^{R}(d,e)\) is equal to the minimum \(r\in R\) such that_
1. _we have_ \([X]_{\mathfrak{c}(d,r)}=[X]_{\mathfrak{c}(e,r)}\)_. Namely,_ \(\{B(a,r;d)\mid a\in X\}=\{B(a,r;e)\mid a\in X\}\)_;_
2. _we have_ \([d]_{\mathfrak{c}(r)}=[e]_{\mathfrak{c}(r)}\)_._
_Moreover, the ultrametric \(\mathcal{UD}_{X}^{R}\) is \(R\)-valued._
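As an illustration of Corollary 2.30, let \(X=\{a,b,c\}\) be a three-point discrete space, \(R=[0,\infty)\), and define \(d,e\in\operatorname{Cpu}(X,R)\) by \(d(a,b)=1\), \(e(a,b)=1/2\), and \(d(a,c)=d(b,c)=e(a,c)=e(b,c)=2\). Both are ultrametrics, and \(\mathcal{UD}_{X}^{R}(d,e)=1\in\operatorname{Im}(d)\). Indeed, for \(r=1\) we have \([X]_{\mathfrak{c}(d,1)}=[X]_{\mathfrak{c}(e,1)}=\{\{a,b\},\{c\}\}\) and \([d]_{\mathfrak{c}(1)}=[e]_{\mathfrak{c}(1)}\), whereas for \(1/2\leq r<1\) the ball families differ (\(B(a,r;d)=\{a\}\) but \(B(a,r;e)=\{a,b\}\)), and for \(0<r<1/2\) the quotient metrics differ (\([d]_{\mathfrak{c}(r)}([a],[b])=1\neq 1/2=[e]_{\mathfrak{c}(r)}([a],[b])\)).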
#### 2.4.2. Results on dense subsets
We review the author's theorems on dense subsets of spaces of ultrametrics. In particular, Theorem 2.41 is a new result.
**Definition 2.2**.: For a range set \(R\), we say that a quintuple \(\mathfrak{A}=(X,I,r,\{B_{i}\}_{i\in I},\{e_{i}\}_{i\in I})\) is an \(R\)_-amalgamation system_ if the following conditions are satisfied:
1. the symbol \(X\) is a topological space, and \(I\) is a set (of indices);
2. we have \(r\in\operatorname{UMet}(I;R)\), where we consider that \(I\) is equipped with the discrete topology;
3. the family \(\{B_{i}\}_{i\in I}\) is a covering of \(X\) consisting of mutually disjoint clopen subsets of \(X\);
4. for each \(i\in I\), we have \(e_{i}\in\operatorname{Cpu}(B_{i},R)\).
Moreover, if the system \(\mathfrak{A}\) satisfies
1. for each \(i\in I\), we have \(e_{i}\in\operatorname{UMet}(B_{i};R)\),
then the system is said to be _pleasant_.
For a metric space \((X,d)\), and a subset \(A\) of \(X\), we denote by \(\operatorname{diam}_{d}(A)\) the diameter of \(A\) with respect to \(d\). The proof of the next proposition is essentially presented in [15, Proposition 3.1] and [19, Proposition 2.1].
**Proposition 2.31**.: _Let \(X\) be a topological space, \(R\) be a range set, and \(\mathfrak{A}=(X,I,r,\{B_{i}\}_{i\in I},\{e_{i}\}_{i\in I})\) be an \(R\)-amalgamation system. For each \(i\in I\), fix a point \(p_{i}\in B_{i}\). Define a function \(D\colon X^{2}\to[0,\infty)\) by_
\[D(x,y)=\begin{cases}e_{i}(x,y)&\text{if $x,y\in B_{i}$;}\\ e_{i}(x,p_{i})\lor r(i,j)\lor e_{j}(p_{j},y)&\text{if $x\in B_{i}$ and $y\in B_{j}$ with $i\neq j$.}\end{cases} \tag{2.1}\]
_Then \(D\) belongs to \(\operatorname{Cpu}(X,R)\). Moreover, the following statements are true._
1. _If_ \(X\) _is ultrametrizable and_ \(\mathfrak{A}\) _is pleasant, then_ \(D\in\operatorname{UMet}(X;R)\)_._
2. _If_ \(d\in\operatorname{Cpu}(X,R)\) _satisfies_ \(r(i,j)=d(p_{i},p_{j})\)_, and if_ \(\epsilon\in(0,\infty)\) _satisfies that_ \(\operatorname{diam}_{d}(B_{i})\leq\epsilon\) _and_ \(\operatorname{diam}_{e_{i}}(B_{i})\leq\epsilon\) _for all_ \(i\in I\)_, then_ \(\mathcal{UD}_{X}^{R}(D,d)\leq\epsilon\)_._
Based on Proposition 2.31, the function defined in (2.1) is called the _pseudo-ultrametric (ultrametric) associated with_ \(\mathfrak{A}\).
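For a concrete instance of (2.1), let \(X=\{1,2,3,4\}\) be a four-point discrete space, \(R=[0,\infty)\), \(I=\{0,1\}\), \(B_{0}=\{1,2\}\), \(B_{1}=\{3,4\}\), and \(r(0,1)=2\), and let \(e_{0}\) and \(e_{1}\) be the ultrametrics determined by \(e_{0}(1,2)=1/4\) and \(e_{1}(3,4)=1/2\). Choosing the base points \(p_{0}=1\) and \(p_{1}=3\), the associated ultrametric \(D\) satisfies \(D(1,2)=1/4\), \(D(3,4)=1/2\), and \(D(x,y)=2\) whenever \(x\in B_{0}\) and \(y\in B_{1}\); for example, \(D(2,4)=e_{0}(2,1)\lor r(0,1)\lor e_{1}(3,4)=1/4\lor 2\lor 1/2=2\). Since every cross distance dominates the internal distances, the strong triangle inequality is immediate.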
We next investigate dense subsets of spaces of continuous pseudo-ultrametrics.
**Proposition 2.32**.: _Let \(X\) be a \(0\)-dimensional compact metrizable space, and \(R\) be a characteristic range set. Then \(\operatorname{UMet}(X;R)\) is dense in \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\)._
Proof.: Take \(d\in\operatorname{Cpu}(X,R)\) and \(\epsilon\in(0,\infty)\). By the compactness of \(X\), we can find finitely many points \(\{p_{i}\}_{i\in I}\), where \(I=\{0,\ldots,k\}\), for which the family \(\{B(p_{i},\epsilon;d)\}_{i\in I}\) is a mutually disjoint covering of \(X\). We define a metric \(r\) on \(I\) by \(r(i,j)=d(p_{i},p_{j})\). For each \(i\in I\), using Proposition 2.5, take \(e_{i}\in\operatorname{UMet}(B(p_{i},\epsilon;d);R)\). Then \(\mathfrak{A}=(X,I,r,\{B(p_{i},\epsilon)\}_{i\in I},\{e_{i}\}_{i\in I})\) is a pleasant \(R\)-amalgamation system. Let \(D\) denote the pseudo-ultrametric associated with \(\mathfrak{A}\) and the base points \(p_{i}\). Then the statements (1) and (2) of Proposition 2.31 imply that \(D\in\operatorname{UMet}(X;R)\) and \(\mathcal{UD}_{X}^{R}(d,D)\leq\epsilon\). Therefore \(\operatorname{UMet}(X;R)\) is dense in \(\operatorname{Cpu}(X,R)\).
The following theorem is a reformulation of [15, Theorem 1.3].
**Theorem 2.33**.: _Let \(X\) be an ultrametrizable space, and \(R\) be a characteristic range set. Then \(X\) is compact if and only if the set of all doubling ultrametrics is dense \(F_{\sigma}\) in \((\operatorname{UMet}(X;R),\mathcal{UD}_{X}^{R})\)._
A range set \(R\) is _quasi-complete_ if there exists \(C\in[1,\infty)\) such that for every bounded subset \(E\) of \(R\), there exists \(s\in R\) with \(\sup E\leq s\leq C\cdot\sup E\). The next theorem is stated in [13, Theorem 6.7 and Proposition 6.9].
**Theorem 2.34**.: _Let \(X\) be an ultrametrizable space with an accumulation point, and \(R\) be a quasi-complete characteristic range set. Then the set of all non-doubling ultrametrics in \(\operatorname{UMet}(X;R)\) is dense \(G_{\delta}\) in \((\operatorname{UMet}(X;R),\mathcal{UD}_{X}^{R})\)._
A range set is _exponential_ if there exist \(a\in(0,1)\) and \(M\in[1,\infty)\) such that for every \(n\in\mathbb{Z}_{\geq 0}\) we have \([M^{-1}a^{n},Ma^{n}]\cap R\neq\emptyset\). The proof of the next theorem is presented in [15, Theorem 1.5].
**Theorem 2.35**.: _Let \(R\) be a characteristic range set, and let \(\Gamma\) denote the Cantor set. Then the following statements are true:_
1. _The set_ \(R\) _is exponential if and only if the set of all uniformly perfect ultrametrics in_ \(\operatorname{UMet}(\Gamma;R)\) _is dense_ \(F_{\sigma}\) _in the space_ \((\operatorname{UMet}(\Gamma;R),\mathcal{UD}_{\Gamma}^{R})\)_._
2. _The set of all non-uniformly perfect ultrametrics is dense_ \(G_{\delta}\) _in_ \((\operatorname{UMet}(\Gamma;R),\mathcal{UD}_{\Gamma}^{R})\)_._
For a metric space \((X,d)\), we denote by \(\dim_{\mathrm{H}}(X,d)\), \(\dim_{\mathrm{P}}(X,d)\), \(\overline{\dim}_{\mathrm{B}}(X,d)\), and \(\dim_{\mathrm{A}}(X,d)\) the Hausdorff dimension, the packing dimension, the upper box dimension, and the Assouad dimension of \((X,d)\), respectively. For the definitions and details of these dimensions, we refer the readers to [8], [9], [10], [14], and [16]. We also denote by \(\mathcal{A}\) the set of all \((a_{1},a_{2},a_{3},a_{4})\in[0,\infty]^{4}\) such that \(a_{1}\leq a_{2}\leq a_{3}\leq a_{4}\). Let \(X\) be a metrizable space, and \(R\) be a range set. For \(\mathbf{a}=(a_{1},a_{2},a_{3},a_{4})\in\mathcal{A}\), let \(\operatorname{UFD}(X;R;\mathbf{a})\) stand for the set of all bounded ultrametrics \(d\in\operatorname{UMet}(X;R)\) satisfying that \(\dim_{\mathrm{H}}(X,d)=a_{1}\), \(\dim_{\mathrm{P}}(X,d)=a_{2}\), \(\overline{\dim}_{\mathrm{B}}(X,d)=a_{3}\), and \(\dim_{\mathrm{A}}(X,d)=a_{4}\).
Even if the reader does not know the details of the dimensions explained above, all arguments in this paper are traceable using only the finite stability of dimensions stated in the next lemma. The proof can be found in [8], [9], and [10].
**Lemma 2.36**.: _Let \(\mathbf{Dim}\) be any one of \(\dim_{\mathrm{H}}\), \(\dim_{\mathrm{P}}\), \(\overline{\dim}_{\mathrm{B}}\), and \(\dim_{\mathrm{A}}\). If \((X,d)\) is a metric space, and \(A\) and \(B\) are subsets of \(X\), then we have \(\mathbf{Dim}(A\cup B,d)=\max\{\mathbf{Dim}(A,d),\mathbf{Dim}(B,d)\}\)._
**Definition 2.3**.: We denote by \(\Gamma\) the Cantor set. For \(\mathbf{a}\in\mathcal{A}\), a characteristic range set \(R\) is said to be _\((\Gamma,\mathbf{a})\)-admissible_ if \(\operatorname{UFD}(\Gamma;R;\mathbf{a})\neq\emptyset\).
For example, the characteristic range sets \([0,\infty)\) and \(\mathbb{Q}_{\geq 0}\) are \((\Gamma,\mathbf{a})\)-admissible for all \(\mathbf{a}\in\mathcal{A}\).
We define \(\mathbf{n}_{0}=(0,0,0,0)\in\mathcal{A}\). The next lemma is obtained by [14, Lemma 3.7] and the fact that we can take a strictly decreasing sequence convergent to \(0\) from every characteristic range set.
**Lemma 2.37**.: _Every characteristic range set is \((\Gamma,\mathbf{n}_{0})\)-admissible._
**Lemma 2.38**.: _For every \(\mathbf{a}\in\mathcal{A}\), if a characteristic range set \(R\) is \((\Gamma,\mathbf{a})\)-admissible, then so is \(R\cap[0,r]\) for all \(r\in R\setminus\{0\}\)._
Proof.: Take \(d\in\mathrm{UFD}(\Gamma;R;\mathbf{a})\) and define \(e\in\mathrm{UMet}(\Gamma;R\cap[0,r])\) by \(e(x,y)=\min\{d(x,y),r\}\). By the compactness of \(\Gamma\), we can find points \(\{p_{i}\}_{i=0}^{k}\) in \(\Gamma\) for which \(\bigcup_{i=0}^{k}B(p_{i},r/2;e)=\Gamma\). From the definition of \(e\), it follows that \(B(p_{i},r/2;e)=B(p_{i},r/2;d)\). Thus, according to Lemma 2.36 and \(d\in\mathrm{UFD}(\Gamma;R;\mathbf{a})\), we have \(e\in\mathrm{UFD}(\Gamma;R\cap[0,r];\mathbf{a})\).
By [16, The statement (N3) in Theorem 4.7], we obtain the next extension theorem of ultrametrics preserving fractal dimensions.
**Lemma 2.39**.: _Let \(X\) be a separable ultrametrizable space, \(R\) be a characteristic range set, and \(F\) be a closed subset of \(X\). Then for all \(\mathbf{a}\in\mathcal{A}\) and for all \(d\in\mathrm{UFD}(F;R;\mathbf{a})\), there exists \(D\in\mathrm{UFD}(X;R;\mathbf{a})\) such that \(D|_{F^{2}}=d\)._
**Lemma 2.40**.: _If \(X\) is an uncountable compact ultrametrizable space, \(\mathbf{a}\in\mathcal{A}\), and a characteristic range set \(R\) is \((\Gamma,\mathbf{a})\)-admissible, then \(\mathrm{UFD}(X;R;\mathbf{a})\neq\emptyset\)._
Proof.: By [22, Corollary 6.5], there exists a closed subspace \(F\) of \(X\) that is homeomorphic to the Cantor set. Since \(R\) is \((\Gamma,\mathbf{a})\)-admissible, we can take \(d\in\mathrm{UFD}(F;R;\mathbf{a})\). Applying Lemma 2.39 to \(X\), \(F\), and \(d\), we obtain \(D\in\mathrm{UFD}(X;R;\mathbf{a})\).
We next verify that \(\mathrm{UFD}(X;R;\mathbf{a})\) is dense in \(\mathrm{UMet}(X;R)\).
**Theorem 2.41**.: _If \(X\) is an uncountable compact ultrametrizable space, \(\mathbf{a}\in\mathcal{A}\), and \(R\) is a \((\Gamma,\mathbf{a})\)-admissible characteristic range set, then the set \(\mathrm{UFD}(X;R;\mathbf{a})\) is dense in \(\mathrm{UMet}(X;R)\)._
Proof.: Take \(d\in\mathrm{UMet}(X;R)\) and \(\epsilon\in(0,\infty)\). We also take \(\eta\in R\setminus\{0\}\) with \(\eta\leq\epsilon\) and define \(S_{\eta}=R\cap[0,\eta]\). Then Lemma 2.38 implies that \(S_{\eta}\) is \((\Gamma,\mathbf{a})\)-admissible. Since \(X\) is compact, there exists a sequence \(\{p_{i}\}_{i=0}^{n}\) such that the family \(\{B(p_{i},\epsilon;d)\}_{i=0}^{n}\) is a mutually disjoint covering of \(X\). Put \(I=\{0,\ldots,n\}\). In this setting, each \(B(p_{i},\epsilon)\) is also compact and ultrametrizable. Since \(X\) is uncountable, there exists \(k\in I\) such that \(B(p_{k},\epsilon)\) is uncountable. Lemma 2.40 enables us to take \(e_{k}\in\mathrm{UFD}(B(p_{k},\epsilon);S_{\eta};\mathbf{a})\). For each \(i\in I\setminus\{k\}\), we take \(e_{i}\in\mathrm{UFD}(B(p_{i},\epsilon);S_{\eta};\mathbf{n}_{0})\). We define a metric \(h\) on \(I\) by \(h(i,j)=d(p_{i},p_{j})\). Note that \(h\) generates the discrete topology on \(I\). Then \(\mathfrak{A}=(X,I,h,\{B(p_{i},\epsilon)\}_{i\in I},\{e_{i}\}_{i\in I})\) is a pleasant \(R\)-amalgamation system. Let \(D\) be the ultrametric associated with \(\mathfrak{A}\) and the base points \(p_{i}\). By (2) in Proposition
2.31, we have \(\mathcal{UD}_{X}^{R}(d,D)\leq\epsilon\). Due to \(e_{k}\in\operatorname{UFD}(B(p_{k},\epsilon);S_{\eta};\mathbf{a})\) and \(e_{i}\in\operatorname{UFD}(B(p_{i},\epsilon);S_{\eta};\mathbf{n}_{0})\) for all \(i\in I\setminus\{k\}\), Lemma 2.36 implies \(D\in\operatorname{UFD}(X;R;\mathbf{a})\). This finishes the proof.
#### 2.4.3. Topological weights of spaces of ultrametrics
We next estimate the topological weights of spaces of continuous pseudo-ultrametrics. Most of the arguments in this subsection are analogous to those in Subsection 2.3.2. By the definition of \(\operatorname{Cpu}(X,R)\), we notice the following lemma:
**Lemma 2.42**.: _If \(X\) is a topological space and range sets \(R\) and \(S\) satisfy \(S\subseteq R\), then \(\operatorname{Cpu}(X,S)\) is a metric subspace of \(\operatorname{Cpu}(X,R)\)._
**Lemma 2.43**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and \(R\) be a non-characteristic range set with \(2\leq\operatorname{Card}(R)\). Then we have \(\operatorname{Card}(\operatorname{Cpu}(X,R))=\max\{w(X),\operatorname{Card}(R)\}\)._
Proof.: Put \(\tau=\max\{w(X),\operatorname{Card}(R)\}\). Let \(\mathcal{O}\) be the set of all clopen subsets of \(X\). Similarly to Proposition 2.23, we have \(\operatorname{Card}(\operatorname{Cpu}(X,R))\leq\tau\). We next show the converse inequality. Let \(\mathcal{H}\) be a subset of \(\mathcal{O}\) such that
1. if \(O\in\mathcal{H}\), then \(X\setminus O\not\in\mathcal{H}\);
2. for every \(O\in\mathcal{O}\), we have \(O\in\mathcal{H}\) or \(X\setminus O\in\mathcal{H}\).
Notice that \(\operatorname{Card}(\mathcal{H})=w(X)\). For each \(O\in\mathcal{H}\) and \(r\in R\setminus\{0\}\), we define \(u_{O,r}\in\operatorname{Cpu}(X,R)\) by
\[u_{O,r}(x,y)=\begin{cases}0&\text{if $x,y\in O$, or $x,y\in X\setminus O$;}\\ r&\text{otherwise.}\end{cases}\]
If \(O,O^{\prime}\in\mathcal{H}\) and \(r,r^{\prime}\in R\setminus\{0\}\) satisfy \((O,r)\neq(O^{\prime},r^{\prime})\), then \(u_{O,r}\neq u_{O^{\prime},r^{\prime}}\). Therefore we have \(\tau\leq\operatorname{Card}(\operatorname{Cpu}(X,R))\).
Corresponding to Lemma 2.43, we can prove the next lemma:
**Lemma 2.44**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and \(R\) be a range set. Let \(G\) be the set of all \(d\in\operatorname{Cpu}(X,R)\) such that \(\operatorname{Im}(d)\) is finite. Then the set \(G\) is dense in \(\operatorname{Cpu}(X,R)\) and satisfies \(\operatorname{Card}(G)\leq\max\{w(X),\operatorname{Card}(R)\}\)._
The following proposition can be proven in the same way as Proposition 2.25.
**Proposition 2.45**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, and \(R\) be a range set. Then \(w(\operatorname{Cpu}(X,R))=\max\{w(X),\operatorname{Card}(R)\}\)._
Similarly to Proposition 2.45, we obtain:
**Proposition 2.46**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space such that \(w(X)\) is infinite, \(R\) be a range set, and \(r\in R\setminus\{0\}\). If every clopen subset \(G\) of \(X\) satisfies \(w(G)=w(X)\), and either of \(\operatorname{Card}(R)\leq w(X)\) or \(\operatorname{Card}(R\cap[0,r])=\operatorname{Card}(R)\) is satisfied, then for all \(d\in\operatorname{Cpu}(X,R)\), we have \(w(B(d,r))=\max\{w(X),\operatorname{Card}(R)\}\)._
### Injective ultrametric spaces
We show some properties of injective ultrametric spaces. For a class \(\mathcal{C}\) of finite metric spaces, we say that a metric space \((X,d)\) satisfies the _one-point extension property for \(\mathcal{C}\)_ if for every finite metric space \((A\sqcup\{\omega\},e)\) in \(\mathcal{C}\) (with the specific point \(\omega\)), and for every isometric embedding \(\phi\colon A\to X\), there exists a point \(q\in X\) such that \(d(\phi(a),q)=e(a,\omega)\) for all \(a\in A\). Using induction, we obtain:
**Lemma 2.47**.: _Let \(\mathcal{C}\) be a class of finite metric spaces. A metric space \((X,d)\) is \(\mathcal{C}\)-injective if and only if it satisfies the one-point extension property for \(\mathcal{C}\)._
The next lemma gives a more practical condition for the injectivity.
**Lemma 2.48**.: _Let \(\mathcal{C}\) be a class of finite ultrametric spaces. An ultrametric space \((X,d)\) is \(\mathcal{C}\)-injective if and only if it satisfies the following property:_
* (Ex) _Let_ \((A\sqcup\{\omega\},e)\) _be an arbitrary finite ultrametric space in_ \(\mathcal{C}\) _and_ \(\phi\colon A\to X\) _be an isometric map. Let_ \(p\in A\) _be a point satisfying that_ \(e(p,\omega)=\min_{a\in A}e(a,\omega)\)_. Then there exists a point_ \(q\in X\) _such that if_ \(a\in A\) _satisfies_ \(e(a,\omega)=e(p,\omega)\)_, then_ \(e(a,\omega)=d(\phi(a),q)\)_._
Proof.: It suffices to show that if \((X,d)\) satisfies the property (Ex), then it is \(\mathcal{C}\)-injective. Let \((A\sqcup\{\omega\},e)\) be an ultrametric space in \(\mathcal{C}\), and \(\phi\colon A\to X\) be an isometric embedding, and take \(p\in A\) with \(e(p,\omega)=\min_{a\in A}e(a,\omega)\). Using the property (Ex), we obtain \(q\in X\) such that if \(a\in A\) satisfies \(e(a,\omega)=e(p,\omega)\), then \(e(a,\omega)=d(\phi(a),q)\). Notice that \(e(p,\omega)=d(\phi(p),q)\). To show that \((X,d)\) satisfies the one-point extension property for \(\mathcal{C}\), take \(a\in A\). If \(e(p,\omega)=e(a,\omega)\), then the property of \(q\in X\) implies \(e(a,\omega)=d(\phi(a),q)\). If \(e(p,\omega)<e(a,\omega)\), then Lemma 2.6 implies \(e(p,a)=e(a,\omega)\). Due to \(d(\phi(p),q)=e(p,\omega)\) and \(d(\phi(p),\phi(a))=e(p,a)\), we have \(d(\phi(p),q)<d(\phi(p),\phi(a))\). According to Lemma 2.6 again, we obtain \(d(\phi(a),q)=d(\phi(a),\phi(p))\). From \(e(p,a)=e(a,\omega)\) and \(e(a,\omega)=d(\phi(a),\phi(p))\), it follows that \(d(\phi(a),q)=e(a,\omega)\). In any case, we obtain \(d(\phi(a),q)=e(a,\omega)\). Therefore \((X,d)\) satisfies the one-point extension property for \(\mathcal{C}\), and hence it is \(\mathcal{C}\)-injective.
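To see the property (Ex) at work, suppose for illustration that \(A=\{a_{1},a_{2}\}\) with \(e(a_{1},\omega)=1\) and \(e(a_{2},\omega)=2\); then Lemma 2.6 forces \(e(a_{1},a_{2})=2\). If \(q\in X\) satisfies \(d(\phi(a_{1}),q)=1\), then \(d(\phi(a_{1}),q)=1<2=d(\phi(a_{1}),\phi(a_{2}))\), and Lemma 2.6 yields \(d(\phi(a_{2}),q)=d(\phi(a_{2}),\phi(a_{1}))=2\) automatically. Thus only the distances to the points closest to \(\omega\) need to be realized, exactly as (Ex) asserts.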
The injectivity is inherited by dense subsets. The next lemma can be proven in a similar way to [11, Proposition 2.4].
**Lemma 2.49**.: _Let \(R\) be a range set, and \((X,d)\) be an \(\mathcal{N}(R)\)-injective ultrametric space. If \(H\) is a dense subset of \(X\), then \((H,d)\) is \(\mathcal{N}(R)\)-injective._
## 3. Function spaces
In this section, we explain our first construction of injective ultrametric space.
**Definition 3.1**.: Let \(X\) be a topological space, \(R\) be a range set, and \(E\) be a subset of \(\operatorname{C}_{0}(X,R)\). For a non-empty subset \(A\) of \(E\), for \(\zeta\in A\), and for \(r\in R\setminus\{0\}\), the set \(E\) is said to be _\((A,\zeta,r)\)-attachable_ if there exists \(g\in E\) such that
* (A1) for all \(h\in A\) with \(\triangledown(h,\zeta)\leq r\), we have \(g^{-1}(r)\neq h^{-1}(r)\);
* (A2) we have \(g(x)=\zeta(x)\) whenever \(x\in X\) satisfies \(r<(\zeta\lor g)(x)\).
We say that \(E\) is _full-attachable_ if it is \((A,\zeta,r)\)-attachable for every finite subset \(A\) of \(E\), for every \(\zeta\in E\), and for every \(r\in R\setminus\{0\}\).
_Remark 3.1_.: According to Corollary 2.17, the pair of the conditions (A1) and (A2) is equivalent to the property that if \(h\in A\) satisfies \(\triangledown(h,\zeta)\leq r\), then \(\triangledown(\zeta,g)=r\).
**Theorem 3.1**.: _Let \(R\) be a range set, and \(X\) be a \(0\)-dimensional compact Hausdorff space. A subset \(E\) of \((\operatorname{C}_{0}(X,R),\triangledown)\) is \(\mathcal{N}(R)\)-injective if and only if \(E\) is full-attachable._
Proof.: First assume that \(E\) is \(\mathcal{N}(R)\)-injective. Take an arbitrary non-empty finite subset \(A\) of \(E\), \(\zeta\in A\), and \(r\in R\setminus\{0\}\). Put \(B=A\sqcup\{\omega\}\) and define a metric \(e\) on \(B\) by \(e|_{A^{2}}=\triangledown|_{A^{2}}\) and \(e(x,\omega)=\triangledown(x,\zeta)\lor r\) for all \(x\in A\). We also define an isometric embedding \(\phi\colon A\to E\) by \(\phi(a)=a\). Since \(E\) is \(\mathcal{N}(R)\)-injective, we can take \(g\in E\) such that \(e(a,\omega)=\triangledown(\phi(a),g)\) for all \(a\in A\); in particular, \(\triangledown(h,g)=r\) for every \(h\in A\) with \(\triangledown(h,\zeta)\leq r\). By Proposition 2.16 and Corollary 2.17, we conclude that \(E\) is full-attachable.
Next assume that \(E\) is full-attachable. Let \((F\sqcup\{\omega\},e)\) be an arbitrary finite ultrametric space in \(\mathcal{N}(R)\), and \(\phi\colon F\to E\) be an isometric embedding. Put \(r=\min_{a\in F}e(a,\omega)\), and take \(p\in F\) with \(r=e(p,\omega)\). Since \(E\) is \((\phi(F),\phi(p),r)\)-attachable, we can find \(g\in E\) satisfying the properties (A1) and (A2). To show that \(E\) satisfies the property (Ex) in Lemma 2.48, take \(a\in F\) with \(e(a,\omega)=e(p,\omega)(=r)\). Then \(e(a,p)\leq r\), and hence \(\triangledown(\phi(a),\phi(p))\leq r\). According to (A1), we have \(\phi(a)^{-1}(r)\neq g^{-1}(r)\). On account of Proposition 2.16, the inequality \(\triangledown(\phi(a),\phi(p))\leq r\) shows that \(\phi(a)(x)=\phi(p)(x)\) whenever \(r<(\phi(a)\vee\phi(p))(x)\). Due to the property (A2), we also have \(g(x)=\phi(p)(x)\) whenever \(r<(g\vee\phi(p))(x)\). Therefore, for all \(x\) such that \(r<(g\vee\phi(a))(x)\), we obtain \(g(x)=\phi(a)(x)\). By Corollary 2.17, we notice that \(\triangledown(\phi(a),g)=r\). Namely, the space \(E\) satisfies the property (Ex) in Lemma 2.48, and hence we conclude that \(E\) is \(\mathcal{N}(R)\)-injective. This finishes the proof of the theorem.
We shall provide examples of \(\mathcal{N}(R)\)-injective ultrametric spaces for a range set \(R\).
**Theorem 3.2**.: _Let \(R\) be a range set, and \(X\) be a \(0\)-dimensional compact Hausdorff space without isolated points. Then the \(R\)-valued ultrametric space \((\operatorname{C}_{0}(X,R),\triangledown)\) is \(\mathcal{N}(R)\)-injective and complete._
Proof.: According to Corollary 2.21 and Theorem 3.1, we only need to show that \((\operatorname{C}_{0}(X,R),\triangledown)\) is full-attachable. Take a non-empty finite
subset \(A\) of \(\mathrm{C}_{0}(X,R)\), \(\zeta\in A\), and \(r\in R\setminus\{0\}\). Put \(B=\zeta^{-1}([0,r])\). Then \(B\) is a non-empty clopen subset of \(X\) (note that \(0\in\operatorname{Im}(\zeta)\)). Since \(X\) has no isolated points, the clopen set \(B\) is infinite and contains infinitely many non-empty proper clopen subsets. Hence, since \(A\) is finite, we can find non-empty clopen subsets \(K\) and \(L\) of \(X\) such that \(K\cap L=\emptyset\), \(K\cup L=B\), and \(K\neq h^{-1}(r)\) for all \(h\in A\). Notice that \(K\neq B\). We define a map \(g\colon X\to R\) by
\[g(x)=\begin{cases}\zeta(x)&\text{if }x\not\in B;\\ r&\text{if }x\in K;\\ 0&\text{if }x\in L.\end{cases}\]
Then \(g\in\mathrm{C}_{0}(X,R)\) and \(g^{-1}(r)=K\), and hence \(g\) satisfies the properties (A1) and (A2). Therefore the space \((\mathrm{C}_{0}(X,R),\triangledown)\) is full-attachable, and hence it is \(\mathcal{N}(R)\)-injective.
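For instance, take \(X=\Gamma\) (the Cantor set), \(R=[0,\infty)\), \(A=\{\zeta\}\) with \(\zeta\) the constant \(0\) function, and \(r=1\). Then \(B=\Gamma\); splitting \(\Gamma=K\sqcup L\) into two non-empty clopen pieces and setting \(g=1\) on \(K\) and \(g=0\) on \(L\) gives \(g\in\mathrm{C}_{0}(\Gamma,R)\) with \(g^{-1}(1)=K\neq\emptyset=\zeta^{-1}(1)\), and Corollary 2.17 yields \(\triangledown(\zeta,g)=1=r\), as the attachability requires.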
**Theorem 3.3**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space, and \(R\) be a range set. If \(3\leq\mathrm{Card}(R)\), and \(X\) has an isolated point, then \((\mathrm{C}_{0}(X,R),\triangledown)\) is not \(\mathcal{N}(R)\)-injective._
Proof.: Take \(r,l\in R\) with \(0<r<l\), and let \(p\in X\) be an isolated point of \(X\), i.e., the singleton \(\{p\}\) is open. Define \(\zeta\colon X\to R\) by
\[\zeta(x)=\begin{cases}l&\text{if }x\in X\setminus\{p\};\\ 0&\text{if }x=p.\end{cases}\]
Then \(\zeta\in\mathrm{C}_{0}(X,R)\). In this setting, the space \((\mathrm{C}_{0}(X,R),\triangledown)\) is not \((\{\zeta\},\zeta,r)\)-attachable. Indeed, if there were \(g\in\mathrm{C}_{0}(X,R)\) satisfying the properties (A1) and (A2), then \(g^{-1}(r)\neq\emptyset\) (by (A1), since \(\zeta^{-1}(r)=\emptyset\)), \(g^{-1}(0)\neq\emptyset\), and \(g(x)=\zeta(x)\) for all \(x\in X\setminus\{p\}\) (by (A2), since \(r<l=\zeta(x)\) for such \(x\)). Then \(g\) would satisfy \(g^{-1}(r)=\{p\}\) and \(g^{-1}(0)=\{p\}\). This is impossible.
Combining Theorems 3.2 and 3.3, we obtain a characterization of spaces without isolated points.
**Corollary 3.4**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space. Then \(X\) has no isolated points if and only if \((\mathrm{C}_{0}(X,R),\triangledown)\) is \(\mathcal{N}(R)\)-injective for every range set \(R\) with \(3\leq\mathrm{Card}(R)\)._
According to Proposition 2.25 and Theorem 3.2, the next proposition holds true. Recall that \(\Gamma\) stands for the Cantor set.
**Proposition 3.5**.: _If \(R\) is a countable or finite range set, then the space \((\mathrm{C}_{0}(\Gamma,R),\triangledown)\) is the (separable) \(R\)-Urysohn universal \(R\)-valued ultrametric space._
We denote by \(\mathbb{O}\) the one-point compactification of the countable discrete space \(\mathbb{Z}_{\geq 0}\). Notice that \(\mathbb{O}=\mathbb{Z}_{\geq 0}\sqcup\{\infty\}\). As noted in Section 1, the construction of \(\mathrm{C}_{0}(X,R)\) can be considered as a generalization of Vestfrid's universal space \(V\) in [37] (originally, it is denoted by "\(**\)"), which is the space of all sequences \(\{x_{i}\}_{i\in\mathbb{Z}_{\geq 0}}\) in \(R\) such that \(x_{i+1}\leq x_{i}\) for all \(i\in\mathbb{Z}_{\geq 0}\) and \(x_{i}\to 0\) as \(i\to\infty\), equipped with the metric \(u\) defined by \(u(\{x_{i}\}_{i\in\mathbb{Z}_{\geq 0}},\{y_{i}\}_{i\in\mathbb{Z}_{\geq 0}})=\max\{\,x_{k}\lor y_{k}\mid k\in\mathbb{Z}_{\geq 0},\ x_{k}\neq y_{k}\,\}\) if there exists \(i\in\mathbb{Z}_{\geq 0}\) with \(x_{i}\neq y_{i}\); otherwise, the distance is \(0\). By Proposition 2.15,
the space \((V,u)\) can be regarded as the space of all continuous maps \(f\colon\mathbb{O}\to R\) such that \(f(\infty)=0\) and \(f(n+1)\leq f(n)\) for all \(n\in\mathbb{Z}_{\geq 0}\), equipped with \(\triangledown\). Using Theorem 3.1, we can prove the injectivity of \(V\).
**Proposition 3.6**.: _The ultrametric space \((V,u)\) explained above is \(\mathcal{N}(R)\)-injective and complete._
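For example, for the two decreasing sequences \(x=(1,1/2,1/4,1/8,\ldots)\) and \(y=(1,1/3,1/9,1/27,\ldots)\) in \(V\) (with \(R=[0,\infty)\)), the indices where they differ are \(1,2,3,\ldots\), and the maximum of \(x_{k}\lor y_{k}\) over these indices is attained at the first one, so \(u(x,y)=x_{1}\lor y_{1}=1/2\). Since the sequences are decreasing, this maximum is always attained at the first index of disagreement.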
## 4. Spaces of ultrametrics
We begin with a variant of Theorem 3.1 for ultrametrics. A point in a topological space is an _accumulation point_ if it is not isolated.
**Lemma 4.1**.: _Let \(X\) be a \(0\)-dimensional compact Hausdorff space, \(R\) be a range set, and \(E\) be a subset of \(\operatorname{Cpu}(X,R)\). Then \(E\) is \(\mathcal{N}(R)\)-injective if and only if \(E\) satisfies that for every finite subset \(A\) of \(E\), \(\zeta\in A\), and \(r\in R\setminus\{0\}\), there exists \(g\in E\) such that_
1. _if_ \(h\in A\) _satisfies_ \(\mathcal{UD}^{R}_{X}(\zeta,h)\leq r\)_, then there exists_ \(a_{h}\in X\) _such that_ \(\mathbb{S}(a_{h},r;h)\neq\mathbb{S}(a_{h},r;g)\)_;_
2. _we have_ \(\mathcal{UD}^{R}_{X}(\zeta,g)\leq r\)_._
Proof.: First assume that \(E\) is \(\mathcal{N}(R)\)-injective. Then, owing to Theorem 3.1, we notice that \(E\) is full-attachable. The conditions (A1) and (A2) imply (B1) and (B2), respectively.
Next assume that \(E\) satisfies (B1) and (B2). According to Theorem 3.1, it suffices to show that \(E\) is full-attachable in \(\operatorname{C}_{0}(X^{2},R)\). Take a finite subset \(A\) of \(E\), \(\zeta\in A\), and \(r\in R\setminus\{0\}\). By (B1), we may assume that for each \(h\in A\) there exists \(b\in\mathbb{S}(a_{h},r;h)\) with \(b\not\in\mathbb{S}(a_{h},r;g)\). Then \((a_{h},b)\in h^{-1}(r)\) and \((a_{h},b)\not\in g^{-1}(r)\), i.e., \(h^{-1}(r)\neq g^{-1}(r)\). Thus \(E\) satisfies the condition (A1). The condition (A2) follows from (B2) and the fact that \(\mathcal{UD}^{R}_{X}=\triangledown|_{\operatorname{Cpu}(X,R)}\) (see Proposition 2.27). Therefore the set \(E\) is full-attachable.
**Theorem 4.2**.: _Let \(X\) be an infinite \(0\)-dimensional compact Hausdorff space possessing an accumulation point, and \(R\) be a range set. Then the space \((\operatorname{Cpu}(X,R),\mathcal{UD}^{R}_{X})\) is \(\mathcal{N}(R)\)-injective and complete._
Proof.: Take a finite subset \(A\) of \(\operatorname{Cpu}(X,R)\), \(\zeta\in A\), and \(r\in R\setminus\{0\}\). Let \(p\) be an accumulation point of \(X\). By the compactness of \(X\), we can take \(\{p_{i}\}_{i\in I}\), where \(I=\{0,\ldots,n\}\), for which the family \(\{B(p_{i},r;\zeta)\}_{i\in I}\) is a mutually disjoint covering of \(X\). Due to Lemma 2.7, we may assume that \(p_{0}=p\). Put \(G=\bigcup_{h\in A}\mathbb{S}(p,r;h)\). Since the set \(\mathbb{S}(p,r;h)\) is clopen in \(X\) for all \(h\in A\), the set \(G\) is clopen in \(X\) and \(p\not\in G\). Using the \(0\)-dimensionality of \(X\), we can find a clopen neighborhood \(N\) of \(p\) such that \(N\subseteq B(p,r;\zeta)\) and \(N\cap G=\emptyset\). If necessary, replacing \(N\) with a smaller clopen neighborhood, we may assume that \(G\cap B(p,r;\zeta)\) is a proper subset of \(B(p,r;\zeta)\setminus N\), which is possible since \(p\) is an accumulation point. We define a pseudo-ultrametric \(u_{0}\)
on \(B(p,r;\zeta)\) by
\[u_{0}(x,y)=\begin{cases}0&\text{if $x,y\in N$ or $x,y\in B(p,r;\zeta)\setminus N$;}\\ r&\text{otherwise}.\end{cases}\]
For each \(i\in I\setminus\{0\}\), we define a pseudo-ultrametric \(u_{i}\) on \(B(p_{i},r;\zeta)\) by \(u_{i}=\zeta|_{B(p_{i},r;\zeta)^{2}}\). We define a metric \(v\) on \(I\) by \(v(i,j)=\zeta(p_{i},p_{j})\). Then \(\mathfrak{A}=(X,I,v,\{B(p_{i},r;\zeta)\}_{i\in I},\{u_{i}\}_{i\in I})\) is an \(R\)-amalgamation system. According to (2) in Proposition 2.31, the pseudo-ultrametric \(g\) associated with \(\mathfrak{A}\) and the base points \(p_{i}\) satisfies \(\mathcal{UD}_{X}^{R}(\zeta,g)\leq r\). Since \(\mathbb{S}(p,r;g)=B(p,r;\zeta)\setminus N\) and \(\mathbb{S}(p,r;h)\subseteq G\) for all \(h\in A\), we have \(\mathbb{S}(p,r;g)\neq\mathbb{S}(p,r;h)\) for all \(h\in A\). Thus the conditions (B1) and (B2) are satisfied. Therefore Lemma 4.1 implies that \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\) is \(\mathcal{N}(R)\)-injective. The completeness follows from Proposition 2.29.
Lemma 2.45 yields the next proposition:
**Proposition 4.3**.: _If \(X\) is an infinite compact ultrametrizable space and \(R\) is a finite or countable range set, then the space \((\operatorname{Cpu}(X,R),\mathcal{UD}_{X}^{R})\) is the (separable) \(R\)-Urysohn universal \(R\)-valued ultrametric space._
Due to the author's results on dense subsets of spaces of ultrametrics, we obtain the following theorem. Recall that \(\Gamma\) is the Cantor set.
**Theorem 4.4**.: _Let \(R\) be a characteristic range set. Then the following statements are true:_
1. _If_ \(X\) _is an infinite_ \(0\)_-dimensional compact metrizable space, then_ \((\operatorname{UMet}(X;R),\mathcal{UD}_{X}^{R})\) _is_ \(\mathcal{N}(R)\)_-injective._
2. _If_ \(X\) _is an infinite compact ultrametrizable space, then the set of all doubling ultrametrics in_ \(\operatorname{UMet}(X;R)\) _is_ \(\mathcal{N}(R)\)_-injective._
3. _If_ \(X\) _is an infinite compact ultrametrizable space and_ \(R\) _is quasi-complete, then the set of all non-doubling ultrametrics in the space_ \(\operatorname{UMet}(X;R)\) _is_ \(\mathcal{N}(R)\)_-injective._
4. _If_ \(R\) _is exponential, then the set of all uniformly perfect ultrametrics in_ \(\operatorname{UMet}(\Gamma;R)\) _is_ \(\mathcal{N}(R)\)_-injective._
5. _The set of all non-uniformly perfect ultrametrics in_ \(\operatorname{UMet}(\Gamma;R)\) _is_ \(\mathcal{N}(R)\)_-injective._
6. _If_ \(X\) _is an uncountable compact ultrametrizable space,_ \(\mathbf{a}\in\mathcal{A}\)_, and_ \(R\) _is_ \((\Gamma,\mathbf{a})\)_-admissible, then_ \(\operatorname{UFD}(X;\mathbf{a};R)\) _is_ \(\mathcal{N}(R)\)_-injective._
Proof.: According to Lemma 2.49, it suffices to show that all the sets in the statements are dense in \(\operatorname{Cpu}(X,R)\). Remark that if \(X\) is an infinite compact metrizable space, then it has an accumulation point. The statement (1) is deduced from Proposition 2.32 and Theorem 4.2. Using the statement (1), in what follows we only need to show that the sets in the statements are dense in \(\operatorname{UMet}(X;R)\). The statements (2) and (3) follow from Proposition 2.33 and Theorem 2.34, respectively. The statements (4) and (5) are proven by Theorem 2.35. The statement (6) is guaranteed by Theorem 2.41.
_Remark 4.1_.: Even if \(X\) is countably infinite, Theorem 4.4 and the statement (6) in Theorem 2.41 remain true with some suitable modifications. This is based on the existence of a metric in \(\operatorname{UFD}(\mathbb{O};\mathbb{R}_{\geq 0};\mathbf{a}(u,v))\), for all \(u,v\in[0,\infty]\) with \(u\leq v\), where \(\mathbf{a}(u,v)=(0,0,u,v)\) and \(\mathbb{O}\) is the one-point compactification of the countable discrete space. Since the details of the existence are slightly complicated, we omit the proof.
## 5. Additional remarks
For an infinite cardinal \(\kappa\), let \(N(\kappa)\) denote the countable product of the discrete space \(\kappa\), which is sometimes called the \(0\)_-dimensional Baire space of weight \(\kappa\)_. Notice that \(N(\aleph_{0})\) is homeomorphic to the space of irrational numbers. The following is a topological characterization of \(N(\kappa)\) (see [34, Theorem 1]).
**Theorem 5.1**.: _Let \(\kappa\) be an infinite cardinal and \(X\) be a completely ultrametrizable space. Assume that_
1. _the space_ \(X\) _has a dense subset of cardinal_ \(\kappa\)_;_
2. _every non-empty open subset of_ \(X\) _contains a closed discrete space of cardinal_ \(\kappa\)_._
_Then the space \(X\) is homeomorphic to the \(0\)-dimensional Baire space \(N(\kappa)\) of weight \(\kappa\)._
_Remark 5.1_.: A topological space \(X\) satisfies the conditions (1) and (2) in Theorem 5.1 if and only if every non-empty open subset \(O\) of \(X\) satisfies \(w(O)=\kappa\).
In [23, 24], Koshino determined the topological types of the spaces \(\operatorname{Met}(X)\) of metrics on a metrizable space \(X\) generating the same topology as \(X\), equipped with the uniform topology or the compact-open topology. We now show a non-Archimedean analogue of Koshino's work.
**Theorem 5.2**.: _Let \(\kappa\) be an infinite cardinal, \(X\) be a \(0\)-dimensional compact Hausdorff space with \(w(X)=\kappa\) possessing no isolated points, and \(Y\) be a \(0\)-dimensional compact Hausdorff space with \(w(Y)=\kappa\) possessing an accumulation point. Let \(R\) be a characteristic range set. Put \(\tau=\max\{\kappa,\operatorname{Card}(R)\}\). If either \(\operatorname{Card}(R)\leq\kappa\) or \(\operatorname{Card}(R\cap[0,r])=\operatorname{Card}(R)\) for all \(r\in R\setminus\{0\}\), then \(\operatorname{C}_{0}(X,R)\) and \(\operatorname{Cpu}(Y,R)\) are homeomorphic to \(N(\tau)\)._
Proof.: Based on Theorem 5.1, the case of \(\operatorname{C}_{0}(X,R)\) is deduced from Propositions 2.25 and 2.26. Similarly, the case of \(\operatorname{Cpu}(Y,R)\) follows from Propositions 2.45 and 2.46.
**Corollary 5.3**.: _For every countable characteristic range set \(R\), the \(R\)-Urysohn universal ultrametric space is homeomorphic to the space of irrational numbers._
_Remark 5.2_.: Theorem 5.2 is a generalization of the argument in the proof of [1, Theorem 2].
_Remark 5.3_.: Let \(\kappa\), \(R\), \(X\) and \(Y\) be the same objects as in Theorem 5.2. Of course, if \(\kappa=\aleph_{0}\) and \(R\) is finite or countable, then \((\mathrm{C}_{0}(X,R),\,\triangledown)\) and \((\mathrm{Cpu}(X,R),\,\mathcal{UD}_{X}^{R})\) are isometric to each other (see [11]). However, in general, the author does not know whether they are isometric. In [20], the author proves that if \(\kappa=\aleph_{0}\) and \(R\) is uncountable, then the two spaces explained above and the non-Archimedean Gromov-Hausdorff space associated with \(R\) are isometric to each other.
_Remark 5.4_.: For every infinite cardinal \(\kappa\), there exists a \(0\)-dimensional compact Hausdorff space \(X\) with \(w(X)=\kappa\) possessing no isolated points. For example, the product \(\{0,1\}^{\kappa}\) is such a space. It also becomes an example of a \(0\)-dimensional compact Hausdorff space \(X\) with \(w(X)=\kappa\) possessing an accumulation point.
**Question 5.4**.: For every range set \(R\), are there more constructions of \(\mathcal{N}(R)\)-injective \(R\)-valued ultrametric spaces?
_Acknowledgements_.: The author would like to thank Tatsuya Goto for helpful comments.
|
2304.13791 | NLO + parton-shower generator for Wc production in the POWHEG BOX RES | We present the implementation of a next-to-leading-order plus parton-shower
event generator for the hadronic production of a heavy charm quark accompanied
by a leptonically-decaying $W$ boson in the POWHEG BOX RES framework. We
consider both signatures, i.e. $pp\to c\, \ell^- \bar{\nu}_{\ell}$ and $pp\to
\bar{c} \,\ell^+ {\nu}_{\ell}$, and we include exactly off-shell and
spin-correlation effects, as well as off-diagonal Cabibbo-Kobayashi-Maskawa
contributions. We present particle-level results, obtained interfacing our code
with the Herwig7.2 and Pythia8.3 shower Monte Carlo event generators, including
hadronization and underlying-event effects, and compare them against the data
collected by the CMS Collaboration at $\sqrt{s}=13$ TeV. | Silvia Ferrario Ravasio, Carlo Oleari | 2023-04-26T19:15:51Z | http://arxiv.org/abs/2304.13791v1 | # NLO + parton-shower generator for \(Wc\) production in the POWHEG BOX RES
###### Abstract
We present the implementation of a next-to-leading-order plus parton-shower event generator for the hadronic production of a heavy charm quark accompanied by a leptonically-decaying \(W\) boson in the POWHEG BOX RES framework. We consider both signatures, i.e. \(pp\to c\,\ell^{-}\bar{\nu}_{\ell}\) and \(pp\to\bar{c}\,\ell^{+}\nu_{\ell}\), and we include exactly off-shell and spin-correlation effects, as well as off-diagonal Cabibbo-Kobayashi-Maskawa contributions. We present particle-level results, obtained interfacing our code with the Herwig7.2 and Pythia8.3 shower Monte Carlo event generators, including hadronization and underlying-event effects, and compare them against the data collected by the CMS Collaboration at \(\sqrt{s}=13\) TeV.
## 1 Introduction
The production of a \(W\) boson in association with a charm quark (\(c\)), tagged by the full reconstruction of the charmed \(D\) hadron from the clear experimental signature of its decay products, is one of the main probes of the strange-quark content of the colliding particles at a hadron collider, such as the LHC. In fact, the leading contribution to \(Wc\) production (\(W^{-}c\) and \(W^{+}\bar{c}\)) comes from the scattering of a strange quark (\(s\) or \(\bar{s}\)) and a gluon from the two incoming hadrons, since the corresponding Cabibbo-Kobayashi-Maskawa (CKM) matrix element is the dominant one for this channel. The two leading-order (LO) Feynman diagrams for \(Wc\) production are depicted in Fig. 1.
As first illustrated in Ref. [1], this process can be used to put constraints on the strange-quark content of the colliding hadrons, i.e. protons at the Large Hadron Collider (LHC). Several measurements of \(Wc\) production have already been performed at the LHC as it went through the increasing energy upgrade from 7 to 13 TeV, by the ATLAS [2; 3], the CMS [4; 5; 6] and the LHCb Collaborations [7].
The next-to-leading order (NLO) QCD cross section for \(Wc\) production has been known for a while, and studied both at the Tevatron [8] and at the LHC [9]. More recently, the complete set of next-to-next-to-leading order (NNLO) QCD corrections to the dominant
CKM-diagonal contribution has been computed in Ref. [10] and further extended with full CKM dependence, including the dominant NLO electro-weak corrections, in Ref. [11], in the massless-charm limit.
The first dedicated NLO QCD + parton shower (PS) generator for \(Wc\) production was presented in Ref. [12], within the PowHel event generator, based on the POWHEG method [13], with the \(W\) boson decaying leptonically, including charm-quark mass effects in the hard-scattering matrix elements and a non-diagonal CKM matrix.1
Footnote 1: Predictions for \(Wc\) hadro-production, at NLO QCD accuracy, matched to PS, according to the MC@NLO matching framework [14], can also be obtained using the MadGraph5_aMC@NLO framework [15]. In addition, one can also use the automatic interface between the POWHEG BOX and MadGraph5_aMC@NLO [16] to build the code, according to the POWHEG matching framework.
In this paper we describe the implementation of a NLO + parton-shower generator for the hadro-production of a massive charm quark accompanied by a leptonically-decaying \(W\) boson in the POWHEG BOX RES framework [17; 18; 19]. We include exactly off-shell and spin-correlation effects, as well as off-diagonal CKM contributions.
The POWHEG BOX RES framework is able to deal with radiation off resonances. Although for the case at hand of QCD corrections to a leptonically decaying \(W\) boson, the machinery of radiation off resonances is not necessary, since no QCD radiation can be emitted from the leptons, we have implemented \(Wc\) production in this framework for several reasons:
1. Better handling of final-state radiation from heavy quarks, as radiation collinear to heavy partons is dealt with via an appropriate importance sampling.2 Footnote 2: The first treatment of radiation from heavy partons in the POWHEG BOX appeared in Ref. [20].
2. The future inclusion of NLO electro-weak corrections is facilitated, since the process is already in the right framework to deal with photon radiation, not only from quarks, but also from the charged lepton in the \(W\) decay, where the virtuality of the resonance has to be preserved during the generation of the hardest radiation performed by the POWHEG BOX RES.
3. All the processes for which the NNLO+PS accuracy has been reached [21; 22; 23; 24] using the MiNNLO\({}_{\rm PS}\) formalism [25; 26] are implemented in the POWHEG BOX RES framework. It would then be easier to implement the NNLO QCD corrections to \(Wc\) production in this framework too.
The paper is organized as follows. In Sect. 2 we give some technical details about the process we are studying at LO and NLO, we describe the change of scheme of our calculation and the default choice of the renormalization and factorization scales that we have used. In addition we give the numerical values of the input parameters we have used in our simulation, and we present the validation of the fixed-order NLO results. In Sect. 3 we give details of the matching of the NLO amplitudes with the POWHEG method, as implemented in the POWHEG BOX RES framework. We discuss the effects of the extra degrees of freedom offered by this matching and their numerical impact. In Sect. 4 we present a few interesting kinematic distributions computed with events obtained by completing the first radiation emission performed by the POWHEG BOX RES code, with three showering models,
implemented in Herwig7.2 and Pythia8.3, and we discuss the differences. We also present results with full hadronization in place and in the presence of underlying events, and we compare them with the experimental data collected by the CMS Collaboration at 13 TeV.
The POWHEG BOX RES framework, together with the Wc generator, can be downloaded at [http://powhegbox.mib.infn.it](http://powhegbox.mib.infn.it).
## 2 Contributing processes and technical details
In this section we discuss the implementation of the processes \(pp\to\ell^{-}\bar{\nu}_{\ell}c\) and \(pp\to\ell^{+}\nu_{\ell}\bar{c}\) at NLO QCD accuracy.
We consider the \(c\) quark to be massive, with all the other lighter quarks and anti-quarks treated as massless. At LO, the partonic subprocesses that contribute to \(Wc\) production are
\[\begin{array}{l}g\,d\to\ell^{-}\,\bar{\nu}_{\ell}\,c\\ g\,s\to\ell^{-}\,\bar{\nu}_{\ell}\,c\\ g\,\bar{d}\to\ell^{+}\,\nu_{\ell}\,\bar{c}\\ g\,\bar{s}\to\ell^{+}\,\nu_{\ell}\,\bar{c}\end{array} \tag{1}\]
where the channels involving a down (anti-)quark are Cabibbo-suppressed with respect to those containing the strange (anti-)quark. We are neglecting contributions arising from a bottom quark in the initial state: these contributions are suppressed both by the bottom parton distribution function (PDF) and by the smallness of the CKM matrix element \(|V_{cb}|\).
The tree-level Feynman diagrams contributing to the process \(g\,s(d)\to\ell^{-}\,\bar{\nu}_{\ell}\,c\) are represented in Fig. 1: this process can proceed either via an \(s\)-channel exchange of a \(s(d)\) quark (left panel), or via a \(t\)-channel exchange of a \(c\) quark (right panel). The mass of the charm quark ensures that the latter contribution is finite even when its transverse momentum vanishes.
Figure 1: Feynman diagrams contributing to the partonic process \(g\,s(d)\to\ell^{-}\,\bar{\nu}_{\ell}\,c\) at LO, in the \(s\) channel (a), on the left, and \(t\) channel (b), on the right. The heavy charm quark is represented as a double line.
In the following we discuss only \(W^{-}\) production, since a similar discussion is also valid for \(W^{+}\) production, after charge conjugation of the involved subprocesses.
In addition to the QCD virtual corrections to the subprocesses in Eq. (1), not depicted here, the real corrections also contribute at next-to-leading order. Examples of Feynman diagrams contributing to the real corrections for the process \(pp\to\ell^{-}\bar{\nu}_{\ell}c\,j\) are shown in Fig. 2. We can separate the real-correction contributions into two classes:
1. The corrections of the type \(pp\to\ell^{-}\bar{\nu}_{\ell}\,c\,j\), with \(j=g,u(\bar{u}),d(\bar{d}),s(\bar{s})\), which become singular when the extra parton is unresolved \[\begin{split}& q(\bar{q})\,s(d)\to\ell^{-}\,\bar{\nu}_{\ell}\,c\,q( \bar{q})\\ & g\,s(d)\to\ell^{-}\,\bar{\nu}_{\ell}\,c\,g\\ & g\,g\to\ell^{-}\,\bar{\nu}_{\ell}\,c\,\bar{s}(\bar{d})\,,\end{split}\] (2)
Figure 2: Examples of Feynman diagrams contributing to the real corrections for the process \(pp\to\ell^{-}\bar{\nu}_{\ell}c\,j\), where \(j=g,u,d,s,\bar{u},\bar{d},\bar{s}\). The last Feynman diagram (d) is an example of “regular” contribution, i.e. no singularities are associated with this process.
where \(q=u,d,s\). Examples of these contributions are depicted in Fig. 2, diagrams (a), (b) and (c). We will denote these terms by \(R\) in the following.
2. The corrections of the type \[q\,\bar{q}\to\ell^{-}\,\bar{\nu}_{\ell}\,c\,\bar{q}^{\prime},\] (3) where \(q=u,d,s\) and \(q^{\prime}=s,d\), which do not have any singularities associated with the extra light-parton emission: these are called "regular" in the POWHEG jargon and will be indicated with \(R_{\mbox{\tiny reg}}\). Again we do not consider contributions with a final-state \(b\) quark. An example of this type of contribution is depicted in Fig. 2 (d).
Notice that the square of the diagrams contributing to the regular processes (Fig. 2 (d)), as well as that of similar diagrams with two gluons in the initial state (which interfere with the one in (c)), suffers from another potential source of divergence, due to the presence of the charm anti-quark in the \(s\) channel. In fact, when the invariant mass of the system comprising the \(\bar{s}(\bar{d})\) quark and the two leptons is equal to the charm mass \(m_{c}\), these diagrams are divergent, due to the fact that the \(c\)-quark propagator goes on shell. In order to avoid this singularity, we can impose a technical cut on the invariant mass of the off-shell \(W\) boson
\[m_{\ell\nu}^{2}\equiv(p_{\ell}+p_{\nu})^{2}>m_{c}^{2}\,, \tag{4}\]
in the theoretical simulation. In order to make closer contact with experimentally accessible quantities, a more realistic cut is in general imposed on the transverse mass of the \(W\) system, \(m_{{}_{TW}}\), defined as
\[m_{{}_{TW}}^{2}=2\,p_{{}_{\perp\ell}}\,p_{{}_{\perp\nu}}\left(1-\cos\Delta \varphi\right)\,, \tag{5}\]
where \(\Delta\varphi\) is the azimuthal separation between the transverse momenta of the charged lepton and the missing energy of the neutrino.
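As an illustration, the following minimal Python sketch evaluates Eq. (5) from the transverse momenta and azimuthal angles of the charged lepton and of the neutrino (the function name and its arguments are ours, not part of the code we present):

```python
import math

def w_transverse_mass(pt_lep, phi_lep, pt_nu, phi_nu):
    """Transverse mass of Eq. (5): m_TW^2 = 2 pT(l) pT(nu) (1 - cos(dphi))."""
    dphi = phi_lep - phi_nu
    # fold the azimuthal separation into [-pi, pi]
    dphi = (dphi + math.pi) % (2.0 * math.pi) - math.pi
    return math.sqrt(2.0 * pt_lep * pt_nu * (1.0 - math.cos(dphi)))

# back-to-back lepton and neutrino with pT = 40 GeV give m_TW ~ 80 GeV
print(w_transverse_mass(40.0, 0.0, 40.0, math.pi))
```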
We neglect contributions with two charmed quarks in the final state, such as \(\bar{u}d\to\ell^{-}\,\bar{\nu}_{\ell}\,c\bar{c}\), coming from the splitting \(g\to c\bar{c}\), since these contributions are subtracted from experimental data in studies aimed at extracting information on the strange-quark parton distribution function, as illustrated in Sect. 4.2.
The Born and virtual matrix elements, as well as the Born colour- and spin-correlated amplitudes, necessary for the POWHEG BOX RES to automatically implement the POWHEG formalism, have been computed analytically.3 The one-loop amplitudes have been validated against the Gosam code [27; 28]. The matrix elements for the real contributions were instead generated using Madgraph4 [29] and its interface with the POWHEG BOX [30].
Footnote 3: In fact, the analytic calculation of these amplitudes was part of the Master thesis of one of the authors.
### The decoupling and \(\overline{\mbox{MS}}\) schemes
When performing a fixed-order calculation with massive quarks, one can define at least two consistent renormalization schemes that describe the same physics: the usual \(\overline{\mbox{MS}}\) scheme, where all flavours are treated on equal footing, and a mixed scheme [31], called decoupling scheme [32], in which the \(n_{\rm lf}\) light flavours are subtracted in the \(\overline{\mbox{MS}}\) scheme, while the heavy-flavour loop is subtracted at zero momentum. In this scheme, the heavy flavour
decouples at low energies. This is the scheme we have used in the renormalization of our analytic calculation, with \(n_{\rm lf}=3\).
If the decoupling scheme is chosen, then the strong coupling constant \(\alpha_{\rm S}\) runs with three light flavours, and the parton distribution functions should not include the charm quark in the evolution.
To make contact with other results expressed in terms of the \(\overline{\rm MS}\) strong coupling constant, running with five light flavours, and with PDFs with five flavours, we prefer to change our renormalization scheme and to switch to the \(\overline{\rm MS}\) one. The procedure for such a switch is well known, and was discussed in Ref. [33]. This procedure, applied to the \(b\) quark, is also illustrated in Appendix A of Ref. [34].
For the case under study, since the LO process contains only one power of \(\alpha_{\rm S}\), if we want to express our calculation in terms of a coupling constant defined in the \(\overline{\rm MS}\) scheme with five active flavours, we need to add the following contribution to the cross section
\[\Delta V_{\alpha_{\rm S}}(\mu_{\rm R};m_{b},m_{c})=-\frac{1}{3}T_{F}\frac{ \alpha_{\rm S}}{\pi}\left(\log\frac{\mu_{\rm R}^{2}}{m_{c}^{2}}+\log\frac{\mu _{\rm R}^{2}}{m_{b}^{2}}\right)B\,, \tag{6}\]
where \(B\) is the Born cross section, \(m_{b}\) the bottom mass and \(\mu_{\rm R}\) is the renormalization scale. A corresponding modification has to be applied if we want to employ PDFs with five active flavours. The gluon PDF, which appears in the LO cross section, induces a correction
\[\Delta V_{g}(\mu_{\rm F};m_{b},m_{c})=+\frac{1}{3}T_{F}\frac{\alpha_{\rm S}}{ \pi}\left(\log\frac{\mu_{\rm F}^{2}}{m_{c}^{2}}+\log\frac{\mu_{\rm F}^{2}}{m_ {b}^{2}}\right)B\,, \tag{7}\]
where \(\mu_{\rm F}\) is the factorization scale. The quark PDF receives only corrections that start at order \(\alpha_{\rm S}^{2}\), which can thus be neglected. By adding Eqs. (6) and (7), we get the conversion factor between the two schemes
\[\Delta V(\mu_{\rm F},\mu_{\rm R})=\Delta V_{\alpha_{\rm S}}+\Delta V_{g}= \frac{2}{3}\,T_{F}\frac{\alpha_{\rm S}}{\pi}\,B\log\frac{\mu_{\rm F}^{2}}{\mu _{\rm R}^{2}}\,, \tag{8}\]
which turns out to be independent of the bottom and charm masses, and different from zero only if the renormalization and factorization scales are different.
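As a check of the bookkeeping, a one-line sketch of Eq. (8) in Python (the function and its arguments are ours):

```python
import math

TF = 0.5  # SU(3) colour normalisation T_F

def delta_v(alphas, born, muf, mur):
    """Decoupling-to-MSbar conversion term of Eq. (8);
    it vanishes when muf == mur, as used in Sect. 2.4."""
    return (2.0 / 3.0) * TF * (alphas / math.pi) * born * math.log(muf**2 / mur**2)
```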
### Renormalization and factorization scale settings
In the POWHEG BOX there is the option of using different renormalization and factorization scales when computing the real contribution (and/or the subtraction terms, i.e. its soft and collinear limits) and when computing the Born and virtual ones. The only requirement is that the scale used in the evaluation of the real contributions (and possibly of the subtraction terms) must tend to the scale used in the Born amplitude when the emitted parton becomes unresolved.
The default central scale used in our simulation in the evaluation of the Born, virtual, real and subtraction terms is
\[\mu=\frac{H_{ T}}{2} \tag{9}\]
with
\[H_{ T}=\sqrt{|\vec{p}_{\perp c}|^{2}+m_{c}^{2}}+\sqrt{|\vec{p}_{ \perp\ell}+\vec{p}_{\perp\nu}|^{2}+(p_{\ell}+p_{\nu})^{2}}\,, \tag{10}\]
evaluated with the kinematics of the underlying-Born configuration. The regular real term \(R_{\rm reg}\) has no underlying Born configuration, and the scale \(H_{ T}\) is computed using the kinematics of the real emission contribution, adding also the transverse momentum of the massless emitted parton.
\[H_{ T}=\sqrt{|\vec{p}_{\perp c}|^{2}+m_{c}^{2}}+\sqrt{|\vec{p}_{ \perp\ell}+\vec{p}_{\perp\nu}|^{2}+(p_{\ell}+p_{\nu})^{2}}+|\vec{p}_{\perp j}|. \tag{11}\]
Other options allow the use of the scale in Eq. (11) also in the real term and/or in the subtraction terms,4 or a fixed scale,5 such as \(\mu=m_{ W}\).
Footnote 4: This option can be chosen by adding the flags btlscalerel 1 and/or btlscalect 1 in the input card.
Footnote 5: Set runningscales 0 in the input card.
The renormalization and factorization scales, \(\mu_{\rm R}\) and \(\mu_{\rm F}\), are defined by multiplying \(\mu\) by the factors \(\xi_{ R}\) and \(\xi_{ F}\) respectively. The uncertainty due to missing higher-order corrections is estimated by considering the 7-point scale variations
\[(\xi_{ R},\xi_{ F})=(1,1),\,(1,2),\,(1,0.5),\,(2,1),\,(0.5,1),\,(2,2),\,(0.5,0.5). \tag{12}\]
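A minimal sketch of how the central scale of Eqs. (9)-(11) and the 7-point band of Eq. (12) can be assembled; all names are ours, `sigma` is a hypothetical user-supplied callable, and `pt_lnu` denotes the magnitude of the combined lepton-pair transverse momentum:

```python
import math

def central_scale(pt_c, m_c, pt_lnu, m_lnu, pt_j=None):
    """mu = H_T/2: charm transverse mass plus dilepton transverse mass,
    with the light-jet pT added for the regular real term, Eq. (11)."""
    ht = math.hypot(pt_c, m_c) + math.hypot(pt_lnu, m_lnu)
    if pt_j is not None:
        ht += pt_j
    return 0.5 * ht

# (xi_R, xi_F) factors multiplying mu, Eq. (12); the first pair is central
SEVEN_POINT = [(1, 1), (1, 2), (1, 0.5), (2, 1), (0.5, 1), (2, 2), (0.5, 0.5)]

def scale_band(sigma, mu):
    """Central value and envelope of sigma(mu_R, mu_F) over the 7 points."""
    values = [sigma(xr * mu, xf * mu) for xr, xf in SEVEN_POINT]
    return values[0], min(values), max(values)
```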
We employ the NNPDF3.1_nlo_pdfas [35, 36] PDF set, which is also used to determine the running of the strong coupling.6
Footnote 6: This option can be selected by adding the flag alphas_from_pdf 1 in the input card. Otherwise the running of \(\alpha_{\rm S}\) is computed at two loops internally by the POWHEG BOX RES.
### Numerical inputs
We have simulated \(Wc\) production in \(pp\) collisions at \(\sqrt{s}=13\) TeV. The input parameters we have used are
\[m_{ W}=80.385\ {\rm GeV} \Gamma_{ W}=2.085\ {\rm GeV}\] \[m_{ Z}=91.1876\ {\rm GeV} \Gamma_{ Z}=2.4952\ {\rm GeV} \tag{13}\] \[G_{\mu}=1.16639\times 10^{-5}\ {\rm GeV}^{-2} m_{c}=1.5\ {\rm GeV}.\]
The electromagnetic coupling is obtained in the \(G_{\mu}\) scheme
\[\alpha_{\rm em}=\frac{\sqrt{2}}{\pi}G_{\mu}m_{ W}^{2}\left(1- \frac{m_{ W}^{2}}{m_{ Z}^{2}}\right), \tag{14}\]
and the values of relevant CKM matrix elements are
\[|V_{cd}|=0.22438\,,\qquad|V_{cs}|=0.97356\,. \tag{15}\]
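For reference, the numerical value of \(\alpha_{\rm em}\) implied by Eq. (14) and the inputs above can be reproduced with a few lines of Python (variable names are ours):

```python
import math

GF = 1.16639e-5   # Fermi constant, GeV^-2
MW = 80.385       # W mass, GeV
MZ = 91.1876      # Z mass, GeV

# electromagnetic coupling in the G_mu scheme, Eq. (14)
alpha_em = math.sqrt(2.0) / math.pi * GF * MW**2 * (1.0 - MW**2 / MZ**2)
print(alpha_em)   # ~ 7.56e-3, i.e. roughly 1/132
```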
In our calculation we have neglected the mass of the leptons. Only at the stage of event generation is a leptonic mass assigned to the charged lepton, once the full kinematics of the event is generated.7 The assignment of the mass is done through a reshuffling procedure that preserves the total energy and momentum, and the virtuality of the resonances. The values of the lepton masses we have used are given by
Footnote 7: The user can also choose in which leptonic family/families the \(W\) boson decays, by selecting the value of the flag vdecaymode in the input card.
\[m_{e}=0.511\times 10^{-3}\ {\rm GeV},\qquad m_{\mu}=0.1057\ {\rm GeV},\qquad m_{ \tau}=1.777\ {\rm GeV}. \tag{16}\]
Furthermore, following the discussion that precedes Eq. (4), we always apply a loose technical cut, at the generation stage, on the \(W\)-boson virtuality
\[m_{\ell\nu}>2\ {\rm GeV}\,. \tag{17}\]
### Fixed-order validation
In this section we compare our fixed-order results with those obtained with the MCFM code [37; 38]. In order to perform the comparison we employ a fixed renormalization and factorization scale set to
\[\mu_{\rm F}=\mu_{\rm R}=m_{ W}\,. \tag{18}\]
As we are setting \(\mu_{\rm F}=\mu_{\rm R}\), the term \(\Delta\)V defined in Eq. (8) vanishes and there is no correction factor. We define the fiducial cross section by imposing the following cuts on the transverse momentum and rapidity of the charged lepton, as well as on the transverse momentum of the charm quark
\[|\vec{p}_{\perp\ell}|>26\ {\rm GeV},\qquad|y_{\ell}|<2.4\,,\qquad|\vec{p}_{ \perp c}|>5\ {\rm GeV}. \tag{19}\]
These cuts correspond to the ones employed by the CMS Collaboration to define the inclusive cross section in the analysis described in Ref. [6] at \(\sqrt{s}=13\) TeV.
As discussed in Sect. 2, around Eqs. (4) and (5), in order to avoid the charm-propagator singularity in diagrams like the one in Fig. 2 (d), we also implement a cut on the transverse mass of the \(W\) system
\[m_{ TW}>20\ {\rm GeV}. \tag{20}\]
Figure 3: Differential distribution for the transverse mass of the reconstructed \(W^{-}\) boson for the process \(pp\to e^{-}\bar{\nu}_{e}c\) (left) and the transverse momentum of the heavy anti-charm quark for the process \(pp\to e^{+}\nu_{e}\bar{c}\) (right) at NLO accuracy. The input parameters are discussed in Sect. 2.3. The ratio between predictions obtained with MCFM (blue) and the POWHEG BOX RES (red) is also shown. The statistical error bars from the Monte Carlo integrator are also shown.
In Table 1 we show the integrated cross section at LO and NLO obtained with our code and
with MCFM. In all cases a sub-permille agreement is found. Good agreement is found also at the differential level, as can be seen in Fig. 3, where we show the transverse mass of the reconstructed \(W\) boson (left panel) and the transverse momentum of the charmed quark (right panel).
## 3 Details of the POWHEG matching
In this section we briefly recall the expression for the differential cross section generated with the POWHEG method [13] for the process under consideration, focusing for simplicity only on the \(pp\to\ell\bar{\nu}_{\ell}c\) signature, in order to discuss further sources of theoretical uncertainty intrinsic to the POWHEG method.
The NLO cross section reads
\[\mathrm{d}\sigma_{\mathrm{NLO}}=B(\Phi_{3};\mu_{\mathrm{R}},\mu_ {\mathrm{F}})\,\mathrm{d}\Phi_{3}+\frac{\alpha_{\mathrm{S}}(\mu_{\mathrm{R}} )}{2\pi} \left[V(\Phi_{3};\mu_{\mathrm{R}},\mu_{\mathrm{F}})\,\mathrm{d}\Phi_{3}+R( \Phi_{3+1};\mu_{\mathrm{R}},\mu_{\mathrm{F}})\,\mathrm{d}\Phi_{3+1}\right.\] \[+\left.R_{\mathrm{reg}}(\Phi_{3+1};\mu_{\mathrm{R}},\mu_{\mathrm{ F}})\,\mathrm{d}\Phi_{3+1}\right], \tag{10}\]
where \(B\), \(V\), \(R\) and \(R_{\mathrm{reg}}\) are the Born, virtual, real and regular real matrix elements, multiplied by the appropriate PDF and flux factors. As stated in Sect. 2, we indicate with \(R_{\mathrm{reg}}\) the terms arising from the flavour configurations of Eq. (3), which do not contain any QCD singularity, while the processes in Eq. (2) are designated with \(R\). The Born and virtual cross sections are evaluated using a three-body phase space \(\Phi_{3}\), while, in the real contributions, an additional light parton is present. The matrix elements depend on the \(\mu_{\mathrm{R}}\) scale, as the LO starts with one power of the strong coupling constant \(\alpha_{\mathrm{S}}\), while the \(\mu_{\mathrm{F}}\) dependence arises from the incoming-parton PDFs. To build the POWHEG cross section, the real term \(R\) is split into a singular \(R_{s}\) and a finite contribution \(R_{f}\), and the \(\bar{B}\) term is defined as
\[\bar{B}(\Phi_{3};\mu_{\mathrm{F}},\mu_{\mathrm{R}})=B(\Phi_{3};\mu_{\mathrm{R }},\mu_{\mathrm{F}})+\frac{\alpha_{\mathrm{S}}(\mu_{\mathrm{R}})}{2\pi}\left[ V(\Phi_{3};\mu_{\mathrm{R}},\mu_{\mathrm{F}})+\int d\Phi_{1}\,R_{s}(\Phi_{3+1}; \mu_{\mathrm{R}},\mu_{\mathrm{F}})\right], \tag{11}\]
| \(\sigma\) [fb] | \(e^{-}\bar{\nu}_{e}c\) @ LO | \(e^{+}\nu_{e}\bar{c}\) @ LO | \(e^{-}\bar{\nu}_{e}c\) @ NLO | \(e^{+}\nu_{e}\bar{c}\) @ NLO |
| --- | --- | --- | --- | --- |
| PWG RES | 352.7(1) | 341.1(1) | 497.2(2) | 480.8(2) |
| MCFM 10 | 352.9(2) | 341.2(2) | 497.0(2) | 481.0(3) |
| Ratio | 1.0005(5) | 1.0002(5) | 0.9996(7) | 1.0004(7) |

Table 1: Fiducial cross sections obtained with MCFM and our generator (PWG RES) for the processes \(pp\to e^{-}\bar{\nu}_{e}c\) and \(e^{+}\nu_{e}\bar{c}\) at LO (left columns) and NLO (right columns) at \(\sqrt{s}=13\) TeV within the fiducial cuts of Eqs. (19) and (20), with \(\mu_{\mathrm{F}}=\mu_{\mathrm{R}}=m_{ W}\). The input parameters are discussed in Sect. 2.3. In the last row we show the ratio between the MCFM and the POWHEG BOX RES predictions.
where \(\Phi_{1}\) is the radiation phase space, defined in terms of three radiation variables. The probability of not having any resolved emission harder than a scale \(k_{\rm T}^{\prime}\) is given by
\[\Delta_{s}(k_{\rm T}^{\prime};\Phi_{3})=\exp\left[-\int{\rm d}\Phi_{1}\,\theta \big{[}k_{\rm T}(\Phi_{1})-k_{\rm T}^{\prime}\big{]}\,\frac{\alpha_{\rm S}(k_{ \rm T})}{2\pi}\,\frac{R_{s}(\Phi_{3+1};k_{\rm T},k_{\rm T})}{B(\Phi_{3};k_{\rm T },k_{\rm T})}\right], \tag{10}\]
where the factorization and the renormalization scales are evaluated at the scale \(k_{\rm T}\), which corresponds to the transverse momentum of the light parton, if it is collinear to an initial-state parton, while, for gluon emissions from the final-state charm quark, it is given by
\[k_{\rm T}^{2}=2\frac{E_{g}}{E_{c}}\left(p_{g}\cdot p_{c}\right), \tag{11}\]
where \(p_{g}\) and \(p_{c}\) are the gluon and charm four-momenta and \(E_{g}\) and \(E_{c}\) their energies in the partonic center-of-mass frame.
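A minimal sketch of this hardness definition, with four-momenta represented as plain \((E,p_{x},p_{y},p_{z})\) tuples in the partonic centre-of-mass frame (the function name is ours):

```python
def kt2_fsr(pg, pc):
    """Hardness of a gluon emission off the final-state charm:
    kT^2 = 2 (E_g / E_c) (p_g . p_c), with a Minkowski dot product."""
    dot = pg[0] * pc[0] - pg[1] * pc[1] - pg[2] * pc[2] - pg[3] * pc[3]
    return 2.0 * (pg[0] / pc[0]) * dot
```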
The POWHEG cross section finally reads
\[{\rm d}\sigma_{\rm PWG}= \bar{B}(\Phi_{3};\mu_{\rm F},\mu_{\rm R})\,{\rm d}\Phi_{3}\left[ \frac{\alpha_{\rm S}(k_{\rm T})}{2\pi}\frac{R_{s}(\Phi_{3+1};k_{\rm T},k_{\rm T })}{B(\Phi_{3};k_{\rm T},k_{\rm T})}\,\Delta_{s}(k_{\rm T};\Phi_{3})\,{\rm d} \Phi_{1}+\Delta_{s}(k_{\rm T}^{\rm min};\Phi_{3})\right]\] \[+\frac{\alpha_{\rm S}(\mu_{\rm R})}{2\pi}R_{f}(\Phi_{3+1};\mu_{ \rm R},\mu_{\rm F})\,{\rm d}\Phi_{3+1}+\frac{\alpha_{\rm S}(\mu_{\rm R})}{2 \pi}R_{\rm reg}(\Phi_{3+1};\mu_{\rm R},\mu_{\rm F})\,{\rm d}\Phi_{3+1}\,, \tag{12}\]
where \(k_{\rm T}^{\rm min}\) is an infrared cutoff.
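In practice, the hardest-emission scale distributed according to the square bracket of the POWHEG cross section above is generated with the standard Sudakov veto algorithm. The sketch below illustrates the idea under simplifying assumptions: `ratio(kt)` stands for a hypothetical emission density \(\alpha_{\rm S}R_{s}/(2\pi B)\) already integrated over the remaining radiation variables, `ratio_bound` is a simple overestimate of it, and `invert_bound_sudakov` solves the overestimated Sudakov for the next trial scale; none of these names belong to the actual POWHEG BOX RES code.

```python
import random

def hardest_emission(kt_max, kt_min, ratio, ratio_bound, invert_bound_sudakov):
    """Sample the hardest-emission kT via the veto algorithm.
    Returns None if no emission above the infrared cutoff is generated,
    in which case the event stays Born-like with weight Delta(kt_min)."""
    kt = kt_max
    while kt > kt_min:
        # next trial scale from the invertible, overestimated Sudakov
        kt = invert_bound_sudakov(kt, random.random())
        if kt <= kt_min:
            return None
        # accept with probability (true density) / (overestimate)
        if random.random() < ratio(kt) / ratio_bound(kt):
            return kt
    return None
```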
There are several degrees of freedom in the implementation of Eq. (12). One of them is the choice of the renormalization and factorization scales \(\mu_{\rm R}\) and \(\mu_{\rm F}\) in the \(R_{f}\) and \(R_{\rm reg}\) terms, as well as in the term \(R_{s}\) appearing in \(\bar{B}\) in Eq. (10), as discussed in Sect. 2.2. Another one is the choice of how to separate \(R\) into a singular part \(R_{s}\) and a finite one \(R_{f}\). Without loss of generality, one can separate the two contributions by introducing what is called a "damping function", \(f(k_{\rm T})\), that satisfies
\[0\leq f(k_{\rm T})\leq 1\,,\qquad\lim_{k_{\rm T}\to 0}f(k_{\rm T})=1 \tag{13}\]
and define
\[R_{s}=f(k_{\rm T})\,R,\qquad R_{f}=[1-f(k_{\rm T})]\,R\,. \tag{14}\]
Figure 4: Comparison of three damping functions used to separate the real contribution into a singular and a finite part, as detailed in Sect. 3.
For the process we are studying, we have considered two different forms for \(f\)
\[f_{1}(x) = \frac{1}{1+x^{2}} \tag{10}\] \[f_{2}(x;h) = \begin{cases}1&\text{for $x\leq 1-2h$}\\ 1-\frac{(1-2h-x)^{2}}{2h^{2}}&\text{for $1-2h<x\leq 1-h$}\\ \frac{(1-x)^{2}}{2h^{2}}&\text{for $1-h<x\leq 1$}\\ 0&\text{for $x>1$},\end{cases} \tag{11}\]
with
\[x=\frac{1}{\mu}\sqrt{k_{\text{\tiny T}}^{2}-m_{c}^{2}}\ \theta(k_{\text{ \tiny T}}-m_{c})\,, \tag{12}\]
where \(\mu\) is defined in Eq. (10), and \(0\leq h\leq 1/2\) is a free parameter. The value of \(x\) is set to \(0\) if \(k_{\text{\tiny T}}<m_{c}\), so that no finite contribution is present with \(k_{\text{\tiny T}}\) below the charm mass.8 The functional form \(f_{2}\) in Eq. (11), with \(h=0.3\), is the default damping function in the Herwig7.2 event generator [39], while \(f_{1}\) in Eq. (10) corresponds to the default option of the POWHEG BOX RES. In Fig. 4 we provide a graphical representation of \(f_{1}\), \(f_{2}(h=0.1)\) and \(f_{2}(h=0)\).9
Footnote 8: In the POWHEG BOX RES, no damping factor is applied when the emitter is massive.
Footnote 9: Furthermore, the code has by default the Bornzerodamp flag activated. This provides a way to deal with kinematic configurations where the real contribution is too large with respect to its soft and collinear limits, signalling that the kinematics of the underlying Born gives rise to a Born amplitude that is particularly small, i.e. close to zero. See Ref. [18] for more details.
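A direct transcription of the two profiles above, useful to reproduce Fig. 4 (function names are ours; for \(h=0\) the second profile degenerates to a step function):

```python
import math

def f1(x):
    """Default POWHEG BOX RES damping function, 1 / (1 + x^2)."""
    return 1.0 / (1.0 + x * x)

def f2(x, h):
    """Piecewise Herwig7.2-style profile with free parameter 0 <= h <= 1/2."""
    if h == 0.0:
        return 1.0 if x <= 1.0 else 0.0
    if x <= 1.0 - 2.0 * h:
        return 1.0
    if x <= 1.0 - h:
        return 1.0 - (1.0 - 2.0 * h - x) ** 2 / (2.0 * h * h)
    if x <= 1.0:
        return (1.0 - x) ** 2 / (2.0 * h * h)
    return 0.0

def damping_argument(kt, mc, mu):
    """Dimensionless argument x: set to zero for kT below the charm mass."""
    return math.sqrt(kt * kt - mc * mc) / mu if kt > mc else 0.0
```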
In the following we investigate the dependence on these choices of several observables that are inclusive with respect to the QCD radiation, such as the leptonic observables and the rapidity and transverse momentum of the jet containing the charm quark.
We reconstruct jets using the Fastjet [40] implementation of the anti-\(k_{\text{\tiny T}}\) algorithm [41] with \(R=0.5\). To this aim, we have considered the setups listed in Table 2 to compute the POWHEG cross section given in Eq. (9). The central value of the renormalization and factorization scales is set to \(H_{\text{\tiny T}}/2\) (see Eq. (10)).
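For reference, the distance measure that defines the anti-\(k_{\text{\tiny T}}\) algorithm of Ref. [41] is simple enough to spell out; the sketch below is not the Fastjet implementation, only the measure it clusters with (function names are ours):

```python
import math

def antikt_dij(pt_i, y_i, phi_i, pt_j, y_j, phi_j, R=0.5):
    """Inter-particle distance d_ij = min(pT_i^-2, pT_j^-2) * dR^2 / R^2."""
    dphi = (phi_i - phi_j + math.pi) % (2.0 * math.pi) - math.pi
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** -2, pt_j ** -2) * dr2 / (R * R)

def antikt_diB(pt_i):
    """Particle-beam distance d_iB = pT_i^-2."""
    return pt_i ** -2
```

At each clustering step the smallest distance is found: if it is a \(d_{ij}\), particles \(i\) and \(j\) are merged; if it is a \(d_{iB}\), particle \(i\) is declared a jet.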
| acronym | scales in \(R\) | damping function \(f\) |
| --- | --- | --- |
| B\({}_{0}\) | underlying-Born kinematics | 1 |
| B\({}_{1}\) | underlying-Born kinematics | \(f_{1}\) |
| B\({}_{2}\) | underlying-Born kinematics | \(f_{2}(h=0.1)\) |
| R\({}_{0}\) | real kinematics | 1 |

Table 2: Setups for the implementation of the POWHEG cross section given in Eq. (9). The second column denotes the kinematic configuration used to evaluate the renormalization and factorization scales in the computation of the real and subtraction terms, while the last column shows which profile function is used to evaluate Eq. (10). The functional forms of \(f_{1}\) and \(f_{2}\) are given in Eqs. (10) and (11) respectively.
In the following figures we will compare some kinematic distributions computed with six different setups: two fixed-order NLO results, one computed using the underlying-Born kinematics (\(\rm NLO_{B}\)) for the renormalization and factorization scales in the real contributions, and the other one using the full real-event kinematics, indicated with \(\rm NLO_{R}\). The other four distributions are computed by analysing the POWHEG-generated events at the "Les Houches" (LHE) level, i.e. events with the first hardest radiation generated according to Eq. (10), adopting the setup scales and damping functions of Table 2. For inclusive observables, the LHE results should agree with the fixed-order NLO ones up to NNLO terms.
In Fig. 5 we plot the transverse mass of the \(W\) boson (left panel) and the rapidity of the charmed jet (right panel). The \(\rm NLO_{B}\) results (blue line) are roughly 3% higher than the \(\rm NLO_{R}\) ones (black line), although always contained in the grey scale-uncertainty band of \(\rm NLO_{R}\), computed by performing the 7-point scale variations in Eq. (12). Furthermore, all the distributions at the LHE level agree with the corresponding fixed-order calculation, within the scale-variation band.
The transverse momentum of the \(W\) boson, \(p_{\perp W}\), and of the charmed jet, \(p_{\perp j_{c}}\), are illustrated on the left and right panels of Fig. 6, respectively. Comparing the two NLO predictions computed with different scales, one can see that the use of the underlying-Born scales favours a harder spectrum, although the result always lies within the scale-variation band of the \(\rm NLO_{R}\) one, except for the very last bin. This is due to the fact that, for large \(c\)-jet transverse momentum, it is quite likely to produce a \(c\to cg\) splitting that enhances the value of \(H_{ T}\). As such, the underlying-Born kinematics yields a lower scale and hence a
Figure 5: The transverse mass distribution of the reconstructed \(W\) boson (left) and the rapidity of the charmed jet (right) at NLO (\(\rm NLO_{B}\) and \(\rm NLO_{R}\)) and at the Les Houches event level, for the choices of the scales and damping functions listed in Table 2. The grey bands are obtained performing the 7-point scale variations in Eq. (12) of the \(\rm NLO_{R}\) results. The statistical errors from the integration procedure are also shown.
larger value of \(\alpha_{\rm S}\), which enhances the cross section. The same behaviour can be seen also at the LHE level.
In the experimental analyses, such as the one in Ref. [6], an important role is played by the measurement of the transverse momentum and rapidity of the charged lepton. In Fig. 7 we then plot these quantities, after applying the cuts of Eq. (19), which are the same as those used by the CMS Collaboration, as well as a cut on \(m_{TW}\), defined in Eq. (20). As far as the fixed-order NLO results are concerned, the NLO\({}_{\rm B}\) curves lie a few percent above the NLO\({}_{\rm R}\) ones, both for the transverse momentum and the rapidity. This is again due to
Figure 6: Same as Fig. 5 but for the transverse momentum of the reconstructed \(W\) boson (left) and of the charmed jet (right).
Figure 7: Same as Fig. 5 but for the transverse momentum (left) and rapidity (right) of the charged lepton, with the cuts in Eq. (19) and (20) in place.
the higher value of \(\alpha_{\rm S}\) when the renormalization scale is evaluated with the underlying-Born kinematics. Differences up to 10% can also be seen among the LHE curves, but, in general, all these curves lie within the grey uncertainty band of the NLO\({}_{\rm R}\) result, so that they are consistent with each other.
We thus conclude that no significant differences are found among the several damping functions, particularly when realistic analysis cuts are applied, and that the main theoretical uncertainties arise from scale variations.10
Footnote 10: In this analysis we are neglecting the dependence on the PDF set. This topic is addressed in detail by the authors of Refs. [11; 12], who considered several sets, where the charm PDF is intrinsic or dynamically generated. The outcome they found depends on the PDF set used in the simulation: for some sets, the PDF-variation band is smaller than the scale-variation one, for others it is comparable or slightly larger.
## 4 NLO + parton-shower results
In this section we present full results obtained by showering the POWHEG events with general-purpose Monte Carlo (MC) programs, namely Herwig7.2 [42; 43] and Pythia8.3 [44; 45], including hadronization and underlying-event effects.
We have showered the Les Houches events produced with the POWHEG BOX RES code, employing the Herwig7.2 angular-ordered shower (which we label the "QTilde" shower), the default Pythia8.3 shower with fully local recoil [46] (which we denote the "Dipole" shower), as well as the Pythia implementation of the Vincia shower [47; 48].
For all the different types of shower we use the default settings of their parameters, and we only change the charm mass to agree with the value we have used to generate the \(Wc\) sample (see Eq. (13)). In Sect. 4.1 we provide additional details of the showers we have employed, while in Sect. 4.2 we investigate the impact of several ingredients provided by the general-purpose MC generators on the simulation of \(Wc\) events. Finally, in Sect. 4.3 we compare our results against the experimental data at \(\sqrt{s}=13\) TeV, taken by the CMS Collaboration.
### The Powheg Box Res matching with Herwig7.2 and Pythia8.3
#### 4.1.1 QTilde shower
The relevant parameters to shower the POWHEG-generated events with the QTilde shower are taken from the LHE-POWHEG.in input card distributed in the public release of the Herwig7.2 code. In particular the options
set /Herwig/Shower/ShowerHandler:MaxPtIsMuF Yes
set /Herwig/Shower/ShowerHandler:RestrictPhasespace Yes
instruct the shower to veto all the emissions with transverse momentum larger than the scalup variable, which is the hardness of the POWHEG emission. We set the charm mass to 1.5 GeV
set /Herwig/Particle/c:NominalMass 1.5*GeV
set /Herwig/Particle/cbar:NominalMass 1.5*GeV
and we switch off the decay of unstable hadrons, in order to analyse the generated events more quickly
set LesHouchesHandler:DecayHandler NULL
By default, the QTilde shower also includes QED emissions, which can be switched off with the option
set /Herwig/Shower/ShowerHandler:Interactions QCD
The running coupling is evaluated at two loops in the Catani-Marchesini-Webber scheme [49], with \(\alpha_{\mathrm{S}}^{\mathrm{CMW}}(m_{Z})=0.1186\).
#### 4.1.2 Dipole shower
We also considered the Pythia8.3 implementation of a dipole shower with a fully local recoil [46]
SpaceShower:dipoleRecoil = on
We employed the PowhegHooks class to veto emissions harder than the POWHEG one; all the options correspond to the default ones present in the main31.cmnd input card in the public release of the Pythia8.3 code. To generate only the QCD shower, we set
SpaceShower:QEDshowerByQ = off
SpaceShower:QEDshowerByL = off
TimeShower:QEDshowerByQ = off
TimeShower:QEDshowerByL = off
TimeShower:QEDshowerByGamma = off
TimeShower:QEDshowerByOther = off
and hadron decays are turned off with the flag
HadronLevel:Decay = off
The running coupling is evaluated at one loop in the \(\overline{\mathrm{MS}}\) scheme, and \(\alpha_{\mathrm{S}}^{\overline{\mathrm{MS}}}(m_{Z})=0.140\).
#### 4.1.3 Vincia shower
Unlike the Dipole shower, Vincia is an antenna shower. To use its implementation in Pythia8.3, we set
PartonShowers:model = 2
To properly include charm-mass effects, one has to include the option
Vincia:nFlavZeroMass = 3
as, by default, this number is set to 4, meaning that only top and bottom are treated as massive. The QED shower can be turned off with the option
Vincia:ewMode = 0
The running coupling is evaluated at two loops in the Catani-Marchesini-Webber scheme, with \(\alpha_{\mathrm{S}}^{\mathrm{CMW}}(m_{Z})=0.127\). Also in this case, we use the PowhegHooks class to avoid generating emissions harder than the POWHEG one, and we do not include unstable-hadron decays in our simulation.
### NLO + PS results with hadronization and underlying-event effects
In this section we discuss the impact of the parton shower, of the hadronization and of the underlying events on the results generated with our POWHEG BOX RES simulation. The results presented in this section do not include the QED shower.
We focus on the \(\mu^{+}\nu_{\mu}\bar{c}\) signature at \(\sqrt{s}=13\) TeV. We use the values of the parameters described in Sect. 2.3, and the B\({}_{2}\) setting of Table 2, i.e. the central values for the renormalization and factorization scales are computed using the underlying-Born kinematics, according to Eq. (10), and as damping function we use the \(f_{2}\) profile function of Eq. (11) with \(h=0.1\).
We begin by analyzing the effect of the parton shower on the POWHEG BOX distributions (PWG). If, at the end of the shower, an event contains more than one muon (or anti-muon) passing the charged leptonic cuts, only the hardest one is considered to reconstruct the \(W\) kinematics.
Since the aim of our analysis is to enhance the sensitivity of the measured cross section to the strange quark PDF, we proceed as follows: if more than one \(c\) quark or charmed hadron is present, they are all considered, with the exception of hadrons containing a \(c\bar{c}\) pair, which are counted as not charmed. In addition, when in an event the charmed-quark and the muon charges have the same sign (SS), the weight of the event is subtracted from the differential cross sections, while, when they have opposite sign (OS), the weight is added. This procedure effectively removes the background due to events where the final-state charm comes from the gluon splitting \(g\to c\bar{c}\), and not from an initial-state strange quark emitting a \(W\) boson. Charmed quarks originating from the gluon splitting have the same probability of being SS or OS, so their contribution cancels when the above procedure is applied.
Jets are reconstructed using the anti-\(k_{\mathrm{ T}}\) algorithm with \(R=0.5\). The charm content of a jet is defined as \(N_{c}-N_{\bar{c}}\), where \(N_{c}\) (\(N_{\bar{c}}\)) is the number of \(c\) (\(\bar{c}\)) quarks clustered in the jet, and the event weight that we use to fill a jet-related distribution is further weighted by the factor \(\pm(N_{c}-N_{\bar{c}})\), where we use the \(+\) sign when the muon has negative charge, and the \(-\) sign otherwise.
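A minimal sketch of this OS\(-\)SS weighting (all names are ours; charges in units of \(e\)):

```python
def os_ss_sign(muon_charge, charm_charge):
    """+1 for opposite-sign (OS) muon/charm pairs, -1 for same-sign (SS),
    so that the g -> c cbar background cancels statistically."""
    return -1 if muon_charge * charm_charge > 0 else 1

def cjet_weight(muon_charge, n_c, n_cbar):
    """Signed charm content +/-(N_c - N_cbar) used for jet distributions:
    '+' for a negatively charged muon, '-' otherwise."""
    return (n_c - n_cbar) if muon_charge < 0 else -(n_c - n_cbar)
```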
In the following, we consider both inclusive results (with the technical cut on the \(W\) boson virtuality of Eq. (17) always in place), as well as more exclusive ones, where we apply the following acceptance cuts
\[|\eta_{\mu}|<2.4\,,\quad p_{{}^{\perp\mu}}>26\ \mathrm{GeV},\quad m_{{}_{\mathrm{ TW}}}>20\ \mathrm{GeV},\quad|\eta_{c}|<2.4\,,\quad p_{{}^{\perp c}}>5\ \mathrm{GeV}, \tag{12}\]
used by the CMS Collaboration in their analyses. At the inclusive level, the parton shower acts non-trivially on the transverse-momentum distribution of the charm quark and of the charmed jet, as portrayed in Fig. 8. In particular, one can notice that the hard tail is depleted, and the region around 5 GeV is instead increased. The effect is milder for the \(c\)-jet, as radiation inside the cone with jet radius \(R=0.5\) is still collected in the jet, but more evident for the hadron: in fact, in the high transverse-momentum region, the ratio of the LHE results to the showered ones reaches values around 50% for the quark/hadron, and 10% for the charmed jet.
Figure 8: Effect of three different parton showers (QTilde in blue, Dipole in red and Vincia in green) on the events produced with the POWHEG BOX RES (PWG), in black, for the transverse momentum of the charmed quark (left panel) and of the charmed jet (right panel), for inclusive cuts. The grey band corresponds to the scale variations of the PWG distribution.
Figure 9: Effect of the underlying event (UE) and of the hadronization (had) on the NLO+PS distributions obtained with the Herwig7.2 QTilde shower for the transverse-momentum distribution of the charmed quark (left panel) and of the charmed jet (right panel) for inclusive cuts. The grey band corresponds to renormalization- and factorization-scale variation, obtained using the standard reweighting procedure implemented in the POWHEG BOX RES framework.
In order to be able to compare theoretical predictions with data, we also have to include hadronization corrections and the simulation of the underlying-event (UE) production. In Figs. 9 and 10 we illustrate their effect on the NLO+PS distributions obtained with the Herwig7.2 QTilde shower, in Fig. 9, and with the Pythia8.3 Dipole shower, in Fig. 10. In all cases we notice that the UE has a small impact on the charmed-quark distribution, except in the region \(p_{\perp c}<10\) GeV. On the other hand, the underlying event hardens the \(c\)-jet distribution, as it provides further particles that can be clustered in the jet. Hadronization corrections are large (up to 50%) and they soften the \(p_{\perp}\) of the charmed quark/hadron. Similar, but milder, effects are seen for the charmed jet.
Figure 11: Same as Fig. 8, but NLO+PS simulations have been supplemented with the underlying-event activity and the hadronization.
Figure 10: Same as Fig. 9, but for the Pythia8.3 Dipole shower.
In Fig. 11 we then compare the full simulations obtained with Herwig7.2 and Pythia8.3 against the PWG standalone results. With the exception of the very low transverse-momentum region, Herwig7.2 and Pythia8.3 distributions are in very good agreement. In particular we notice that now the charmed-hadron distributions, obtained with all the three showers, are essentially indistinguishable for \(p_{\perp c}>10\) GeV, while in Fig. 8 the Herwig7.2 result was quite different from the Pythia8.3 ones. The improved agreement at the particle level is not unexpected, as the results without the inclusion of the hadronization are very sensitive to the shower cutoff, which is tuned alongside the hadronization parameters.
In Fig. 12 we plot the pseudo-rapidity11 (left panel) and the transverse momentum (right panel) of the muon, after applying the cuts in Eq. (4.1). Leptonic distributions are indirectly affected by the shower, which modifies the momenta of the charmed quarks/hadrons. In particular, the softening of the charm quarks induced by the hadronization reduces the cross section. This reduction is clearly visible on the left panel of Fig. 12, with an almost uniform reduction of the cross section in the whole pseudo-rapidity range, of roughly 15%, with respect to the LHE result. In addition, all the three shower results are in very good agreement with each other. As far as the transverse momentum of the muon is concerned, shown on the right panel, we observe a significant shape variation, as the bulk of the distribution, peaked around \(p_{\perp\mu}\approx m_{W}/2\), gets depleted. Differences among the three showers amount to a few percent, and are much smaller than the scale-variation band.
Footnote 11: We remind the reader that since we include lepton-mass effects when generating events, as described in Sect. 2.3, the rapidity and pseudo-rapidity of the leptons differ.
Figure 12: Same as Fig. 11 for the muon pseudo-rapidity (left) and transverse momentum (right), and using the cuts in Eq. (4.1).
### Comparison with the CMS data
In this section we compare our results against the CMS measurements at \(\sqrt{s}=13\) TeV for the associated production of a \(W\) boson and a charm quark [6]. The \(W\) boson is identified from its decay products: only events with a muon with \(p_{\perp\mu}>26\) GeV and \(|\eta_{\mu}|<2.4\) are selected. The charm quarks are tagged by reconstructing the \(D^{*}(2010)\) mesons, which are required to have \(p_{\perp D^{*}}>5\) GeV and \(|\eta_{D^{*}}|<2.4\).12 The background arising from the \(g\to c\bar{c}\) splitting is removed by subtracting contributions where the charges of the resolved \(D^{*}\) meson and of the muon have the same sign, as described in Sect. 4.2. We employ the Rivet3 [50] plugin to analyze the events generated by our simulations. Since the \(W\) boson is reconstructed from dressed leptons, in this case we also include the effect of the QED shower. All the other parameters are identical to those presented in Sect. 4.2.
Footnote 12: These cuts correspond to the ones we have used in Sect. 4.2, with the exception that here no cut on \(m_{TW}\) is imposed.
The results of our comparison are illustrated in Fig. 13 for the inclusive \(W^{\mp}D^{*}(2010)^{\pm}\) process, on the left panel, for \(W^{+}D^{*}(2010)^{-}\), on the middle panel, and for the \(W^{-}D^{*}(2010)^{+}\) signature, on the right panel. Among all the showers, Vincia, in the default Pythia8.3 configuration, follows the data most closely, slightly underestimating them, while the Herwig7.2 results lead instead to a larger value of the cross section. We notice that the Dipole-shower distributions shown here are very similar to the ones presented in Ref. [12], where different PDF sets and tunes were employed. We stress, however, that we use the default tunes of the showers, and this distribution is particularly sensitive to the hadronization modelling. For example, the Herwig7.2 hadronization parameters have
Figure 13: Muon pseudo-rapidity in \(pp\to WD^{*}\) events at \(\sqrt{s}=13\) TeV, using the selection cuts of the CMS analysis of Ref. [6], which are implemented in the Rivet analysis CMS_2019_I705068. The experimental data and their combined statistical and systematic errors are drawn in black. The particle-level results obtained interfacing the POWHEG BOX RES with the Herwig7.2 (QTilde, in blue) and the Pythia8.3 showers (Dipole, in red, and Vincia, in green) are also shown together with their scale-uncertainty bands, produced by varying \(\mu_{\tiny\mbox{F}}\) and \(\mu_{\tiny\mbox{R}}\) in the PWG calculation, performing the 7-point scale variations of Eq. (12).
been tuned in Ref. [51] using LEP data at the \(Z\) mass. Since \(B\) mesons primarily decay into \(D\) ones, the charm hadronization is further affected by its interplay with the bottom hadronization. Thus, a dedicated tune would certainly allow for a better comparison with the data.
## 5 Conclusions
In this article we have presented the implementation of a NLO + parton-shower generator for massive-charm production in association with a leptonically-decaying \(W\) boson in hadron collisions in the POWHEG BOX RES framework. This process is particularly important for the determination of the strange-quark content of the proton.
All spin-correlation effects and off-shell contributions in the \(W\) decay are properly included, as are the non-diagonal elements of the Cabibbo-Kobayashi-Maskawa matrix.
The leading-order and the one-loop matrix elements have been computed analytically, allowing for a very fast and numerically-stable evaluation of the amplitudes.
Exploiting the flexibility of the POWHEG method in separating the real-radiation contribution into a finite part and a divergent one, the latter being exponentiated in the Sudakov form factor, we have implemented different ways to perform this separation and assessed the sensitivity of several kinematic distributions to this extra degree of freedom.
The main source of theoretical uncertainty remains the dependence on the factorization and renormalization scales, of the order of 10% for most kinematic distributions.
Events generated with our code have been interfaced with the Herwig7.2 QTilde shower, as well as with the Pythia8.3 default shower and the Vincia one. We have investigated the effects of the parton shower, of the hadronization and of the underlying-event activity on several kinematic distributions. In particular, hadronization corrections turned out to have the largest impact on the differential cross sections.
Particle-level results obtained with the three shower models we have used agree very well with each other, once hadronization corrections and the underlying-event simulation are also included.
Finally, we compared our results with the CMS data for the \(WD^{*}\) signature, finding good agreement within the scale-variation bands.
The fixed-order NLO part of the code we have presented in this paper was computed and implemented during S.F.R.'s Master thesis project. S.F.R. wants to acknowledge her co-advisors Federico Granata and Paolo Nason for guidance during the implementation of the project. S.F.R. also wants to thank Rob Verheyen for fruitful discussions on the Vincia shower. S.F.R. and C.O. thank Katerina Lipka for useful clarifications concerning the CMS Rivet analysis, and Tomas Jezo and Alexander Huss for useful comments on the manuscript. |
2308.02549 | Long term variability of Jupiter's northern auroral 8-micron CH4
emissions | We present a study of the long term variability of Jupiter's mid-infrared
auroral CH4 emissions. 7.7 - 7.9 micron images of Jupiter recorded by
Earth-based telescopes over the last three decades were collated in order to
quantify the magnitude and timescales over which the northern auroral hotspot's
CH4 emissions varies. We find that the ratio of the radiance of the poleward
northern auroral emissions to a lower-latitude zonal mean, henceforth 'Relative
Poleward Radiance' or RPR, exhibits a 37% variability over a range of
timescales. We searched for patterns of variability in order to test whether
seasonally-varying solar insolation, the 11-year solar cycle, or short-term
solar wind variability at Jupiter's magnetopause could explain the observed
evolution. The variability of the RPR exhibits a weak (r < 0.2) correlation
with the solar insolation received at Jupiter's high-northern latitudes. This
rules out the hypothesis suggested in previous work (e.g. Sinclair et al.,
2017a) that shortwave solar heating of aurorally-produced haze particles is the
dominant heating mechanism in the lower stratosphere. We also find the
variability exhibits negligible (r < 0.18) correlation with the monthly-mean
sunspot number, which rules out variability associated with the solar cycle. On
shorter timescales, we find moderate correlations of the RPR with solar wind
conditions at Jupiter in the preceding days before images were recorded. For
example, we find correlations of r = 0.45 and r = 0.51 of the RPR with the mean
and standard deviation on the solar wind dynamical pressure in the preceding 7
days. The moderate correlation suggests that either: 1) only a subset of solar
wind compressions lead to brighter, poleward, CH4 emissions and/or 2) a subset
of CH4 emission brightening events are driven by internal magnetospheric processes and
independent of the solar wind. | James A. Sinclair, Robert West, John M. Barbara, Chihiro Tao, Glenn S. Orton, Thomas K. Greathouse, Rohini S. Giles, Denis Grodent, Leigh N. Fletcher, Patrick G. J. Irwin | 2023-08-01T22:15:56Z | http://arxiv.org/abs/2308.02549v1 | # Long-term variability of Jupiter's northern auroral 8-\(\upmu\)m CH\({}_{4}\) emissions
###### Abstract
We present a study of the long term variability of Jupiter's mid-infrared CH\({}_{4}\) auroral emissions. 7.7-7.9 \(\upmu\)m images of Jupiter recorded by NASA's Infrared Telescope Facility, Subaru and Gemini-South over the last three decades were collated in order to quantify the magnitude and timescales over which the northern auroral hotspot's CH\({}_{4}\) emission varies. These emissions predominantly sound the 10- to 1-mbar pressure range and therefore highlight the temporal variability of lower-stratospheric auroral-related heating. We find that the ratio of the radiance of the poleward northern auroral emissions to a lower-latitude zonal-mean, henceforth 'Relative Poleward Radiance' or RPR, exhibits variability over a 37% range and over a range of apparent timescales. We searched for patterns of variability in order to test whether seasonally varying solar insolation, the 11-year solar cycle, or short-term solar wind variability at Jupiter's magnetopause could explain the observed evolution. The variability of the RPR exhibits a weak (r \(<\) 0.2) correlation with both the instantaneous and phase-lagged solar insolation received at Jupiter's high-northern latitudes. This rules out the hypothesis suggested in previous work (e.g. Sinclair et al. 2017a, 2018) that shortwave solar heating of aurorally produced haze particles is the dominant auroral-related heating mechanism in the lower stratosphere. We also find the variability exhibits negligible (r \(<\) 0.18) correlation with both the instantaneous and phase-lagged monthly-mean sunspot number, which therefore rules out a long-term variability associated with the solar cycle. On shorter timescales, we find moderate correlations of the RPR with solar wind conditions at Jupiter in the preceding days before images were recorded. For example, we find correlations of r = 0.45 and r = 0.51 of the RPR with the mean and standard deviation of the solar wind dynamical pressure in the preceding 7 days. The moderate correlation suggests that either: 1) only a subset of solar wind compressions lead to brighter, poleward CH\({}_{4}\) emissions and/or 2) a subset of CH\({}_{4}\) emission brightening events are driven by internal magnetospheric processes (e.g. Io activity) and independent of solar wind enhancements.
keywords: Aurorae; Infrared observations; Jupiter, magnetosphere; Jupiter, atmosphere +
Footnote †: journal: Icarus
## 1 Introduction
Auroral processes on Jupiter represent an extreme example of space weather due to the planet's large magnetic field and the internal plasma source in the form of Io's volcanism. Solar wind perturbations to the magnetosphere and
internal magnetospheric dynamics accelerate energetic ions and electrons into Jupiter's polar atmosphere, producing auroral emissions over a large range of the electromagnetic spectrum (e.g. Dunn et al. 2017, Hue et al. 2021, O'Donoghue et al. 2021). A significant amount of energy is ultimately deposited as deep as Jupiter's lower stratosphere (\(\sim\)1 mbar), heating the atmosphere (e.g. Drossart et al. 1993; Sinclair et al. 2018), enriching the abundances of higher-order hydrocarbons (e.g. Sinclair et al. 2017a; Sinclair et al. 2019b), thereby producing enhanced mid-infrared emission features of stratospheric CH\({}_{4}\) and its photochemical by-product species (e.g. Caldwell et al. 1980; Kim et al. 1985; Kostiuk et al. 1993; Flasar et al. 2004).
The auroral hotspots in both auroral regions have been observed using both Earth-based (e.g. Kostiuk et al. 1993; Drossart et al. 1993; Livengood et al. 1993; Kim et al. 2014; Sinclair et al. 2018) and spacecraft instrumentation (Flasar et al., 2004; Sinclair et al., 2017a; Sinclair et al., 2019a). Generally, the strongest mid-infrared auroral emissions, commonly termed the 'auroral hotspot', are observed spatially-coincident with the 'poleward' ultraviolet auroral emissions: enclosed within or magnetospherically-poleward of the main auroral oval (e.g. Grodent et al. 2015). Anecdotally, the auroral hotspots have been observed to vary in brightness, disappear and re-emerge, but the mechanisms driving this variability have remained uncertain. This is demonstrated in Figure 1, which shows one example where the northern auroral hotspot is absent and another where it is present.
In previous studies, inversions of the CH\({}_{4}\) emission spectra in order to derive the vertical temperature profile have revealed that the enhanced mid-infrared emissions result, in part, from two vertically-discrete layers of heating in the lower (\(\sim\)1 mbar) and upper (\(<\) 0.1 mbar) stratosphere (Kostiuk et al., 2016; Sinclair et al., 2017a,b, 2018; Sinclair et al., 2020). The upper-stratospheric heating is vertically coincident with the ultraviolet auroral emissions and is therefore expected to arise from the same processes that produce the ultraviolet auroral emissions: chemical heating, H\({}_{2}\) dissociation from excited states (Grodent et al., 2001), joule heating from Pedersen currents (e.g. Badman et al. 2015) and ion drag. These processes essentially move the base of the thermosphere to lower altitudes, as has been shown using general circulation modelling of Jupiter's thermosphere (Bougher et al., 2005).
The mechanism(s) responsible for the deeper 1-mbar heating has/have remained uncertain, though several hypotheses have been suggested in previous work. One suggestion was that the 1-mbar heating results from shortwave solar heating of haze particles (Sinclair et al., 2017a, 2018) produced by complex chemistry prevalent in Jupiter's auroral regions (e.g. Friedson et al. 2002, Wong et al. 2003). However, the results of this study rule out this hypothesis as the dominant source of heating, as detailed further in the Discussion.
Direct precipitation and heating of a higher-energy population of charged particles from the magnetosphere has also been suggested as the source of the lower-stratospheric heating (e.g. Sinclair et al. 2017a; Kim et al. 2017, 2023). However, at the time of writing, we consider this mechanism less viable for the following reasons. Firstly, ion and electron precipitation modelling demonstrates that negligible energy directly reaches pressures higher than \(\sim\)0.1 mbar (e.g. Ozak et al. 2010; Gustin et al. 2016; Houston et al. 2018). Electrons would have to be accelerated to near-relativistic, \(>\)1 MeV energies in order to precipitate at 1 mbar, and _in situ_ Juno measurements have not detected a sufficient downward component at such energies (e.g. Clark et al. 2017; Paranicas et al. 2018; Mauk et al. 2020). It is possible that an acceleration region exists below the altitudes of the Juno spacecraft in previous orbits. Measurements by Juno during its extended mission, where the northern polar region will be crossed at increasingly lower altitudes, may be able to determine whether such an acceleration region exists (Greathouse et al., 2021). Secondly, auroral-related heating is observed at \(\sim\)1 mbar and pressures lower than 0.01 mbar, with relatively less heating at the intermediate \(\sim\)0.1 mbar level (e.g. Kostiuk et al. 2016; Sinclair et al. 2018; Sinclair et al. 2020, 2023). For this pattern of heating to be explained by direct precipitation of electrons, there would have to be higher fluxes of 100-200 keV and 1-2 MeV electrons, which would respectively precipitate at 0.01 mbar and 1 mbar, compared to \(\sim\)500 keV electrons, which would precipitate at 0.1 mbar (Gustin et al., 2016). As far as we are aware, this has not been observed in _in situ_ Juno measurements: if anything, the energy spectra of downward electrons peak in the 100 - 500 keV range (e.g. Paranicas et al. 2018).
Most recently, Cavalie et al. (2021) observed counterrotating winds at \(\sim\)0.1 mbar coincident with the main auroral emissions, which they suggested were produced by ion-neutral collisions. This is possibly an extension of a similar counterrotating electrojet predicted and observed at thermospheric altitudes (e.g. Achilleos et al. 2001; Johnson et al. 2018). Atmospheric subsidence associated with this circulation may produce adiabatic heating poleward of the main auroral emission and at pressures higher than 0.1 mbar and thus may be responsible for the 1-mbar auroral-related
heating. The fact that a temperature minimum is observed at \(\sim\)0.1 mbar, as well as the fact that strong H\({}_{3}^{+}\) emissions are not observed poleward of the main auroral emissions, would suggest the atmospheric subsidence is limited to the lower stratosphere (p \(>\) 0.1 mbar). Dynamical modelling of this potential mechanism is required to explore whether it is the source of the 1-mbar auroral-related heating.
In this work, our goal was to determine the magnitude and timescales over which Jupiter's CH\({}_{4}\) mid-infrared auroral hotspot varies using data over the largest time range possible. We were motivated to perform this analysis for the following reasons. First, we would quantify the variability that has been observed anecdotally for decades. Second, in searching for patterns of temporal variability or lack thereof, we would be able to test the aforementioned hypotheses of the source of Jupiter's 1-mbar auroral-related heating. For example, if shortwave solar heating of haze particles is indeed the dominant source of heating at 1 mbar, given the \(\sim\)3 year radiative cooling lifetimes at this altitude (e.g. Zhang et al. 2013), one would expect the mid-infrared emissions from the auroral hotspot to vary in accordance with the
Figure 1: The left column shows example images of Jupiter recorded using the circular-variable filter (CVF) centered at 7.85 \(\mu\)m. The right column shows the corresponding polar projections with the solid, black line denoting the statistical-mean position of the ultraviolet main auroral emission (Bonfond et al., 2017). All images are in brightness temperature units according to the colorbar.
seasonally varying solar insolation over the course of a Jupiter year (12 Earth years) due to its \(\sim\)3\({}^{\circ}\) axial tilt.
We chose to focus our study only on the northern auroral hotspot (\(>\)55\({}^{\circ}\)N, 150 - 210\({}^{\circ}\)W, see Figure 1) since it is at a relatively lower latitude and therefore more observable from Earth-based observatories compared to the southern auroral hotspot. While the magnitude of CH\({}_{4}\) emissions of the northern auroral hotspot has been quantified in many previous studies over a collectively large time range (e.g. Caldwell et al. 1980; Kim et al. 1985; Flasar et al. 2004; Sinclair et al. 2018), differences in spatial resolution, spectral resolution, methods of reduction and analysis between different studies would hinder or complicate any interpretation of variability. Using emissions of C\({}_{2}\)H\({}_{6}\) of the auroral hotspot recorded over a large time range, Kostiuk et al. (2016) observed more variability of C\({}_{2}\)H\({}_{6}\) emission of Jupiter's northern auroral oval during periods of solar maxima compared to solar minima. However, the photochemical lifetimes of CH\({}_{4}\) and C\({}_{2}\)H\({}_{6}\) at a given altitude differ greatly (e.g. Moses et al. 2005; Hue et al. 2018) and so the temporal behavior of each molecule would also be expected to differ. We were therefore motivated to perform a robust and consistent analysis of data capturing Jupiter's northern auroral hotspot in CH\({}_{4}\) emission in order to quantify the magnitude and timescales over which variability occurs. To this goal, we chose to focus our analysis on Earth-based broadband imaging of Jupiter's CH\({}_{4}\) emission in filters centered between 7.7-7.9 \(\upmu\)m for the following reasons. First, there is an extensive record of broadband 7.7-7.9 \(\upmu\)m images of Jupiter, which improves the potential temporal sampling and particularly the time range of the study, in comparison to spectroscopic observations. Second, broadband imaging in this filter range is predominantly sensitive to the lower stratosphere (10 \(>\) p \(>\) 0.1 mbar), as demonstrated in Section 3.1, which allows us to test the aforementioned hypotheses for the 1-mbar auroral heating.
## 2 Observations
We searched an extensive record of 7 - 8 \(\upmu\)m images of Jupiter recorded at NASA's Infrared Telescope Facility, the Subaru telescope (both at the summit of Mauna Kea), Gemini-South at Cerro Pachon, and the Very Large Telescope (VLT) at Cerro Paranal. We omitted observations recorded at airmasses greater than 2.2. We also focused on images recorded at sub-observer longitudes in the 150 - 210\({}^{\circ}\)W (System III) longitude range, such that the northern auroral hotspot is on or near the central meridian. While several VLT images did satisfy these conditions, the reductions available at the time of the study suffered from overlap of the target and sky frames, which obscured higher-northern latitudes, and these images were therefore omitted from this work. The remaining 33 observations that satisfy the aforementioned criteria are detailed in Table 1 in chronological order.
### Imaging
At NASA's Infrared Telescope Facility (IRTF), 7.7 - 7.9 \(\upmu\)m images of Jupiter were recorded using the Mid-Infrared Array Camera, MIRAC (Hoffmann et al., 1993) from 1994 to 1997, the Mid-Infrared Large-Well Imager, MIRLIN (Ressler et al., 1994) from 1998 to 2001 and the Mid-Infrared Spectrometer and Imager, MIRSI (Deutsch et al., 2003) from 2003 to 2010.
Images of Jupiter at 7.8 \(\upmu\)m were acquired using the COMICS instrument (Kataza et al., 2000) at the 8-meter Subaru telescope between 2008 and 2020. The COMICS detector has a pixel scale of 0.13" and a field-of-view (FOV) of 45" x 32". The FOV could not capture the entire disk of Jupiter (37 - 45") in a single exposure and so 2 or 4 individual images were recorded offset from the center of the disk and subsequently mosaiced together during the reduction. Individual halves or quadrants were positioned such that they overlapped, which also served to remove detector artefacts. The Thermal-Region Camera and Spectrograph, T-ReCS (De Buizer and Fisher, 2005) is a mid-infrared imager and spectrograph mounted on the 8-meter Gemini South telescope at Cerro Pachon, Chile. Ultimately, only one T-ReCS image recorded on 11 February 2007 satisfied the aforementioned criteria.
### Reduction & Calibration
For consistency, the reduction and calibration of all images were repeated. The data were reduced using standard object-minus-sky (A-B) subtraction and flat-fielding. Individual images from dithering were coadded to remove detector artefacts. In order to enable a robust comparison of all images, we blurred the 8-meter Gemini/T-ReCS and Subaru/COMICS data according to the diffraction-limited spatial resolution achieved on a 3-meter primary (\(\sim\)0.7") and resampled the images at the 0.26" pixel scale of IRTF-MIRSI.
A limb-fitting procedure was used to assign latitudes, longitudes and viewing geometries to each pixel on the disk of Jupiter. Absolute calibration was performed by deriving the central meridian radiance vs latitude profile from the image as well as mid-infrared spectra recorded by Voyager-IRIS and Cassini-CIRS and scaling the former profile to match the latter. This approach is described in greater detail in Fletcher et al. (2009). While this does make the images insensitive to long-term changes, our analysis calculates the ratio of radiance between the auroral hotspot and a lower-latitude zonal mean (see further details in Section 3.2), in which case the absolute calibration scale factor cancels out. The noise-equivalent spectral radiances (NESR) of the images were calculated by determining the standard deviation of sky pixels more than 5" away from Jupiter's limb.
### Transmission Functions
The filters used for MIRSI at 7.7 \(\upmu\)m, COMICS at 7.8 \(\upmu\)m, and MIRAC and T-ReCS at 7.9 \(\upmu\)m are identical: each observatory has labelled the central filter wavelength differently depending on whether they used the effective wavelength before or after convolution with fore-optics and/or telluric transmission. A subset of MIRAC images and all MIRLIN images were recorded using a 5%-wide circular variable filter (CVF) centered at 7.85 \(\upmu\)m. Figure 2a compares the filter transmission functions of the observations used in this work with spectra of Jupiter, which were calculated as detailed in the next Section.
Figure 2: a) The transmission filter functions for the instruments used in this work according to the legend. Solid lines represent the filter transmission, dashed lines are the filter transmission convolved with the telluric transmission spectrum. Red represents MIRSI/MIRAC on the IRTF and COMICS on Subaru (all at the summit of Mauna Kea), blue represents the CVF setting of IRTF MIRSI/MIRLIN and magenta represents Gemini-South/T-ReCS on Cerro Pachon. b) forward model spectra of a lower-latitude at nadir (black) and the auroral hotspot at an emission angle of 60\({}^{\circ}\) (grey), c) as in b) but over the 7.80 - 7.90 \(\upmu\)m range such that individual lines can be seen.
## 3 Analysis
### Radiative Transfer Simulations
Our dataset includes images recorded over different filter bandpasses, observatory altitudes and relative Earth-Jupiter velocities. We performed radiative transfer modelling in order to quantify how these parameters would affect absolute, observed broadband radiances, and how such variations could be corrected.
We used the NEMESIS software (Irwin et al., 2008) in order to perform radiative transfer forward modelling. Calculations were performed using the line-by-line treatment over 7.0 to 8.5 \(\upmu\)m, at a spectral resolving power \(\lambda/d\lambda=130000\), which sufficiently resolves individual weak and strong emission lines of CH\({}_{4}\). The sources of spectroscopic line data for the H\({}_{2}\) S(1) quadrupole line feature, NH\({}_{3}\), PH\({}_{3}\), CH\({}_{4}\), CH\({}_{3}\)D, \({}^{13}\)CH\({}_{4}\), C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\) and C\({}_{2}\)H\({}_{6}\) were adopted from Fletcher et al. (2018). The vertical profiles of H\({}_{2}\), He, NH\({}_{3}\), PH\({}_{3}\) were adopted from Sinclair et al. (2018). The vertical profiles of CH\({}_{4}\) and its photochemical by-products were taken from a photochemical model based on Moses and Poppe (2017) but outputting the model results at 60\({}^{\circ}\)N and assuming an eddy diffusion coefficient profile that results in a CH\({}_{4}\) homopause of 7.3 mbar. This is to be consistent with the findings of Sinclair et al. (2020), which demonstrated the CH\({}_{4}\) homopause altitude is higher inside Jupiter's northern auroral oval region compared to elsewhere on the planet. We note that the NEMESIS radiative transfer code assumes local thermodynamic equilibrium (LTE) while the atmosphere is expected to deviate from LTE conditions at pressures lower than 0.1 mbar (e.g. Appleby 1990; Sinclair et al. 2020). Parameterizing non-LTE (NLTE) will be the subject of future work.
Calculations of synthetic spectra and vertical functional derivative calculations, which describe the relative contribution of each atmospheric level to the total observed radiance, were performed twice. First, at a nadir emission angle using the temperature profile adopted in the Moses and Poppe (2017) photochemical model. This uses results derived from the Infrared Space Observatory's Short Wave Spectrometer, ISO-SWS (Lellouch et al. 2001 and references therein) at pressures greater than 1 mbar and Galileo Atmospheric Structure Instrument, ASI (Seiff et al., 1998) measurements at pressures lower than 1 mbar. We use this calculation as representative of the spectrum and atmosphere at a lower latitude. Second, the calculation was performed at an emission angle of 60\({}^{\circ}\) using the temperature profile retrieved by Sinclair et al. (2020) using a mean of IRTF-TEXES measurements recorded on August 20, 2019 at 68\({}^{\circ}\)N at longitudes magnetospherically-poleward of and including the main auroral oval (Bonfond et al., 2017). This calculation is representative of the geometry and thermal structure of the auroral hotspot.
Figure 2b-c shows both synthetic spectra. For the auroral hotspot, emission from the CH\({}_{4}\) line cores is higher due to the upper-stratospheric heating and limb brightening whereas radiances in the continua are limb-darkened and dimmer than those of the nadir, lower-latitude spectrum. Both spectra, \(I(\lambda,v_{r})\), where \(I\) is the radiance, were doppler-shifted to simulate Jupiter being observed by a relative Earth-Jupiter velocity of \(v_{r}\). We performed this calculation using a range of \(v_{r}\) from -28 to +28 km/s, in increments of 1 km/s, in order to capture the range of relative Earth-Jupiter velocities of the data presented in this work (Table A.1). The doppler-shifted spectrum was then convolved with the filter response function, \(R(\lambda)\), the telluric transmission spectrum \(\tau(\lambda)\) and then integrated over the wavelength range (\(\lambda_{1}\) - \(\lambda_{2}\)) of the filter bandpass, as shown in Equation 1. The telluric transmission spectrum was calculated using the ATRAN1 software (Lord, 1992) assuming a zenith angle of 45\({}^{\circ}\), a precipitable water vapor of 1 mm and at altitudes of 8900 (Cerro Pachon) and 13800 feet (Mauna Kea). A similar calculation was then performed for the 2-dimensional vertical functional derivative image \(dI/dT(\lambda,p,v_{r})\), as shown in Equation 2.
Footnote 1: [https://atran.arc.nasa.gov/cgi-bin/atran/atran.cgi](https://atran.arc.nasa.gov/cgi-bin/atran/atran.cgi)
\[\bar{I}(v_{r})=\frac{\int_{\lambda_{1}}^{\lambda_{2}}I(\lambda,v_{r})*R(\lambda)*\tau(\lambda)d\lambda}{\int_{\lambda_{1}}^{\lambda_{2}}R(\lambda)*\tau(\lambda)d\lambda} \tag{1}\]
\[\frac{d\bar{I}}{dT}(p,v_{r})=\frac{\int_{\lambda_{1}}^{\lambda_{2}}\frac{dI}{dT}(\lambda,p,v_{r})*R(\lambda)*\tau(\lambda)d\lambda}{\int_{\lambda_{1}}^{\lambda_{2}}R(\lambda)*\tau(\lambda)d\lambda} \tag{2}\]
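The band-averaging of Eq. (1) is straightforward to reproduce numerically. Below is a minimal sketch in Python, assuming the model spectrum, filter response and telluric transmission are sampled on a common wavelength grid; the function and variable names are illustrative, not part of any reduction pipeline used in this work.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def band_averaged_radiance(wl, spectrum, filt, telluric, v_r, wl1, wl2):
    """Numerical version of Eq. (1): Doppler-shift the jovian spectrum
    by the Earth-Jupiter relative velocity v_r (km/s), weight it by the
    filter response and the telluric transmission, and integrate over
    the bandpass [wl1, wl2] (micron)."""
    # The telluric spectrum stays fixed in the observer frame while the
    # jovian lines shift; this is what makes the result depend on v_r.
    wl_shifted = wl * (1.0 + v_r / C_KMS)
    spec_obs = np.interp(wl, wl_shifted, spectrum)

    band = (wl >= wl1) & (wl <= wl2)
    weight = filt[band] * telluric[band]
    return (np.trapz(spec_obs[band] * weight, wl[band])
            / np.trapz(weight, wl[band]))
```

Evaluating this over a grid of \(v_{r}\) values reproduces curves of the kind shown in Figure 3a.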
Figure 3a shows the variations in absolute, broadband radiance at both the lower-latitude and auroral hotspot locations for each instrument/observatory and as a function of \(v_{r}\). At a given \(v_{r}\), the same spectrum would produce a range of absolute, convolved radiances due to differences in the filter bandpasses and the altitudes of each instrument/observatory. By calculating the ratio of radiance between the auroral hotspot and a lower latitude location, such differences are effectively removed for MIRSI/MIRAC/COMICS and the CVF MIRSI/MIRLIN filters, as demonstrated in Figure 3b. For T-ReCS on Gemini-South, the radiance ratio vs Jupiter radial velocity curve is offset by less than 5% with respect to the Mauna Kea instruments. Ultimately, our analysis adopts only one Gemini/T-ReCS measurement and the propagated uncertainty on the radiance ratio from random noise in the image and the standard deviation on pixels averaged is \(\sim\)10%. We therefore consider the offset in radiance ratio of the Gemini/T-ReCS measurement with respect to the Mauna Kea measurements to already be captured within this uncertainty.
For a given spectrum, higher, convolved radiances are recorded when Jupiter is observed at higher \(v_{r}\), since jovian CH\({}_{4}\) emission lines are doppler-shifted out of the telluric CH\({}_{4}\) lines to a greater extent. Higher relative velocities also better sample the strong cores of the CH\({}_{4}\) emission lines, which in turn sound higher and warmer altitudes in Jupiter's atmosphere. This is particularly true for Jupiter's auroral hotspot due to its high latitude/emission angle, and the associated limb-brightening effect, as well as the upper stratospheric heating present in this region. This is demonstrated in Figure 4, which shows the vertical functional derivatives for each instrument for both a nadir, lower-latitude and a high emission angle observation of the auroral hotspot. A consequence of this is that calculating the ratio in radiance between the auroral hotspot and a lower latitude does not remove the apparent variability in brightness due to changes in \(v_{r}\) alone, as shown in Figure 3b. We address this in two ways. First, we use the curves shown in Figure 3a-b to derive a correction for radial velocity, which we apply to the data as detailed further in Section 3.2. Second, in Section 3.2, we demonstrate that there is negligible correlation of the (uncorrected) radiance of the auroral hotspot with Jupiter's relative velocity at the time of measurements.
Figure 3: a) Variations in the total observed radiance as a function of the relative Earth-Jupiter velocity for MIRSI at 7.7 μm/COMICS at 7.8 μm/MIRAC at 7.9 μm (red), MIRAC/MIRSI CVF at 7.85 μm (light blue) and T-ReCS on Gemini-South (purple). Solid lines show radiance assuming a low latitude temperature profile and a nadir emission angle, dotted lines show radiance assuming a typical auroral hotspot temperature profile and an emission angle of 60\({}^{\circ}\). b) The ratio of the auroral-to-nominal radiance.
### Searching for temporal patterns
For each image, a centre-to-limb correction was applied in order to correct for limb brightening and foreshortening. The mean radiance between 60 - 75 \({}^{\circ}\)N and 160 - 200 \({}^{\circ}\)W was calculated, which is a latitude/longitude range that captures the emissions poleward of the main oval. The standard deviation of the mean, and the noise-equivalent radiance scaled by n\({}^{-0.5}\), where n is the number of diffraction-resolved pixels, were calculated and the larger of the two was adopted as the uncertainty on the average radiance. Similarly, the average radiance was calculated between 50\({}^{\circ}\)S - 50\({}^{\circ}\)N at all longitudes captured in the image at \(\mu>0.2\). The assumption is that the mean radiance between 50\({}^{\circ}\)S and 50\({}^{\circ}\)N should not exhibit any net time variability since seasonal variations in each hemisphere should cancel out, and smaller-scale variable or transient features, such as the equatorial oscillation (e.g. Giles et al., 2020; Antunano et al., 2021; Orton et al., 2023) and stratospheric waves (e.g. Fletcher et al., 2017), should have negligible effect on the mean radiance calculated over a large area.
Using the radiance vs \(v_{r}\) curves derived in Figure 3, a radial-velocity correction was applied to both sets of radiances. We then calculated the ratio of the auroral to lower-latitude radiance in these locations, henceforth described as the 'Relative Poleward Radiance' or RPR. This effectively cancels out any variations in absolute radiance due to differences in filters/observatory altitudes as well as absolute calibration. On nights where more than one image satisfied the aforementioned criteria, a mean RPR was calculated. We also performed the same calculation for
Figure 4: The logarithmic, vertical functional derivatives with respect to temperature, which describe the contribution of each atmospheric level to the total observed radiance at the top of the atmosphere. The left column shows the functional derivatives assuming a nadir emission angle and a lower-latitude temperature profile for a) MIRSI at 7.7 \(\upmu\)m/COMICS at 7.8 \(\upmu\)m/MIRAC at 7.9 \(\upmu\)m, b) MIRAC/MIRLIN CVF, and c) T-ReCS on Gemini-South. The right column shows similar results assuming a typical auroral hotspot temperature profile and calculated at an emission angle of 60\({}^{\circ}\).
radiances without the radial velocity correction to demonstrate that Jupiter's radial velocity is not the dominant driver of the northern auroral hotspot's variability (see Section 3.2.1).
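As a sketch of how the RPR and its uncertainty are assembled from a single calibrated, limb-corrected image, the following illustrates the region averaging described above; all array and parameter names are placeholders for whatever a given pipeline provides, not the actual code used here.

```python
import numpy as np

def relative_poleward_radiance(rad, lat, lon, mu, nesr, n_resolved):
    """RPR of one image: the mean auroral-region radiance divided by a
    lower-latitude mean, with the uncertainty of each mean taken as the
    larger of the standard deviation of the mean and NESR * n^-0.5."""
    def region_mean(mask):
        r = rad[mask]
        err = max(r.std(ddof=1) / np.sqrt(r.size),
                  nesr / np.sqrt(n_resolved))
        return r.mean(), err

    auroral = (lat >= 60) & (lat <= 75) & (lon >= 160) & (lon <= 200)
    lowlat = (lat >= -50) & (lat <= 50) & (mu > 0.2)

    a, da = region_mean(auroral)
    b, db = region_mean(lowlat)
    rpr = a / b
    return rpr, rpr * np.hypot(da / a, db / b)  # errors in quadrature
```

Because the RPR is a ratio, any multiplicative calibration factor common to both regions cancels out, which is the point of the construction.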
#### 3.2.1 Ruling out relative velocity
First, we compared the (uncorrected) RPR with Jupiter's relative velocity to confirm that the variability of the auroral hotspot is not an artefact of varying Doppler shift. We adopted the observer range rate as a function of time with a step of 1 day using the JPL Horizons system2, which takes into account the light-time aberration. Our analysis focuses on observations recorded when the northern auroral hotspot is on or near Jupiter's central meridian, at which time any rotational or horizontal-wind component is negligible compared to the relative velocity between Earth and Jupiter.
Footnote 2: [https://ssd.jpl.nasa.gov/horizons](https://ssd.jpl.nasa.gov/horizons)
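A query of this kind can be scripted, for instance, with astroquery's Horizons interface. The sketch below is an assumption about how one might retrieve the daily observer range rate (the 'delta_rate' ephemerides column); the body and location identifiers and column name should be checked against the Horizons documentation before use.

```python
from astroquery.jplhorizons import Horizons

# Jupiter (body 599) as seen from the geocenter ('500'), one step per
# day over the full observation period of this study.
jup = Horizons(id='599', location='500',
               epochs={'start': '1994-01-01', 'stop': '2021-12-31',
                       'step': '1d'})
eph = jup.ephemerides()
v_r = eph['delta_rate']  # Earth-Jupiter relative velocity (km/s) vs time
```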
Figure 5 shows a scatter diagram of the (uncorrected) relative poleward radiance as a function of Jupiter's absolute relative velocity. At first glance, there may appear to be more variability of the RPR at higher absolute radial velocities. However, we consider this a measurement bias since a large proportion of measurements were performed at higher relative velocities. We derive a correlation coefficient of -0.04 and therefore a negligible correlation of the RPR with Jupiter's relative velocity. We therefore conclude that the variability of the auroral hotspot is not a result of Jupiter's varying radial velocity with respect to Earth.
#### 3.2.2 Longer-term solar insolation
Figure 6a shows the variability of the northern auroral hotspot as a function of time. We find the CH\({}_{4}\) emissions of Jupiter's northern auroral region exhibit significant temporal variability over a range of \(\sim\)37%. We reproduce the result described in Sinclair et al. (2019b) and Sinclair et al. (2023), which demonstrated daily variability of the northern auroral CH\({}_{4}\) emissions. Variability is observed over a range of timescales from days to years. However, we suspect that the northern auroral hotspot in CH\({}_{4}\) emissions exhibits daily variability in response to magnetospheric and solar-wind events and will appear to exhibit erratic variability when sampled irregularly/sparsely by the data in this work.
First, we test whether the variability of the northern auroral hotspot is linked to the varying solar insolation received at Jupiter's high-northern latitudes over the course of a Jupiter year (12 Earth years). JPL Horizons was used to compute the subsolar latitude and heliocentric range of Jupiter from 1975 to 2030 in steps of 2 hours or 1/5th of a Jupiter rotation. For each timestep, the solar zenith angle and solar insolation at 65\({}^{\circ}\)N, 180\({}^{\circ}\)W were calculated. The result was then smoothed using a triangular function with a width of 30 (Earth) days thereby removing diurnal variations. Figure 6a compares the variability of the RPR with the 30-day or monthly-mean solar insolation. Both visually and
Figure 5: The relative poleward radiance (RPR) of Jupiter’s auroral hotspot compared to the absolute Earth-Jupiter relative velocity. The correlation coefficient is shown in the top-left.
quantitatively by calculation of the correlation coefficient (r = 0.176), we find there is negligible correlation of the radiance of the northern auroral region with instantaneous solar insolation.
Given that the observations predominantly sound the 10- to 1-mbar level (Section 4), where the thermal inertia timescale of the atmosphere is on the order of several Earth years (Zhang et al., 2013), we next explored whether the radiance of the auroral hotspot varied in accordance with solar insolation but with a phase lag. Figure 6b shows the correlation coefficient of the RPR with sub-solar latitude with a phase lag from 0 to 1 Jovian year. As shown, there is only marginal change in the correlation coefficient when a phase lag is imposed on the solar insolation. The maximum and minimum correlation coefficients of 0.18 and -0.19 occur at phase lags of 0.93 and 0.45 Jupiter years, respectively, neither of which is high enough to conclude a correlation exists.
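The phase-lag sweep itself reduces to a short loop. The following is a hedged sketch in which the observation epochs, RPR values and smoothed insolation series are assumed to be available as arrays on a common time scale; all names are illustrative.

```python
import numpy as np

def lagged_correlation(t_obs, rpr, t_grid, series, lags):
    """Correlation of the RPR with a reference time series (e.g. the
    30-day-smoothed insolation) evaluated 'lag' days before each
    observation epoch; all times on a common scale (e.g. Julian days)."""
    r = []
    for lag in lags:
        shifted = np.interp(t_obs - lag, t_grid, series)
        r.append(np.corrcoef(rpr, shifted)[0, 1])
    return np.array(r)

# e.g. lags from 0 to 1 Jovian year (~4333 days):
# r_lag = lagged_correlation(t_obs, rpr, t_grid, insolation,
#                            np.linspace(0.0, 4333.0, 200))
```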
#### 3.2.3 Longer-term solar activity
Second, we explored whether the radiance of the northern auroral hotspot exhibited correlation with the 11-year solar cycle. Figure 7a compares the variability of the RPR with the monthly mean sunspot number, as taken from NOAA3. As in Section 3.2.2, we also tested whether a significant correlation coefficient existed with the phase-lagged monthly mean sunspot number from 0 to 11 years. The results are shown in Figure 7b.
Figure 6: a) The relative poleward radiance (RPR) of the northern auroral hotspot as a function of time, according to the left-hand axis. Points are colored according to the observatory/instrument used as shown in the legend. The variability is compared to the 30-day mean solar insolation at 65\({}^{\circ}\)N, 180\({}^{\circ}\)W according to the right-hand axis. The correlation coefficient of the RPR with solar insolation is shown in the top-right of the panel. b) The correlation coefficient of the RPR vs phase-lagged solar insolation.
With zero phase lag, we find a near-zero (r \(<\) 0.05) correlation of the variability of the auroral hotspot with the monthly-mean sunspot number. We find a maximum correlation of 0.19 when introducing a phase lag of 7.6 (Earth) years, which we also consider negligible. A substantially larger minimum correlation coefficient (r = -0.50) is found when adopting a phase lag of \(\sim\)3.6 Earth years. A negative correlation of the auroral hotspot radiance with solar activity would be physically-implausible and so we dismiss this as a sampling artefact.
### Short-term solar wind conditions
Finally, we explored whether the radiance of the northern auroral hotspot exhibited correlation with short-term solar wind conditions at Jupiter. OMNI-measured solar wind conditions at Earth (Thatcher and Muller, 2011) and the Tao et al. (2005) solar wind propagation model were used to predict the solar wind dynamical pressure, p\({}_{dyn}\), impinging on Jupiter's magnetosphere from 1994 to 2021.
First, we compared the RPR with the instantaneous solar wind dynamical pressure taking into account the light-time aberration, and found a correlation of r = 0.26, as shown in Figure 8a. As noted in previous work (e.g. Sinclair et al. 2019a), solar wind propagation models are 1-dimensional and therefore most accurate when the Earth-Sun-Jupiter angle is small, i.e. when Jupiter is close to opposition. Zieger and Hansen (2008) quantified the uncertainty on a similar solar wind propagation model and estimated a 12-hour uncertainty on timing and 38% error on magnitude when Jupiter was within 60\({}^{\circ}\) of opposition. In using only the 14 measurements recorded when Jupiter was within
Figure 7: a) The relative poleward radiance (RPR) of the northern auroral hotspot as a function of time, according to the left-hand axis. Points are colored according to the observatory/instrument used as shown in the legend. The variability is compared to the monthly-mean sunspot number as a metric of solar activity according to the right-hand axis. The correlation coefficient of the RPR with sunspot number is shown in the top-right of the panel. b) The correlation coefficient of the RPR vs phase-lagged monthly-mean sunspot number.
60\({}^{\circ}\) of opposition (shown in red in Figure 8a), the correlation coefficient was re-calculated and we found a negligible increase in correlation to r = 0.28.
Similar calculations were repeated using the mean and standard deviation of solar wind dynamical pressure in the 3 and 7 days preceding the Jupiter images in order to test whether there was a correlation with active and variable periods of solar wind conditions. We find there is negligible correlation when performing the calculation using all 33 measurements. However, in using only the 14 measurements recorded when Jupiter was within 60\({}^{\circ}\) of opposition, when solar-wind propagation model results are more certain, we find moderate correlations of the relative poleward radiance with all four aforementioned metrics of solar wind activity and variability. For example, the RPR exhibits an r = 0.51 correlation with the standard deviation of the solar wind dynamical pressure in the 7 days preceding the Jupiter images.
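The preceding-window statistics reduce to a simple masked aggregation, sketched below with illustrative names; the propagated p\({}_{dyn}\) series and image epochs are assumed given.

```python
import numpy as np

def preceding_window_stats(t_obs, t_sw, p_dyn, window_days):
    """Mean and standard deviation of the propagated solar wind
    dynamical pressure p_dyn in the window preceding each image epoch."""
    means, stds = [], []
    for t in t_obs:
        sel = (t_sw >= t - window_days) & (t_sw < t)
        means.append(p_dyn[sel].mean())
        stds.append(p_dyn[sel].std(ddof=1))
    return np.array(means), np.array(stds)

# correlations of the RPR with the 7-day preceding mean and scatter:
# m7, s7 = preceding_window_stats(t_obs, t_sw, p_dyn, 7.0)
# r_mean = np.corrcoef(rpr, m7)[0, 1]
# r_std = np.corrcoef(rpr, s7)[0, 1]
```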
## 4 Discussion
We collated an extensive record of 7.7 - 7.9 \(\upmu\)m images of Jupiter from Earth-based telescopes over a period of three decades. These images allowed us to quantify the timescales and magnitude over which the northern auroral hotspot
Figure 8: Correlations of the relative poleward radiance (RPR) with a) the instantaneous solar wind dynamical pressure (p\({}_{dyn}\) (nPa)), the mean p\({}_{dyn}\) in the preceding b) 3 days, c) 7 days, and the standard deviation of p\({}_{dyn}\) in the preceding d) 3 days and e) 7 days. The correlation coefficient, r, in each case is given in the bottom left of each panel in black. Data corresponding to times when Jupiter was within 60\({}^{\circ}\) of opposition are colored in red and the correlation coefficient is also shown.
CH\({}_{4}\) emissions varied.
Figure 6 shows the relative poleward radiance or RPR of the northern auroral hotspot over the 1994 to 2021 time period. The RPR is the ratio in radiance of the northern auroral hotspot with a lower-latitude zonal mean, which removes/minimizes uncertainty due to absolute calibration, differences in filter bandpasses and observatory altitudes. We find the CH\({}_{4}\) emissions of Jupiter's northern auroral hotspot exhibit significant temporal variability over a range of \(\sim\)37%. This reproduces the result described in Sinclair et al. (2019b) and Sinclair et al. (2023), which demonstrated daily variability of the northern auroral CH\({}_{4}\) emissions using different datasets. Variability is observed over a range of timescales from days to years. We suspect that the CH\({}_{4}\) emission of the northern auroral hotspot exhibits daily variability in response to magnetospheric and solar-wind events and will appear to exhibit erratic variability when sampled irregularly/sparsely.
First, we tested whether the variability of the northern auroral hotspot could be explained by the varying solar insolation at Jupiter's high-northern latitudes. We found there was negligible correlation with both the instantaneous solar insolation as well as the phase-lagged solar insolation. This allows us to explicitly rule out the hypothesis we suggested in previous work (e.g. Sinclair et al. 2017a,b) that shortwave solar heating of haze particles produced by the auroral chemistry (e.g. Wong et al. 2000; Friedson et al. 2002; Wong et al. 2003) is the dominant source of 1-mbar auroral-related heating. If this source of heating were significant, given the \(\sim\)3 year thermal inertia/radiative lifetimes at this altitude (e.g. Zhang et al. 2013), 1-mbar temperatures would vary in accordance with seasonally-changing solar insolation due to Jupiter's \(\sim\)3\({}^{\circ}\) axial tilt. The majority of the broadband 7.7-7.9 \(\upmu\)m CH\({}_{4}\) emissions originate from close to this pressure level (Figure 4). Thus, such hypothetical 1-mbar temperature variations would be observed as a long-term modulation of the 7.7-7.9 \(\upmu\)m CH\({}_{4}\) emissions in accordance with solar insolation, which is not observed. While solar heating of haze particles likely has a non-negligible contribution to the radiative balance of the auroral stratosphere, it cannot be the dominant source of auroral heating at 1 mbar.
Second, we also tested whether the radiance of the northern auroral hotspot varied in accordance with the 11-year solar cycle. We found negligible correlation of the variability of the hotspot with the monthly-mean sunspot number as a metric of solar activity over the 11-year solar cycle. We likewise found negligible correlation when introducing a phase lag to the monthly mean sunspot number. We therefore dismiss the hypothesis that the CH\({}_{4}\) emissions of the auroral hotspot exhibit a longer-term variability associated with the 11-year solar cycle. The temporal sampling of our data does not allow us to confidently conclude whether or not the CH\({}_{4}\) emissions of the auroral hotspot exhibit more variability during periods of solar maxima, as has been interpreted for C\({}_{2}\)H\({}_{6}\) emissions (Kostiuk et al., 2016). For example, we do see high variability in the 1999-2002 time period in proximity to the maximum of Solar Cycle 23; however, we do not have a similar sampling during periods of solar minima to say whether the auroral hotspot is as variable, or more/less variable.
Third, we tested whether the auroral hotspot exhibited correlation with short-term solar wind conditions. Solar-wind conditions measured at Earth using OMNI (Thatcher and Muller, 2011) and the Tao et al. (2005) solar wind propagation model were used to predict the solar wind conditions impinging on Jupiter's magnetosphere. Such solar wind propagation models are expected to have the lowest uncertainty when Jupiter is within 60\({}^{\circ}\) of opposition, i.e. when the Earth-Jupiter-Sun angle is small. We therefore tested the correlation of the radiance of the auroral hotspot with solar wind conditions using: 1) all available Jupiter images, 2) only those recorded when Jupiter was within 60\({}^{\circ}\) of opposition - see Section 3.3. Here, we only discuss the results of the latter. The radiances of the northern auroral hotspot exhibit negligible correlation with the instantaneous solar wind dynamical pressure, p\({}_{dyn}\), at Jupiter (taking into account the light-time aberration). However, we find moderate correlations of the auroral hotspot with the mean p\({}_{dyn}\) and the standard deviation of p\({}_{dyn}\) in the days preceding the Jupiter images. For example, we find the relative poleward radiance exhibits an r = 0.45 correlation with the preceding 7-day mean p\({}_{dyn}\) and an r = 0.51 correlation with the standard deviation on the 7-day mean. This demonstrates that for a subset of the times the northern auroral hotspot exhibited enhanced CH\({}_{4}\) emissions, there were strong and variable solar wind conditions present in the preceding days/week. The fact that the correlation is not higher likely denotes that either internal magnetospheric processes, independent of the solar wind (e.g. Io's output) can also produce a brightening of the mid-infrared auroral hotspot and/or only a subset of solar wind enhancements lead to brightening of the auroral emissions. In a similar analysis of Jupiter's ultraviolet auroral emissions using HISAKI-EXCEED, Kita et al. (2016) noted correlations of a similar magnitude (\(\sim\)0.44) between the total auroral power and the preceding variation in solar wind dynamical pressure. However, their
spatially-coarse observations could not distinguish whether the variations in auroral power were arising from the main emission, the poleward emissions or outer emissions. Using the same dataset but focusing on the emissions from the Io Plasma torus (IPT) at 5 - 7 R\({}_{J}\), Murakami et al. (2016) also demonstrated that dawn-dusk asymmetries were amplified during periods of solar wind compressions. This demonstrated that solar wind variations could ultimately perturb Jupiter's inner magnetosphere.
Using observations recorded in 2013, 2018 and 2020, Kim et al. (2020, 2022) concluded there was negligible correlation of the 3-\(\upmu\)m auroral CH\({}_{4}\) emissions with external solar wind conditions at Jupiter, in contrast to the moderate correlations of the 8-\(\upmu\)m emissions reported in this work. One possible source of this difference is that the 3-\(\upmu\)m emissions predominantly sound microbar pressures while the 8-\(\upmu\)m broadband emissions predominantly sound millibar pressures, and therefore exhibit contrasting temporal behaviours due to differences in the thermal inertia and radiative lifetimes (Zhang et al., 2013). Another possible reason is that the observations analyzed by Kim et al. (2020) and Kim et al. (2022) by coincidence captured brightenings of the CH\({}_{4}\) emissions in response to internal magnetospheric processes without a solar wind perturbation or events where solar wind variations led to no apparent change in CH\({}_{4}\) emissions. Further observations of Jupiter's auroral CH\({}_{4}\) emissions at 3 \(\upmu\)m and 8 \(\upmu\)m may better highlight similarities and differences in their temporal behaviour.
As demonstrated in Figure 4, broadband 7.7 - 7.9 \(\upmu\)m imaging of Jupiter's northern auroral hotspot predominantly sounds the lower stratosphere (\(\sim\)1 mbar). The modulation of the northern auroral hotspot's 7.7 to 7.9 \(\upmu\)m CH\({}_{4}\) emission by a subset of solar wind events suggests that variable solar wind conditions and their perturbing effects on the magnetosphere can ultimately modulate 1-mbar temperatures on timescales of several days. Particle precipitation modelling (Ozak et al., 2010; Gustin et al., 2016; Houston et al., 2018) and _in-situ_ measurements by the Juno spacecraft (e.g. Clark et al. 2017; Paranicas et al. 2018; Mauk et al. 2020) demonstrate that a negligible flux of magnetospheric particles deposit their energies at pressures deeper than 0.1 mbar. Thus, we suggest that the modulation of 1-mbar temperatures cannot result from _direct_ deposition of energy from the magnetosphere. Instead, we favor the explanation first suggested in Cavalie et al. (2021), who observed counterrotating jets at \(\sim\)0.1 mbar, coincident with the southern auroral oval, and inferred that a similar circulation is likely present at the northern auroral oval too. They suggested that ion-neutral collisions drive the formation of the vortex, and may possibly be an extension of a similar counterrotating electrojet predicted and observed at thermospheric altitudes (e.g. Achilleos et al. 2001; Johnson et al. 2018). This could produce atmospheric subsidence poleward of the main oval and therefore adiabatic heating at pressures higher than 0.1 mbar. Magnetospheric dynamics, driven either by internal processes (e.g. Io volcanism) or by solar wind compressions, could perhaps accelerate the vortex, thereby increasing the magnitude of 1-mbar heating. The vortex would trap warm gas, augmenting the thermal contrast with regions outside of the auroral hotspot. Periods of quiescent solar wind conditions could perhaps allow the vortex to decelerate, even dissipate, which would remove adiabatic heating as a heat source and also enable warm gas to be advected/diffused away from the hotspot. However, time-dependent circulation modelling would be required to determine whether this proposed mechanism can produce the magnitude of heating and variability observed of the auroral hotspot.
## 5 Conclusions
We present a study of the long term variability of Jupiter's mid-infrared CH\({}_{4}\) auroral emissions. 7.7-7.9 \(\upmu\)m images of Jupiter recorded by NASA's Infrared Telescope Facility, Subaru and Gemini-South over the last three decades were collated. A ratio of the radiance of the poleward northern auroral emissions with a lower-latitude zonal-mean, henceforth 'Relative Poleward Radiance' or RPR, was calculated from each image, and compared to quantify the magnitude and timescales over which the auroral hotspot varies. We find the RPR exhibits erratic, variable behaviour on a range of timescales. The variability of the RPR exhibits a weak (r \(<\) 0.2) correlation with both the instantaneous and phase-lagged solar insolation received at Jupiter's high-northern latitudes. This rules out the hypothesis suggested in previous work (e.g. Sinclair et al. 2017, 2018) that shortwave solar heating of aurorally-produced haze is the dominant auroral-related heating mechanism in the lower stratosphere. We also find the variability exhibits negligible (r \(<\) 0.18) correlation with both the instantaneous and phase-lagged monthly-mean sunspot number and therefore rule out a long-term variability associated with the solar cycle. On shorter timescales, we find moderate correlations of the RPR with solar wind conditions at Jupiter in the preceding days before images were recorded. For example, we find correlations of r = 0.45 and r = 0.51 of the RPR with the mean and standard deviation of the solar wind dynamical pressure
in the preceding 7 days. The moderate correlation suggests that either: 1) only a subset of solar wind compressions lead to brighter, poleward CH\({}_{4}\) emissions and/or 2) a subset of CH\({}_{4}\) emission brightening events are driven by internal magnetospheric processes (e.g. Io activity) and independent of solar wind enhancements. |
2301.09117 | Design-based individual prediction | A design-based individual prediction approach is developed based on the
expected cross-validation results, given the sampling design and the
sample-splitting design for cross-validation. Whether the predictor is selected
from an ensemble of models or a weighted average of them, valid inference of
the unobserved prediction errors is defined and obtained with respect to the
sampling design, while outcomes and features are treated as constants. | Li-Chun Zhang, Danhyang Lee | 2023-01-22T13:02:38Z | http://arxiv.org/abs/2301.09117v1 | # Design-based individual prediction
###### Abstract
A design-based individual prediction approach is developed based on the expected cross-validation results, given the sampling design and the sample-splitting design for cross-validation. Whether the predictor is selected from an ensemble of models or a weighted average of them, valid inference of the unobserved prediction errors is defined and obtained with respect to the sampling design, while outcomes and features are treated as constants.
**Keywords:** Probability sampling; Ensemble learning; Rao-Blackwellisation.
## 1 Introduction
Valid inference of the unobserved individual prediction errors is a fundamental issue to supervised machine learning, no matter how confident one is about the obtained predictor. An IID model of the prediction errors is commonly assumed for algorithm-based learning, such as random forest, support vector machine or neural network, which could be misleading in situations where the available observations are not obtained in a completely random fashion.
We define and develop a design-based approach to individual prediction, which requires the sample for learning to be selected by a probability design. Whether the adopted predictor is selected from an ensemble of models or a weighted average of them, the proposed approach can provide valid inference of the associated risk with respect to the known sampling design, "irrespectively of the unknown properties of the target population studied" (Neyman, 1934).
For an illustration of the conceptual issue at hand, suppose on observing an outcome value \(y\) in a subset \(s\) of a finite collection of units \(U\), denoted by \(\{y_{i}:i\in s\}\) and \(s\subset U\), one would like to predict the \(y\)-value for each unit out of \(s\) by \(\mu(s)=\sum_{i\in s}y_{i}/n\), where \(n\) is the number of units in \(s\). How can one infer about the loss \(D_{s}=\sum_{j\in U\setminus s}\{\mu(s)-y_{j}\}^{2}\) that is unobserved?
One possibility is to evaluate \(D_{s}\) using an assumed model. Under the model that \(y_{i}\) is independent and identically distributed (IID), for any \(i\in U\), we have
\[E_{M}(D_{s}\mid s)=(N-n)\big{(}1+n^{-1}\big{)}\sigma^{2}\]
where \(E_{M}(\cdot|s)\) denotes expectation with respect to the IID model conditional on the given subset \(s\), \(\sigma^{2}\) is the variance of \(y_{i}\) under the model, and \(N\) is the number of units in \(U\).
A fundamentally different, design-based approach would be possible if \(s\) is selected from \(U\) by a known sampling design, denoted by \(s\sim p(s)\), where \(\sum_{s\in\Omega}p(s)=1\) and \(\Omega\) contains all the possible samples from \(U\). For instance, we have \(p(s)=1/\binom{N}{n}\) if \(s\) is selected from \(U\) by simple random sampling without replacement. We have then
\[E_{p}(D_{s})=(N-n)\big{(}1+n^{-1}\big{)}S_{y}^{2}\]
where \(E_{p}\) denotes expectation with respect to \(p(s)\), while \(y_{U}=\{y_{i}:i\in U\}\) are treated as constants, \(S_{y}^{2}=\sum_{i\in U}(y_{i}-\bar{Y})^{2}/(N-1)\), and \(\bar{Y}=\sum_{i\in U}y_{i}/N\).
Since \(s_{y}^{2}=\sum_{i\in s}\{y_{i}-\mu(s)\}^{2}/(n-1)\) is both unbiased for \(\sigma^{2}\) under the IID model and unbiased for \(S_{y}^{2}\) under simple random sampling, numerically one would obtain the same estimate of the expected loss, although the two measures have completely different interpretations. While some assumed model would be necessary for the measure \(E_{M}(D_{s}|s)\) if the selection mechanism of \(s\) is unknown, it could be invalid if the observed data distribution actually differs to that of the unobserved ones. While the design-based measure \(E_{p}(D_{s})\) requires one to plan and implement the simple random sampling design here, it is necessarily valid because the sampling distribution \(p(s)\) is known.
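The two closed-form expectations above can be checked by direct Monte Carlo simulation over repeated simple random samples, treating \(y_{U}\) as fixed constants. A minimal sketch in Python (illustrative values throughout):

```python
import numpy as np

rng = np.random.default_rng(0)

# a fixed finite population: y_U are constants under the design
N, n = 200, 30
y = rng.gamma(2.0, 1.5, size=N)   # any fixed values will do
S2 = y.var(ddof=1)                # S_y^2

# Monte Carlo over simple random samples without replacement
reps = 20000
D = np.empty(reps)
for r in range(reps):
    s = rng.choice(N, size=n, replace=False)
    in_s = np.zeros(N, dtype=bool)
    in_s[s] = True
    mu = y[in_s].mean()
    D[r] = ((mu - y[~in_s]) ** 2).sum()

print(D.mean())                          # approximates E_p(D_s)
print((N - n) * (1 + 1 / n) * S2)        # closed form above; they agree
```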
The general design-based individual prediction approach we develop in this paper can use any model or algorithm-based predictor learned from the observed sample. The prediction errors over the out-of-sample units are evaluated with respect to the known sampling design, while all the outcomes and relevant features in the population are treated as constants. We note that Sanguiao-Sande & Zhang (2021) study design-unbiased learning for estimating population totals such as \(Y=\sum_{i\in U}y_{i}\) in the same spirit.
By the design-based approach, there is no need to assume that a true model exists for \(y_{U}\), or that one is able to identify and learn the true predictor under repeated sampling. It is then natural to combine an ensemble of different predictors (e.g. Dietterich, 2000; Zhou, 2012; Sagi & Rokach, 2018; Dong et al., 2020) in addition to selecting a single best predictor. Design-based ensemble learning by voting or averaging will be developed.
## 2 Theory
In addition to \(y_{U}\), denote by \(x_{U}=\{x_{i}:i\in U\}\) the vectors of features. Given any sample \(s\), let \(\mu(x,s)\) be the predictor for the out-of-sample units, \(R=U\setminus s\). Regardless of how \(\mu(x,s)\) is obtained from \(y_{s}=\{y_{i}:i\in s\}\) and \(x_{s}=\{x_{i}:i\in s\}\), its unobserved _total squared error of prediction (TSEP)_ is given by
\[D(s;\mu)=\sum_{i\in R}\left\{\mu(x_{i},s)-y_{i}\right\}^{2}\.\]
We define the _risk_ of \(\mu\) to be the expectation of \(D(s;\mu)\) over repeated sampling of \(s\sim p(s)\), denoted by
\[\tau(\mu)=E_{p}\left\{D(s;\mu)\right\}. \tag{1}\]
Design-based prediction is guided by the risk (1) over the sampling distribution \(p(s)\). First, valid estimation of \(\tau\) is developed below, where \(s\) is random but \(y_{U}\) and \(x_{U}\) are treated as constants associated with the given \(U\). Then, design-based approaches to ensemble learning by voting and averaging are described.
### Sample split and associated TSEP estimation
Denote by \(s_{1}\cup s_{2}=s\) and \(s_{1}\cap s_{2}=\emptyset\) a _training-test sample split_, where \(s_{1}\) is selected by a sample-splitting design, denoted by \(q(s_{1}|s)\). For instance, \(s_{1}\) of a given size can be sampled from \(s\) with or without replacement. In \(T\)-fold cross-validation, \(s\) is first randomly divided into \(T\) clusters and then each cluster is selected as \(s_{2}\) one by one systematically, yielding \(s_{1}=s\setminus s_{2}\) accordingly.
Given any sample-split \((s_{1},s_{2})\), and any predictor \(\mu(x,s_{1})\) that is trained on \(s_{1}\), let its prediction error for any \(i\notin s_{1}\) be
\[e_{i}(\mu,s_{1})=\mu(x_{i},s_{1})-y_{i}\]
which can be observed for any \(i\in s_{2}\) but not for any \(i\in R\). However, the TSEP
\[D_{R}(s_{1};\mu)=\sum_{i\in R}e_{i}(\mu,s_{1})^{2}\]
on applying \(\mu(x,s_{1})\) to \(R=U\setminus s\) can be estimated unbiasedly as follows.
Given the training set \(s_{1}\), for any \(i\in U\setminus s_{1}\), let
\[\pi_{2i}=\Pr(i\in s_{2}\mid s_{1})=\sum_{s:i\in s\cap i\notin s_{1}}f(s\mid s_ {1}) \tag{2}\]
be the conditional inclusion probability in \(s_{2}\) given \(s_{1}\), which is derived from the given \(pq\)-design defined by \(p(s)\) and \(q(s_{1}\mid s)\), i.e.
\[f(s_{1},s)=q(s_{1}\mid s)p(s)=f(s\mid s_{1})f(s_{1})\.\]
The estimator
\[\hat{D}_{R}(s_{1};\mu)=\sum_{i\in s_{2}}\big{(}\pi_{2i}^{-1}-1\big{)}e_{i}(\mu,s_{ 1})^{2}\]
is unbiased for \(D_{R}(s_{1};\mu)\) conditional on \(s_{1}\) in the sense that
\[E_{s}\{\hat{D}_{R}(s_{1};\mu)\mid s_{1}\}=E_{s}\{D_{R}(s_{1};\mu)\mid s_{1}\}. \tag{3}\]
To derive (3), let \(A_{2}=\sum_{i\in s_{2}}e_{i}(\mu,s_{1})^{2}=\sum_{i\in U\setminus s_{1}}e_{i}( \mu,s_{1})^{2}-D_{R}(s_{1};\mu)\). Both \(A_{2}\) and \(D_{R}(s_{1};\mu)\) vary with \(s_{2}=s\setminus s_{1}\) but their sum \(\sum_{i\in U\setminus s_{1}}e_{i}(\mu,s_{1})^{2}\) is fixed given \(s_{1}\). Thus, conditionally over all possible \(s\) given \(s_{1}\), we have
\[E_{s}\{\hat{D}_{R}(s_{1};\mu)\mid s_{1}\}=E_{s}\{\,\sum_{i\in s_{2}}\pi_{2i}^{ -1}e_{i}(\mu,s_{1})^{2}-A_{2}\mid s_{1}\}=E_{s}\{D_{R}(s_{1};\mu)\mid s_{1}\}\.\]
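The conditional unbiasedness (3) can be verified by simulation. Below is a minimal sketch under the twice-SRS design of Section 2.3, where conditionally on \(s_{1}\) the test set \(s_{2}\) is itself an SRS from \(U\setminus s_{1}\) and \(\pi_{2i}=n_{2}/(N-n_{1})\); the linear base learner is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, n1 = 400, 80, 56
n2 = n - n1

x_U = rng.normal(size=N)
y_U = 1.0 + 2.0 * x_U + rng.normal(size=N)           # fixed population values

s1 = rng.choice(N, size=n1, replace=False)           # a fixed training set
rest = np.setdiff1d(np.arange(N), s1)                # U \ s1
beta = np.polyfit(x_U[s1], y_U[s1], deg=1)           # mu(., s1): fit on s1 only
e2 = (np.polyval(beta, x_U[rest]) - y_U[rest]) ** 2  # e_i(mu, s1)^2, i not in s1

pi2 = n2 / (N - n1)                                  # conditional inclusion prob.
D_R, D_hat = [], []
for _ in range(20000):                               # repeated s2 given s1
    idx = rng.choice(rest.size, size=n2, replace=False)
    in_s2 = np.zeros(rest.size, dtype=bool)
    in_s2[idx] = True
    D_R.append(e2[~in_s2].sum())                     # true TSEP on R = U \ s
    D_hat.append(((1 / pi2 - 1) * e2[in_s2]).sum())  # the estimator of D_R
print(np.mean(D_hat), np.mean(D_R))                  # agree, confirming (3)
```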
### Subsampling Rao-Blackwellisation and risk
One can only observe the residuals of any \(\mu(x,s)\) that is trained directly on the full sample \(s\), whereas unbiased estimation of the risk (1) requires genuine prediction errors, in order to capitalise on the result (3). We shall therefore consider the predictor averaged over all possible sample-splits, i.e. \(s_{1}\sim q(s_{1}\mid s)\), which is defined as
\[\bar{\mu}(x,s)=E_{q}\{\mu(x,s_{1})\mid s\}. \tag{4}\]
Due to the Rao-Blackwell Theorem (Rao, 1945; Blackwell, 1947), we refer to the operation \(E_{q}(\cdot\mid s)\) as _subsampling Rao-Blackwellisation_ (SRB), since it is the conditional expectation over the subsampling distribution \(q(s_{1}\mid s)\), where the unordered \(s\) is the minimal sufficient statistic with respect to \(p(s)\).
We refer to \(\bar{\mu}(x,s)\) given by (4) as the SRB-predictor, associated with
\[D(s;\bar{\mu})=\sum_{i\in R}e_{i}(\bar{\mu})^{2}\qquad\text{and}\qquad e_{i}( \bar{\mu})=\bar{\mu}(x_{i},s)-y_{i}\.\]
For any \(i\in R\) with \(x_{i}=x\), the errors of \(\bar{\mu}(x,s)\) and \(\mu(x,s_{1})\) are related by
\[e_{i}(\mu,s_{1})=\mu(x,s_{1})-y_{i}=\{\mu(x,s_{1})-\bar{\mu}(x,s)\}+e_{i}( \bar{\mu})\.\]
Since \(e_{i}(\bar{\mu})\) does not depend on \(s_{1}\) and \(\bar{\mu}(x,s)=E_{q}\{\mu(x,s_{1})|s\}\) by definition (4), we have
\[e_{i}(\bar{\mu})^{2}=E_{q}\{e_{i}(\mu,s_{1})^{2}\mid s\}-E_{q}\{a_{i}(\mu,s_{1 })^{2}\mid s\} \tag{5}\]
where \(a_{i}(\mu,s_{1})=\mu(x,s_{1})-\bar{\mu}(x,s)\) and \(E_{q}\{a_{i}(\mu,s_{1})^{2}\mid s\}\) is the variance of \(\mu(x,s_{1})\) under SRB.
**Theorem 1**.: _For any given \(\mu(\cdot)\), an unbiased estimator of the risk \(\tau(\bar{\mu})\) of the
corresponding SRB-predictor, over \(s\sim p(s)\), is given by_
\[\hat{D}(s;\bar{\mu})=E_{q}\Big{(}\sum_{i\in s_{2}}(\pi_{2i}^{-1}-1)\left\{e_{i}( \mu,s_{1})^{2}-a_{i}(\mu,s_{1})^{2}\right\}\mid s\Big{)}\.\]
Proof.: By (5), we have
\[D(s;\bar{\mu})=E_{q}\{\sum_{i\in R}e_{i}(\mu,s_{1})^{2}\mid s\}-E_{q}\{\sum_{i \in R}a_{i}(\mu,s_{1})^{2}\mid s\}\]
For the first term, which can be rewritten as \(E_{q}\{D_{R}(s_{1};\mu)\mid s\}\), the estimator
\[\hat{E}_{q}\big{(}D_{R}(s_{1};\mu)\mid s\big{)}=E_{q}\big{(}\hat{D}_{R}(s_{1}; \mu)\mid s\big{)} \tag{6}\]
is unbiased over \(p(s)\) since, using (3), we have
\[E_{p}\{E_{q}\big{(}\hat{D}_{R}(s_{1};\mu)\mid s\big{)}\} =E_{s_{1}}\{E_{s}\big{(}\hat{D}_{R}(s_{1};\mu)\mid s_{1}\big{)}\}\] \[=E_{s_{1}}\{E_{s}\big{(}D_{R}(s_{1};\mu)\mid s_{1}\big{)}\}=E_{p} \left\{E_{q}\big{(}D_{R}(s_{1};\mu)\mid s\big{)}\right\}.\]
On replacing \(e_{i}(\mu,s_{1})^{2}\) by \(a_{i}(\mu,s_{1})^{2}\), one can carry through the same derivation for the second term of \(D(s;\bar{\mu})\) above, \(E_{q}\{\sum_{i\in R}a_{i}(\mu,s_{1})^{2}\mid s\}\). It follows that
\[E_{p}\{\hat{D}(s;\bar{\mu})\}=E_{p}\{D(s;\bar{\mu})\}=\tau(\bar{\mu})\.\]
This completes the proof.
### Notes on implementation
Exact SRB operation is not tractable analytically for algorithm-based \(\mu(\cdot)\), such as random forest or support vector machine. Nor is exact SRB numerically feasible if the number of possible \(s_{1}\) given \(s\) is large. In practice, one can use the Monte Carlo SRB predictor based on \(T\) subsamples, which is given as
\[\begin{cases}\tilde{\mu}(x_{i},s)=T^{-1}\ \sum_{t=1}^{T}\mu(x_{i},s_{1}^{(t)})& \text{if}\ \ i\in R\\ \mathring{\mu}(x_{i},s)=T_{i}^{-1}\sum_{t=1}^{T}\mathbb{I}\big{(}i\notin s_{1} ^{(t)}\big{)}\mu(x_{i},s_{1}^{(t)})&\text{if}\ \ i\in s\end{cases}\]
where \(s_{1}^{(t)}\) is the \(t\)-th subsample by \(q(s_{1}\mid s)\) and \(T_{i}=\sum_{t=1}^{T}\mathbb{I}(i\notin s_{1}^{(t)})\).
For the estimation of risk, let \(e_{i}(\mu,s_{1}^{(t)})=\mu(x_{i},s_{1}^{(t)})-y_{i}\) for any \(i\in s_{2}^{(t)}\) directly, whereas one can use \(a_{i}(\mu,s_{1}^{(t)})=\mu(x_{i},s_{1}^{(t)})-\mathring{\mu}(x_{i},s)\) as an out-of-bag approximation to \(\mu(x_{i},s_{1}^{(t)})-\bar{\mu}(x_{i},s)\), for any \(i\in s_{2}^{(t)}\), instead of \(\mu(x_{i},s_{1}^{(t)})-\tilde{\mu}(x_{i},s)\) that would be a residual-based alternative. The Monte Carlo risk estimator is given by
\[\tilde{D}(s;\bar{\mu})=\frac{1}{T}\ \sum_{t=1}^{T}\sum_{i\in s_{2}^{(t)}} \big{(}\pi_{2i}^{-1}-1\big{)}\{e_{i}(\mu,s_{1}^{(t)})^{2}-a_{i}(\mu,s_{1}^{(t) })^{2}\}. \tag{7}\]
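A Python sketch of the Monte Carlo SRB-predictor and the risk estimator (7), assuming the twice-SRS \(pq\)-design for which \(\pi_{2i}=n_{2}/(N-n_{1})\) (derived below); the scikit-learn linear base learner is an illustrative stand-in for any \(\mu(\cdot)\).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def mc_srb(x_s, y_s, x_R, N, T=50, frac=0.7, rng=None):
    """Monte Carlo SRB-predictor on R and risk estimator (7) under twice-SRS."""
    rng = rng or np.random.default_rng(0)
    n = len(y_s)
    n1 = int(round(frac * n))
    n2 = n - n1
    pi2 = n2 / (N - n1)                        # conditional inclusion probability
    preds_R = np.zeros((T, len(x_R)))          # mu(x_i, s1^(t)) for i in R
    preds_s = np.full((T, n), np.nan)          # mu(x_i, s1^(t)) for i in s2^(t)
    splits = []
    for t in range(T):
        s1 = rng.choice(n, size=n1, replace=False)
        s2 = np.setdiff1d(np.arange(n), s1)
        m = LinearRegression().fit(x_s[s1], y_s[s1])
        preds_R[t] = m.predict(x_R)
        preds_s[t, s2] = m.predict(x_s[s2])
        splits.append(s2)
    mu_R = preds_R.mean(axis=0)                # tilde-mu: SRB-predictor on R
    mu_oob = np.nanmean(preds_s, axis=0)       # ring-mu: out-of-bag values on s
    # (assumes T is large enough that every i in s falls in some s2^(t))
    D_hat = 0.0
    for t, s2 in enumerate(splits):
        e2 = (preds_s[t, s2] - y_s[s2]) ** 2     # squared prediction errors
        a2 = (preds_s[t, s2] - mu_oob[s2]) ** 2  # out-of-bag a_i^2 approximation
        D_hat += ((1 / pi2 - 1) * (e2 - a2)).sum()
    return mu_R, D_hat / T                     # predictions on R, estimator (7)
```

Here `x_s` and `x_R` are the feature matrices of the sample and out-of-sample units, and `N` is the population size.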
By definition, \(\pi_{2i}\) in (7) is the conditional \(s_{2}\)-inclusion probability given \(s_{1}\), which is given by (2) and requires \(f(s\mid s_{1})\) that is derived from \(q(s_{1}\mid s)p(s)\). However, \(p(s)\) is unknown for many unequal-probability sampling algorithms in practice, e.g. the cube method (Deville & Tille, 2004), although the inclusion probability \(\pi_{i}=\Pr(i\in s)\) is always known.
One can use instead another sampling probability of the \(pq\)-design. For any \(i\in U\), let its conditional test-set inclusion probability _given_\(i\notin s_{1}\) be
\[\phi_{2i}=\Pr(i\in s_{2}\mid i\notin s_{1})=\frac{\Pr(i\in s_{2},i\notin s_{1} )}{\Pr(i\notin s_{1})}=\frac{\pi_{i}\{1-\Pr(i\in s_{1}\mid i\in s)\}}{1-\pi_{i }\Pr(i\in s_{1}\mid i\in s)}. \tag{8}\]
Given \(\pi_{i}\), the probability \(\phi_{2i}\) can be calculated as long as \(\Pr(i\in s_{1}\mid i\in s)\) does not depend on \(i\) and can be specified regardless of the realised \(s\), such as simple random sampling (SRS) of \(s_{1}\) from \(s\) with or without replacement, or \(T\)-fold cross-validation. Since
\[\phi_{2i}=\frac{\sum_{s_{1}:i\notin s_{1}}\sum_{s:i\in s\cap i\notin s_{1}}f(s \mid s_{1})f(s_{1})}{\sum_{s_{1}:i\notin s_{1}}f(s_{1})}=\frac{\sum_{s_{1}:i \notin s_{1}}\pi_{2i}f(s_{1})}{\sum_{s_{1}:i\notin s_{1}}f(s_{1})}=E_{s_{1}}\{ \pi_{2i}\mid i\notin s_{1}\}\,\]
it is the conditional expectation of non-zero \(\pi_{2i}\), since \(\pi_{2i}=0\) iff \(i\in s_{1}\).
For instance, take the special case of SRS without replacement of \(s\) from \(U\) and SRS without replacement of \(s_{1}\) from \(s\), with sample sizes \(n=|s|\), \(n_{1}=|s_{1}|\) and \(n_{2}=|s_{2}|=n-n_{1}\). For any given \(s_{1}\) and \(i\notin s_{1}\), we have exactly
\[\pi_{2i}=\frac{\Pr(i\in s_{2})f(s_{1}\mid i\in s_{2})}{f(s_{1})}=\frac{\frac{n _{2}}{N}\binom{N-1}{n_{1}}^{-1}}{\binom{N}{n_{1}}^{-1}}=\frac{n_{2}}{N-n_{1}}= \phi_{2i}\]
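The identity \(\pi_{2i}=n_{2}/(N-n_{1})\) can be confirmed by brute-force enumeration of all \((s,s_{1})\) pairs for a tiny population; a sketch with arbitrary small sizes:

```python
from itertools import combinations

N, n, n1 = 6, 4, 2                # small enough to enumerate every (s, s1)
U = range(N)
i, s1_fixed = 0, (1, 2)           # a unit i outside the conditioning set s1

num = den = 0
for s in combinations(U, n):                   # SRS: every s equally likely
    for s1 in combinations(s, n1):             # SRS subsample of each s
        if set(s1) == set(s1_fixed):
            den += 1                           # mass of f(s | s1), up to a constant
            num += (i in s) and (i not in s1)  # the event {i in s2}
print(num / den, (n - n1) / (N - n1))          # both equal pi_2i = n2/(N - n1)
```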
### SRB-selector
Beyond using a single base learner \(\mu\) for the SRB-predictor (4), consider design-based ensemble learning by voting given an order-\(K\) heterogeneous ensemble \(\{\mu_{1},...,\mu_{K}\}\). Let \(D(s;\mu_{k})=\sum_{i\in R}\{\mu_{k}(x_{i},s)-y_{i}\}^{2}\). Denote by \(\Omega=\bigcup_{k=1}^{K}\Omega_{k}\) the partition of the sample space such that, for any \(s\in\Omega_{k}\) and \(l\neq k\), we have
\[D(s;\mu_{k})<D(s;\mu_{l})\]
where we discount the possibility of \(D(s;\mu_{k})=D(s;\mu_{l})\) merely to simplify the exposition. To select a single predictor for \(R\) based on a given sample \(s\), which minimises the risk (1), one would vote for \(\mu_{k}(x,s)\) iff \(s\in\Omega_{k}\). It follows that, for design-based ensemble learning by voting, the optimal selector is the perfect classifier of \(\mathbb{I}(s\in\Omega_{k})\).
It is a common approach to select a model by cross-validation and majority-vote, where cross-validation is based on \(s_{1}\sim q(s_{1}|\ s)\) and \(s_{2}=s\backslash s_{1}\). The expected selection result is given by the SRB-selector below. Given any \((s_{1},s_{2})\) by \(q(s_{1}|s)\)
and any \(k=1,...,K\), let
\[\delta_{k}(s_{1})=\begin{cases}1&\text{if}\ \ k=\arg\min_{l=1,...,K}\ \sum_{i\in s _{2}}\left\{\mu_{l}(x_{i},s_{1})-y_{i}\right\}^{2}\\ 0&\text{otherwise}\end{cases}\]
indicate which predictor has the least sum of squared errors in \(s_{2}\). The SRB-selector
\[\bar{\delta}_{k}(s)=\begin{cases}1&\text{if}\ \ k=\arg\max_{l=1,...,K}\ E_{q} \left\{\delta_{l}(s_{1})|s\right\}\\ 0&\text{otherwise}\end{cases} \tag{9}\]
is a classifier of \(\mathbb{I}(s\in\Omega_{k})\), i.e. the expected majority-vote over cross-validation.
Given the selection by (9), say, \(\mu_{k}\), one can reuse the same cross-validation samples \((s_{1},s_{2})\) to obtain the selected SRB-predictor \(\bar{\mu}_{k}\) and its associated risk.
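Computationally, the SRB-selector (9) amounts to counting, over repeated sample-splits, which ensemble member attains the least sum of squared errors on the test set. A minimal sketch with scikit-learn style estimators (an assumed interface, not the paper's software):

```python
import numpy as np
from sklearn.base import clone

def srb_select(models, x_s, y_s, T=50, frac=0.7, rng=None):
    """SRB-selector (9): expected majority vote over T random splits."""
    rng = rng or np.random.default_rng(0)
    n = len(y_s)
    n1 = int(round(frac * n))
    votes = np.zeros(len(models))
    for _ in range(T):
        s1 = rng.choice(n, size=n1, replace=False)
        s2 = np.setdiff1d(np.arange(n), s1)
        # delta_k(s1): which model has the least sum of squared errors on s2
        sse = [((clone(m).fit(x_s[s1], y_s[s1]).predict(x_s[s2]) - y_s[s2]) ** 2).sum()
               for m in models]
        votes[int(np.argmin(sse))] += 1
    return int(np.argmax(votes)), votes / T    # selected index and vote shares
```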
### Mixed SRB-predictor
For design-based averaging given an order-\(K\) ensemble \(\{\mu_{1},...,\mu_{K}\}\), let the _mixed_ SRB-_predictor_ be
\[\mu(x,s)=\sum_{k=1}^{K}w_{k}\bar{\mu}_{k}(x,s) \tag{10}\]
where \(\sum_{k=1}^{K}w_{k}=1\) and \(w_{k}>0\) for the mixing weights, \(k=1,\ldots,K\). We have
\[D(s;\mu)=\sum_{k\neq 1}w_{k}^{2}D_{kk}+\Big{(}1-\sum_{l\neq 1}w_{l}\Big{)}^{2}D_{11}+\sum_{k\neq 1}\sum_{l\neq k,1}w_{k}w_{l}D_{kl}+2\sum_{k\neq 1}w_{k}\Big{(}1-\sum_{l\neq 1}w_{l}\Big{)}D_{1k}\]
since \(w_{1}=1-\sum_{k\neq 1}w_{k}\), where \(D_{kk}=D(s;\bar{\mu}_{k})\) and \(D_{kl}=D(s;\bar{\mu}_{k},\bar{\mu}_{l})\) is given by
\[D_{kl}=\sum_{i\in R}e_{i}(\bar{\mu}_{k})e_{i}(\bar{\mu}_{l})=\sum_{i\in R}\Big{[}E_{q}\{e_{i}(\mu_{k},s_{1})e_{i}(\mu_{l},s_{1})\mid s\}-E_{q}\{a_{i}(\mu_{k},s_{1})a_{i}(\mu_{l},s_{1})\mid s\}\Big{]},\]
i.e. similarly to (5). An estimator of \(D_{kl}\) follows as a corollary of Theorem 1, as well as its Monte Carlo approximation similarly to (7).
The optimal mixing weights \(w_{k}\) minimise \(D(s;\mu)\). The estimated \(\hat{w}_{k}\) can be obtained via \(\hat{D}(s;\mu)\) given \(\hat{D}_{kl}\), for all \(k,l=1,...,K\). Substituting \(\hat{w}_{k}\) in (10) yields the mixed SRB-predictor. The associated risk (1) can be estimated by \(\hat{D}(s;\mu)\).
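Minimising the quadratic form \(\hat{D}(s;\mu)=\sum_{k,l}w_{k}w_{l}\hat{D}_{kl}\) subject to \(\sum_{k}w_{k}=1\) gives \(w\propto\hat{D}^{-1}1\) by Lagrange multipliers. A minimal sketch, assuming the estimated matrix \(\hat{D}=(\hat{D}_{kl})\) is symmetric and invertible, and ignoring the positivity constraint (in practice one may clip negative weights and renormalise):

```python
import numpy as np

def optimal_mix(D_hat):
    """Weights minimising w' D w subject to sum(w) = 1: w is proportional
    to D^{-1} 1; positivity is not enforced in this sketch."""
    w = np.linalg.solve(D_hat, np.ones(D_hat.shape[0]))
    return w / w.sum()
```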
Whilst the above approach aims at minimum risk (1), it may experience instability when the ensemble is not sufficiently heterogeneous. A robust approach to mixed ensemble prediction should automatically assign the same mixing weight to two component predictors that are identical to each other.
For any \(k=1,...,K\), write \(\hat{D}(s;\bar{\mu}_{k})=E_{q}(\hat{\tau}_{k}\mid s)\), similarly to \(\hat{D}(s;\bar{\mu})\) in Theorem 1. Regarding the risk of \(\bar{\mu}_{k}(x,s)\) defined by (1), we have
\[\tau(\bar{\mu}_{k})=E_{s_{1}}\{\tau(\bar{\mu}_{k}|s_{1})\}=E_{s_{1}}\left\{E_ {s}\big{(}\hat{\tau}_{k}\mid s_{1}\big{)}\right\}=E_{p}\left\{E_{q}\big{(}\hat {\tau}_{k}\mid s\big{)}\right\}\]
where \(\tau(\bar{\mu}_{k}|s_{1})\) is its conditional risk given \(s_{1}\). Let the SRB operation yield
\[w_{k}=E_{q}(\delta_{k}\mid s),\quad\delta_{k}=\begin{cases}1&\text{if}\ \ k=\arg \min\limits_{l=1,...,K}\hat{\tau}_{l}\\ 0&\text{otherwise}\end{cases}. \tag{11}\]
The corresponding mixed SRB-predictor (10) is robust against \(\mu_{k}\approx\mu_{l}\) for any \(k\neq l\). While the SRB-selector (9) is a binary classifier taking the majority-vote over all \((s_{1},s_{2})\), the robust mixing weight (11) is a proportion over all the votes.
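The robust weights (11) are simply vote shares over the sample-splits. A sketch, assuming the per-split risk estimates \(\hat{\tau}_{l}\) have been collected in a \(T\times K\) array:

```python
import numpy as np

def robust_mix(tau_hat):
    """Robust weights (11): tau_hat[t, l] holds the risk estimate of model l
    on split t; w_k is the share of splits in which model k wins."""
    winners = tau_hat.argmin(axis=1)
    return np.bincount(winners, minlength=tau_hat.shape[1]) / tau_hat.shape[0]
```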
## 3 Illustration
Simulations below provide a simple illustration of the design-based individual prediction approach and the potential pitfalls of the IID error model.
We generate \(B=200\) sets of \(y_{U}\) of population size \(N=2000\) in an ad hoc manner. For each \(y_{U}\), half of the values are generated by M1 below and half by M2, where \(x_{1}\sim N(0,1)\) and \(x_{2}\sim\text{Poisson}(5)\),
\[\text{(M1)}\quad y=x_{1}+0.5x_{2}+\epsilon,\quad\epsilon\sim \begin{cases}N(0,1)&\text{if}\ z=1\Leftrightarrow x_{2}<3\\ N(-2,1)&\text{if}\ z=2\Leftrightarrow 3\leq x_{2}<7\\ N(2,1)&\text{if}\ z=3\Leftrightarrow x_{2}\geq 7\end{cases}\] \[\text{(M2)}\quad y=0.5+1.5x_{1}+x_{2}+\epsilon,\quad\epsilon\sim z ^{2}+N(0,0.25),\quad z\sim N(0,1).\]
From each population we draw a sample of size \(n=200\) either by SRS without replacement or Poisson sampling. For Poisson sampling, we set \(\pi_{i}^{-1}\propto 1+1/\exp(\alpha+0.5y_{i})\) and \(\sum_{i\in U}\pi_{i}=n\), where \(\alpha\in\{1,-0.1,-1\}\) leads to the coefficient of variation of \(\pi_{i}\) over \(U\), denoted by \(\text{cv}_{\pi}\), to be about 15%, 30% and 45%, respectively. This illustrates a situation where sample selection may cause issues for model-based uncertainty assessment.
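A sketch of this data-generating mechanism and of the Poisson sampling design; the rescaling of \(\pi_{i}\) so that \(\sum_{i\in U}\pi_{i}=n\) is made explicit, and the random seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 2000, 200

x1 = rng.normal(size=N)
x2 = rng.poisson(5, size=N).astype(float)
y = np.empty(N)

h = N // 2                                       # M1 half: stratified noise mean
mu_eps = np.select([x2[:h] < 3, x2[:h] < 7], [0.0, -2.0], default=2.0)
y[:h] = x1[:h] + 0.5 * x2[:h] + rng.normal(mu_eps, 1.0)

zz = rng.normal(size=N - h)                      # M2 half: z^2 + N(0, 0.25) noise
y[h:] = 0.5 + 1.5 * x1[h:] + x2[h:] + zz**2 + rng.normal(0.0, 0.5, N - h)

def poisson_sample(alpha):
    sig = 1.0 / (1.0 + np.exp(-(alpha + 0.5 * y)))  # pi_i^-1 prop. to 1 + 1/e^(a+y/2)
    pi = n * sig / sig.sum()                        # rescale so that sum(pi_i) = n
    return np.flatnonzero(rng.random(N) < pi), pi   # independent inclusions

s, pi = poisson_sample(alpha=1.0)                   # cv_pi of about 15% per the text
print(len(s), pi.std() / pi.mean())
```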
Let an order-3 ensemble contain linear regression, random forest and support vector machine. Let the feature vector be \(x=(x_{1},x_{2})\) in all cases. We use a 70-30 random split for subsampling of \((s_{1},s_{2})\) and \(T=50\) for relevant Monte Carlo SRB operations such as (7). We obtain the SRB-predictor (4) selected by (9), and the two mixed SRB-predictors using weights that are either optimal for (10) or robust (11). Moreover, for each simulation \(b=1,...,B\), let the hypothetical SRB-selector (9) be
\[\bar{\delta}_{k}(s^{(b)})=\begin{cases}1&\text{if}\ k=\arg\min\limits_{l=1,...,K}\sum\limits_{i\in R^{(b)}}\left\{\tilde{\mu}_{l}(x_{i},s^{(b)})-y_{i} \right\}^{2}\\ 0&\text{otherwise}\end{cases}\]
which is based on the true total squared error of prediction of the Monte Carlo SRB-predictors; let the hypothetical optimal mixing weights \(w_{k}^{(b)}\) minimise the true total squared error of prediction of (10). This yields the hypothetical perfectly selected or optimally mixed predictors, respectively.
For each predictor, we estimate its standardised risk (1), \(\tau/|R|\), as described in Section 2, where we have \(\pi_{2i}\equiv n_{2}/(N-n_{1})\) under SRS given \(n_{1}=|s_{1}|\) and \(n_{2}=|s_{2}|\), and we use \(\phi_{2i}\) given by (8) instead of \(\pi_{2i}\) under Poisson sampling. Note that if \(\hat{\tau}\) is unbiased for \(\tau\) over repeated sampling from a given population, then it is also unbiased for the average of mean squared error of prediction, \(D/|R|\), over all the 200 populations. Note also that \(\hat{\tau}\) calculated for the two hypothetical predictors is not affected by the actual uncertainty associated with model selection or estimating the mixing weights.
For comparison, we consider two estimators of the mean squared error of prediction, both of which rely on the IID model of prediction errors; see e.g. James et al. (2013).
* Residual-based estimator \(\sum_{i\in s}\hat{e}_{i}^{2}/n\), where \(\hat{e}_{i}=\tilde{\mu}(x_{i},s)-y_{i}\) for a given predictor, with \(\tilde{\mu}(x_{i},s)=\frac{1}{T}\sum_{t=1}^{T}\mu(x_{i},s_{1}^{(t)})\).
* Given \(\mu\) either selected or mixed, let the cross-validation-based estimator be \[\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n_{2}}\ \sum_{i\in s_{2}^{(t)}}\{\mu(x_{i},s_{1}^{(t)})-y_{ i}\}^{2}\.\]
Table 1 provides a summary of the models and mixing weights used over the 200 simulations. Linear regression is the best single-model predictor for all the \((y_{U},x_{U},s)\) generated by the ad hoc mechanism. The hypothetical and actual selectors (or optimal mixing weights) are quite close to each other, whereas the linear regression model is somewhat weighted down for the robust mixing
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & & SRS & & & \multicolumn{3}{c}{PS (cv\({}_{\pi}\)=15\%)} \\ & & LR & RF & SVM & LR & RF & SVM \\ \cline{3-8} Hypothetical & Selected & 1 & 0 & 0 & 1 & 0 & 0 \\ & Mixed, optimal & 0.73 & 0.21 & 0.06 & 0.81 & 0.18 & 0.01 \\ \cline{2-8} Actual & Selected & 0.97 & 0.02 & 0.01 & 1 & 0 & 0 \\ & Mixed, optimal & 0.74 & 0.18 & 0.08 & 0.82 & 0.15 & 0.02 \\ & Mixed, robust & 0.68 & 0.20 & 0.12 & 0.75 & 0.18 & 0.07 \\ \hline & & & \multicolumn{3}{c}{PS (cv\({}_{\pi}\)=30\%)} & \multicolumn{3}{c}{PS (cv\({}_{\pi}\)=45\%)} \\ & & LR & RF & SVM & LR & RF & SVM \\ \cline{3-8} Hypothetical & Selected & 1 & 0 & 0 & 1 & 0 & 0 \\ & Mixed, optimal & 0.9 & 0.1 & 0 & 0.98 & 0.02 & 0 \\ \cline{2-8} Actual & Selected & 0.995 & 0 & 0.005 & 1 & 0 & 0 \\ & Mixed, optimal & 0.87 & 0.13 & 0 & 0.84 & 0.08 & 0 \\ & Mixed, robust & 0.78 & 0.19 & 0.03 & 0.80 & 0.18 & 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Proportion selected by (9), average optimal (10) or robust (11) mixing weights. SRS, without replacement; PS, Poisson sampling; LR, Linear regression; RF, Random forest; SVM, Support vector machine.
weights. In any case, we are not concerned with best possible prediction here, but rather valid estimation of the errors of any given predictor.
Table 2 shows the average of the mean squared error of prediction and its estimates over the 200 simulations. The design-based risk estimators for the two hypothetical predictors are essentially unbiased; the approximate \(\phi_{2i}\) under Poisson sampling have worked as well as the exact \(\pi_{2i}\) under SRS. Due to the missing uncertainty associated with the actual SRB-selector and mixing-weights, the risk estimators for the three actual predictors exhibit some underestimation, the worst of which is \(-6.6\%\) under Poisson sampling with \(\text{cv}_{\pi}\)=45%.
The cross-validation-based estimator of mean squared error of prediction is nearly unbiased when the sample is actually selected by SRS, where the out-of-bag squared prediction errors in the test sample \(s_{2}\) have the same mean as those in \(R\), conditional on \(s_{1}\) under the twice-SRS \(pq\)-design. This is reasonable, because the IID error model would be valid under SRS with replacement. However, the cross-validation-based estimator can become severely biased, if the IID model does not hold for the actual sample selection mechanism, as illustrated here for Poisson sampling as \(\text{cv}_{\pi}\) increases. Finally, residual-based estimation of mean squared error of prediction should be avoided because it causes underestimation generally, e.g. the bias is severe for the mixed predictors even under simple random sampling.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & SRS & & \multicolumn{3}{c}{PS (\(\text{cv}_{\pi}\)=15\%)} \\ MSEP \(D/|R|\) & Selected & Optimal & Robust & Selected & Optimal & Robust \\ Average, true & 8.432 & 8.367 & 8.380 & 8.566 & 8.558 & 8.578 \\ Design, hypothetical & 8.399 & 8.300 & - & 8.566 & 8.513 & - \\ Design, actual & 8.395 & 8.260 & 8.284 & 8.416 & 8.343 & 8.372 \\ Model, CV-based & 8.453 & 8.341 & 8.374 & 8.014 & 7.981 & 8.009 \\ Model, residual-based & 8.076 & 7.264 & 7.178 & 7.766 & 7.146 & 7.008 \\ \cline{2-7} & \multicolumn{3}{c}{PS (\(\text{cv}_{\pi}\)=30\%)} & \multicolumn{3}{c}{PS (\(\text{cv}_{\pi}\)=45\%)} \\ MSEP \(D/|R|\) & Selected & Optimal & Robust & Selected & Optimal & Robust \\ Average, true & 9.021 & 9.043 & 9.072 & 9.866 & 9.915 & 9.981 \\ Design, hypothetical & 9.013 & 8.994 & - & 9.866 & 9.862 & - \\ Design, actual & 8.767 & 8.711 & 8.747 & 9.288 & 9.257 & 9.316 \\ Model, CV-based & 7.566 & 7.563 & 7.590 & 6.992 & 6.997 & 7.035 \\ Model, residual-based & 7.323 & 6.861 & 6.638 & 6.776 & 6.509 & 6.187 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean squared error of prediction (\(D/|R|\)) and estimates, averaged over 200 simulations; predictor selected by majority-vote, or averaged by optimal or robust mixing weights. MSEP, Mean squared error of prediction; SRS, without replacement; PS, Poisson Sampling; CV-based, Cross-validation-based.
## 4 Final remarks
Although the IID model of the unobserved individual prediction errors is commonly applied for algorithm-based machine learning, it can be misleading in situations where the observations are not selected with equal probability.
We define and develop a design-based approach to individual prediction, which requires the sample for learning to be selected by a probability design. Ensemble learning by voting or averaging is aimed at the expected predictor by cross-validation. Valid inference of the associated risk (1) can be obtained with respect to the known probability design, regardless of whether the adopted predictor is true or not. In practice, stratified simple random sampling can be considered instead of individually varying sampling probabilities, which can help to reduce the uncertainty due to model-selection or mixing-weight estimation.
Combining sampling with machine learning can be relevant for many other settings. For instance, we have only considered ensemble learning by voting and averaging, but not other approaches such as gating or stacking. Moreover, training deep learning models or neural networks that take graphs, instead of vectors, as inputs may need to be based on subsets of data or on subgraphs, due to limited memory size or computing power; the corresponding sampling methods and design-based estimation require their own studies.
|
2308.05226 | Training neural networks with end-to-end optical backpropagation | Optics is an exciting route for the next generation of computing hardware for
machine learning, promising several orders of magnitude enhancement in both
computational speed and energy efficiency. However, to reach the full capacity
of an optical neural network it is necessary that the computing not only for
the inference, but also for the training be implemented optically. The primary
algorithm for training a neural network is backpropagation, in which the
calculation is performed in the order opposite to the information flow for
inference. While straightforward in a digital computer, optical implementation
of backpropagation has so far remained elusive, particularly because of the
conflicting requirements for the optical element that implements the nonlinear
activation function. In this work, we address this challenge for the first time
with a surprisingly simple and generic scheme. Saturable absorbers are employed
for the role of the activation units, and the required properties are achieved
through a pump-probe process, in which the forward propagating signal acts as
the pump and backward as the probe. Our approach is adaptable to various analog
platforms, materials, and network structures, and it demonstrates the
possibility of constructing neural networks entirely reliant on analog optical
processes for both training and inference tasks. | James Spall, Xianxin Guo, A. I. Lvovsky | 2023-08-09T21:11:26Z | http://arxiv.org/abs/2308.05226v1 | # Training neural networks with end-to-end optical backpropagation
###### Abstract
Optics is an exciting route for the next generation of computing hardware for machine learning, promising several orders of magnitude enhancement in both computational speed and energy efficiency. However, to reach the full capacity of an optical neural network it is necessary that the computing not only for the inference, but also for the training be implemented optically. The primary algorithm for training a neural network is backpropagation, in which the calculation is performed in the order opposite to the information flow for inference. While straightforward in a digital computer, optical implementation of backpropagation has so far remained elusive, particularly because of the conflicting requirements for the optical element that implements the nonlinear activation function. In this work, we address this challenge for the first time with a surprisingly simple and generic scheme. Saturable absorbers are employed for the role of the activation units, and the required properties are achieved through a pump-probe process, in which the forward propagating signal acts as the pump and backward as the probe. Our approach is adaptable to various analog platforms, materials, and network structures, and it demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
## I Introduction
Machine learning, one of the most revolutionary scientific breakthroughs in the past decades, has completely transformed the technology landscape, enabling innovative applications in fields ranging from natural language processing to drug discovery. As the demand for increasingly sophisticated machine learning models continues to escalate, there is a pressing need for faster and more energy-efficient computing solutions. In this context, analog computing has emerged as a promising alternative to traditional digital electronics [1; 2; 3; 4; 5; 6; 7]. A particularly exciting platform for analog neural networks (NNs) is optics, in which the interference and diffraction of light during propagation implements the linear part of every computational layer [8; 9].
Most of the current analog computing research and development is aimed at using the NN for inference [8; 10]. Training such NNs, on the other hand, is a challenge. This is because the backpropagation algorithm [11], the workhorse of training in digital NNs, requires the calculation to be performed in the order opposite to the information flow for inference, which is difficult to implement on an analog physical platform. Hence analog models are typically trained offline (_in silico_), on a separate digital simulator, after which the parameters are transferred to the analog hardware. In addition to being slow and inefficient, this approach can lead to errors arising from imperfect simulation and systematic errors ('reality gap'). In optics, for example, these effects may result from dust, aberrations, spurious reflections and inaccurate calibration [12].
To enable learning in analog NNs, different approaches have been proposed and realized [13]. Several groups explored various 'hardware-in-the loop' schemes, in which, while the backpropagation was done _in silico_, the signal acquired from the analog NN operating in the inference regime was incorporated into the calculation of the feedback for optimizing the NN parameters [14; 15; 16; 17; 18; 19; 20]. This has partially reduced the training error, but has not addressed the low speed and inefficiency of _in silico_ training.
Recently, several optical neural networks (ONNs) were reported that were trained online (_in situ_) using methods alternative to backpropagation. Bandyopadhyay _et al._ trained an ONN based on integrated photonic circuits using simultaneous perturbation stochastic approximation, i.e. randomly perturbing all ONN parameters and using the observed change of the loss function to approximate its gradient [21]. Filipovich _et al._ applied direct feedback alignment, wherein the error calculated at the output of the ONN is used to update the parameters of all layers [22]. However both these methods are inferior to backpropagation as they take much longer to converge, especially for sufficiently deep ONNs [23].
An optical implementation of the backpropagation algorithm was proposed by Hughes _et al._[24], and recently demonstrated experimentally [25], showing that the training methods of current digital NNs can be applied to analog hardware. However, their scheme omitted a crucial step for optical implementation: backpropagation through nonlinear activation layers. Their method requires digital nonlinear activation and multiple optoelectronic inter-conversions inside the network, complicating the training process. Furthermore, the method applies only to a specific type of ONN that uses interferometer meshes for the linear layer, and does not generalise to other ONN architectures. Complete implementation of the backpropagation algorithm in optics, through
all the linear and nonlinear layers, that can generalise to many ONN systems, remains a highly challenging goal.
In this work, we address this long-standing challenge and present the first complete optical implementation of the backpropagation algorithm in a two-layer ONN. The gradients of the loss function with respect to the NN parameters are calculated by light travelling through the system in the reverse direction. The main difficulty of all-optical training lies in the requirement that the nonlinear optical element used for the activation function needs to exhibit different properties for the forward and backward propagating signals. Fortunately, as demonstrated in our earlier theoretical work [26] and explained below, there does exist a group of nonlinear phenomena, which exhibits the required set of properties with sufficient precision.
We optically train our ONNs to perform classification tasks, and our results surpass those trained with a conventional _in silico_ method. Our optical training scheme can be further generalized to other platforms using different linear layers and analog activation functions, making it an ideal tool for exploring the vast potential of analog computing for training neural networks.
## II Optical training algorithm
We consider a multilayer perceptron -- a common type of NN which consists of multiple linear layers that establish weighted connections between neurons, inter-laid by activation functions that enable the network to learn complex nonlinear functions. To train the NN, one presents it with a training set of labeled examples and iteratively adjusts the NN parameters (weights and biases) to find the correct mapping between the inputs and outputs.
The training steps are summarised in Fig. 1(d). The weight matrices, denoted \(W^{(i)}\) for the \(i\)-th layer, are first initialized with random values. Each iteration of training starts by entering the input examples from the training set as input vectors \(x=a^{(0)}\) into the NN, and forward propagating through all of its layers. In every layer \(i\), one performs a matrix-vector multiplication (MVM) of the weight matrix and the activation vector,
\[z^{(i)}=W^{(i)}\times a^{(i-1)}, \tag{1}\]
followed by element-wise application of the activation function \(g(\cdot)\) to the resulting vector:
\[a^{(i)}=g\left(z^{(i)}\right). \tag{2}\]
The output \(y=a^{(L)}\) of an \(L\)-layer NN allows one to compute the loss function \(\mathcal{L}(y,t)\) that determines the difference between the network predictions \(y\) and ground truth labels \(t\) from the training set. The backpropagation algorithm helps calculating the gradient of this loss function with respect to all the parameters in the network, through what is essentially an application of the chain rule of calculus. The network parameters are then updated using these gradients and optimization algorithms such as stochastic gradient descent. The training process is repeated until convergence.
The gradients we require are given by [11] as
\[\frac{\partial\mathcal{L}}{\partial W^{(i)}}=\delta^{(i)}\otimes a^{(i-1)}, \tag{3}\]
where \(\delta^{(i)}\) is referred to as the "error vector" at the \(i\)th layer and \(\otimes\) denotes the outer product. The error vector is calculated as
\[\delta^{(i-1)}=\left({W^{(i)}}^{T}\times\delta^{(i)}\right)g^{\prime}\left(z^{ (i-1)}\right), \tag{4}\]
going through layers in reverse sequence. The expression for the error vector \(\delta^{(L)}\) in the last layer depends on the choice of the loss function, but for the common loss functions of mean-squared error and cross-entropy (with an appropriate choice of activation function) it is simply the difference between the NN output and the label: \(\delta^{(L)}=y-t\).
Therefore, to calculate the gradients at each layer we need one vector from the forward pass through the network (the activations) and one vector from the backward pass (the errors).
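In digital form, the forward pass (1)-(2) and the backward pass (3)-(4) for this two-layer network fit in a few lines. The SA-like activation below is an illustrative stand-in for the measured transmission curve of the vapour cell, not the exact physical model:

```python
import numpy as np

A0 = 7.3  # optical depth, matching the fit reported later in the paper

def sa(z):
    # Monotone saturable-absorber-like amplitude response (illustrative).
    return z * np.exp(-A0 / (2 * (1 + z**2)))

def sa_grad(z, h=1e-6):
    return (sa(z + h) - sa(z - h)) / (2 * h)   # central-difference derivative

def forward(x, W1, W2):
    z1 = W1 @ x                  # Eq. (1): first-layer MVM
    a1 = sa(z1)                  # Eq. (2): activation
    z2 = W2 @ a1                 # Eq. (1): second-layer MVM
    return z1, a1, z2

def grads(x, t, W1, W2):
    z1, a1, z2 = forward(x, W1, W2)
    e = np.exp(z2 - z2.max())
    y = e / e.sum()                       # softmax output probabilities
    d2 = y - t                            # delta^(L) for cross-entropy loss
    d1 = (W2.T @ d2) * sa_grad(z1)        # Eq. (4): transpose MVM, then g'
    return np.outer(d2, a1), np.outer(d1, x)   # Eq. (3): dL/dW2 and dL/dW1
```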
We see from Eq. (4) that the error backpropagation consists of two operations. First we must perform an MVM, mirroring the feedforward linear operation (1). In an ONN, this can be done by light that propagates backwards through the same linear optical arrangement [27]. The second operation consists in modulation of the MVM output by the activation function derivative and poses a significant challenge for optical implementation. This is because most optical media exhibit similar properties for forward and backward propagation. On the other hand, our application requires an optical element that is (1) nonlinear in the forward direction, (2) linear in the backward direction and (3) modulates the backward light amplitude by the derivative of the forward activation function.
We have solved this challenge with our optical backpropagation protocol, which calculates the right-hand side of Eq. (4) entirely optically with no opto-electronic conversion or digital processing. The first component of our solution is the observation that many optical media exhibit nonlinear properties for strong optical fields, but are approximately linear for weak fields. Hence, we can satisfy the conditions (1) and (2) by maintaining the back-injected beam at a much lower intensity level than the forward. Furthermore, there exists a set of nonlinear phenomena that also addresses the requirement (3). An example is saturable absorption (SA). The transmissivity of SA medium in the backward direction turns out to approximate the derivative of its intensity-dependent transmission in the forward direction \(g^{\prime}\left(z^{(i-1)}\right)\). This approximation is valid up to a certain numerical factor and only for small values of \(z^{(i-1)}\); however, as shown in
our prior work [26], this is sufficient for successful training.
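To make the approximation explicit, consider an idealised thin two-level SA with amplitude transmissivity \(\exp\{-\alpha_{0}/[2(1+z^{2})]\}\), the intensity being measured in units of the saturation intensity. The forward activation is then \(g(z)=z\exp\{-\alpha_{0}/[2(1+z^{2})]\}\), and \(g^{\prime}(z)\) equals the weak-probe transmissivity times the factor \(1+\alpha_{0}z^{2}/(1+z^{2})^{2}\), which tends to 1 for small \(z\). A numerical sketch (this model is illustrative, not a fit to the experiment):

```python
import numpy as np

a0 = 7.3                                   # optical depth, as fitted in the paper
z = np.linspace(0.0, 1.0, 201)             # forward (pump) field amplitude

g = z * np.exp(-a0 / (2 * (1 + z**2)))     # thin-SA forward amplitude output
g_prime = np.gradient(g, z)                # derivative g'(z), numerically
probe = np.exp(-a0 / (2 * (1 + z**2)))     # weak-probe amplitude transmissivity

# probe(z) and g'(z) agree up to the factor 1 + a0*z^2/(1+z^2)^2, hence
# nearly coincide for small z, which is what the training relies on.
print(np.max(np.abs(g_prime - probe)[z < 0.2]))
```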
## III Multilayer ONN
Our ONN as shown in Fig. 1(a,b) is implemented in a free-space tabletop setting. The neuron values are encoded in the transverse spatial structure of the propagating light field amplitude. Spatial light modulators are used to encode the input vectors and weight matrices. The NN consists of two fully-connected linear layers implemented with optical MVM following our previously demonstrated experimental design [28].
This design has a few characteristics that make it suitable for use in a deep neural network.
Figure 1: **Illustration of optical training.****(a)** Network architecture of the ONN used in this work, which consists of two fully-connected linear layers and a hidden layer. **(b)** Simplified experimental schematic of the ONN. Each linear layer performs optical MVM with a cylindrical lens and a spatial light modulator (SLM) that encodes the weight matrix. Hidden layer activations are computed using SA in an atomic vapour cell. Light propagates in both directions during optical training. **(c)** Working principle of SA activation. The forward beam (pump) is shown by solid red arrows, backward (probe) by purple wavy arrows. The probe transmission depends on the strength of the pump and approximates the gradient of the SA function. For high forward intensity (top panel), a large portion of the atoms are excited to the upper level. Stimulated emission produced by these atoms largely compensates the absorption due to the atoms in the ground level. For weak pump (bottom panel), the excited level population is low and the absorption is significant. **(d)** Neural network training procedure. **(e)** Optical training procedure. Both signal and error propagation in the two directions are fully implemented optically. Loss function calculation and parameter update are left for electronics without interrupting the optical information flow.
First, it is reconfigurable, so that both neuron values and network weights can be arbitrarily changed. Second, multiple MVM blocks can be cascaded to form a multilayer network, as the output of one MVM naturally forms the input of the next MVM. Using a coherent beam also allows us to encode both positive- and negative-valued weights. Finally, the MVM works in both directions, meaning the inputs and outputs are reversible, which is critical for the implementation of our optical backpropagation algorithm. The hidden layer activation between the two layers is implemented optically by means of SA in a rubidium atomic vapour cell [Fig. 1(c)].
## Results
### Linear layers
We first set up the linear layers that serve as the backbone of our ONN, and we make sure that they work accurately and simultaneously in both directions -- a highly challenging task that, to the best of our knowledge, has never been achieved before.
This involves three MVMs: first layer in the forward direction (MVM-1), second layer in both forward (MVM-2a) and backward (MVM-2b) directions. To characterise these MVMs, we apply random vectors and matrices and simultaneously measure the output of all three: the results for 300 random MVMs are presented in Fig. 2(a). To quantify the MVM performance, we define the signal-to-noise ratio (SNR, see Methods for details). As illustrated by the histograms, MVM-1 has the greatest SNR of 14.9, and MVM-2a has a lower SNR of 7.1, as a result of noise accumulation from both layers and the reduced signal range. MVM-2b has a slightly lower SNR of 6.7, because the optical system is optimized for the forward direction. Comparing these experimental results with a simple numerical model, we estimate 1.3% multiplicative noise in our MVMs, which is small enough not to degrade the ONN performance [12].
### Nonlinearity
With the linear layers fully characterized, we now measure the response of the activation units in both directions. With the vapor cell placed in the setup and the laser tuned to resonance with the atomic transition, we pass the output of MVM-1 through the vapor cell in the forward direction. The response as presented in Fig. 2(b) shows strong nonlinearity. We fit the data with the theoretically expected SA transmissivity (see Supplementary for details), thereby finding the optical depth to be \(\alpha_{0}=7.3\), which is sufficient to achieve high accuracy in ONNs [26]. The optical depth and the associated nonlinearity can be easily tuned to fit different network requirements by controlling the temperature of the vapor cell. In the backward direction, we pass weak probe beams through the vapor cell and measure the output. Both the forward and backward beams are simultaneously present in the vapor cell during the measurement.
In Fig. 2(c) we measure the effect of the forward amplitude \(z^{(1)}\) on the transmission of the backward beam through the SA. The theoretical fit for these data -- the expected backward transmissivity calculated from the physical properties of SA -- is shown by the red curve. For comparison, the orange curve shows the rescaled exact derivative \(g^{\prime}\left(z^{(1)}\right)\) of the SA function, which is the dependence required for the calculation (4) of the training signal. Although the two curves are not identical, they both match the experimental data for a broad range of neuron values generated from the random MVM, hence the setting is appropriate for training.
### All optical classification
After setting up the two-layer ONN, we perform end-to-end optical training and inference on classification tasks: distinguishing two classes of data points on a two-dimensional plane (Fig. 3). We implement a fully-connected feed-forward architecture, with three input neurons, five hidden layer neurons and two output neurons (Fig. 1). Two input neurons are used to encode the input data point coordinates \((x_{1},x_{2})\), and the third input neuron of constant value is used to set the first layer bias. The class label is encoded by a 'one-hot' vector \((0,1)\) or \((1,0)\), and we use categorical cross-entropy as the loss function.
We optically train the ONN on three 400-element datasets with different nonlinear boundary shapes, which we refer to as 'rings', 'XOR' and 'arches' [Fig. 3(a)]. Another 200 similar elements of each set are used for validation, i.e. to measure the loss and accuracy after each epoch of training. The test set consists of a uniform grid of equally-spaced \((x_{1},x_{2})\) values. The optical inference results for the test set are displayed in Fig. 3(a) by light purple and orange backgrounds, whereas the blue circles and orange triangles show the training set elements.
For all three datasets, each epoch consists of 20 mini-batches, with a mini-batch size of 20, and we use the Adam optimizer to update the weights and biases from the gradients. We tune hyperparameters such as the learning rate and the number of epochs to maximise network performance. Table 1 summarises the network architecture and hyperparameters used for each dataset.
Figure 3(b) shows the optical training performance on the 'rings' dataset. We perform five repeated training runs, and plot the loss and accuracy for the validation set after each epoch of training. To visualise how the network is learning the boundary between the two classes, we also run a test dataset after each epoch. Examples of the network output after 1, 3, 6 and 10 epochs are shown. We see that the ONN quickly learns the nonlinear boundary and gradually improves the accuracy to 100%. This indicates a strong optical nonlinearity in the system and
a good gradient approximation in optical backpropagation. Details of the training procedure are provided in the following section.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Dataset & \begin{tabular}{c} Input \\ neurons \\ \end{tabular} & \begin{tabular}{c} Hidden \\ neurons \\ \end{tabular} & \begin{tabular}{c} Output \\ neurons \\ \end{tabular} & \begin{tabular}{c} Learning \\ rate \\ \end{tabular} & Epochs &
\begin{tabular}{c} Batches \\ per epoch \\ \end{tabular} & Batch size \\ \hline Rings & & & & 0.01 & 16 & & \\ \hline XOR & 2 & 5 & 2 & 0.005 & 30 & 20 & 20 \\ \hline Arches & & & & 0.01 & 25 & & \\ \hline \end{tabular}
\end{table}
Table 1: **Summary of network architecture and hyperparameters used in optical and digital training.**
Figure 2: **Multi-layer ONN characterisation.****(a)** Scatter plots of measured against theory results for MVM-1 (first layer forwards), MVM-2a (second layer forwards) and MVM-2b (second layer backwards). All three MVM results are taken simultaneously. Histograms of the signal and noise error for each MVM are displayed underneath. **(b)** First-layer activations \(a_{\text{meas}}^{(1)}\) measured after the vapor cell, plotted against the theoretically expected linear MVM-1 output \(z_{\text{theory}}^{(1)}\) before the cell. The green line is a best fit curve of the theoretical SA nonlinear function. **(c)** The amplitude of a weak constant probe passed backwards through the vapor cell as a function of the pump \(z_{\text{theory}}^{(1)}\), with constant input probe. Measurements for both forward and backward beams are taken simultaneously.
Methods section, and results for the other two datasets in Supplementary Note 3.
To better understand the optical training process, we explore the evolution of the output neuron and error vector values in Fig. 3(c). First, we plot the mini-batch mean value of each output neuron, \(\bar{z}_{j}^{(2)}\), for the inputs from the two classes separately in the upper and lower panels, over the course of the training iterations. We see the output neuron values diverge in opposite ways for the two classes, such that the inputs can be distinguished and correctly classified.
Second, we similarly plot the evolution of the mini-batch mean output error, \(\bar{\delta}^{(2)}\), for each neuron. This is calculated as the difference between the network output vector \(a^{(2)}\) and the ground truth label \(t\), averaged over each mini-batch.
### Optical training vs _in-silico_ training
To demonstrate the optical training advantage, we perform _in-silico_ training of our ONN as a comparison. We digitally model our system with a neural network of the equivalent architecture, including identical learning rate, number of epochs and all other hyperparameters. The hidden layer nonlinearity and the associated gradient are given by the best fit curve and theoretical probe response of Fig. 2(b). The trained weights are subsequently used for inference with our ONN. The top and bottom rows in Fig. 3(a) plot the network output of the test boundary set, after the system has been trained optically and digitally, respectively, for all three datasets. In all cases, the optically trained network achieves almost perfect accuracy, whilst the digitally trained network is clearly not optimised, with the network prediction not matching the data. This is further evidence for the already well-documented
Figure 3: **Optical training performance.****(a)** Decision boundary charts of the ONN inference output for three different classification tasks, after the ONN has been trained optically (top) or _in-silico_ (bottom). **(b)** Learning curves of the ONN for classification of the ‘rings’ dataset, showing mean and standard deviation of the validation loss and accuracy averaged over 5 repeated training runs. Shown above are decision boundary charts of the ONN output for the test set, after different epochs. **(c)** Evolution of output neuron values, and of output errors, for the training set inputs of the two classes.
advantages of hardware-in-the-loop training schemes.
## Discussion
Our optical training scheme is surprisingly simple and effective. It adds minimal computational overhead to the network, since it does not require _a priori_ simulation or an intricate mapping of network parameters to physical device settings. It also imposes minimal hardware complexity on the system, as it requires only a few additional beam splitters and detectors to measure the activation and error values for parameter updates.
Our scheme can be generalised and applied to many other analog neural networks with different physical implementations of the linear and nonlinear layers. We list a few examples in Table 2. Common optical linear operations include MVM, diffraction and convolution. Compatible optical MVM examples include our free-space multiplier and photonic crossbar array [29], as they are both bidirectional, in the sense that an optical field propagating backwards through these arrangements gets multiplied by the transpose of the weight matrix. Diffraction naturally works in both directions, hence diffractive neural networks constructed using different programmable amplitude and phase masks also satisfy the requirements [30]. Optical convolution, achieved with the Fourier transform by means of a lens, and mean pooling, achieved through an optical low pass filter, also work in both directions. Therefore, a convolutional neural network can be optically trained as well. A detailed analysis of the generalization to these linear layers can be found in Supplementary Note 4.
Regarding the generalization to other nonlinearity choices, the critical requirement is the ability to acquire gradients during backpropagation. Our pump-probe method is compatible with multiple types of optical nonlinearities: saturable absorption, saturable gain and intensity-dependent phase modulation [31]. Using saturable gain as the nonlinearity offers the added advantage of loss compensation in a deep network, and using intensity-dependent phase modulation nonlinearity, such as self-lensing, allows one to build complex-valued optical neural networks with potentially stronger learning capabilities [32; 26; 33].
In our ONN training implementation, some computational operations remain digital, specifically the calculation of the last layer error \(\delta^{(2)}\) and the outer product (3) between the activation and error vectors. Both these operations can be done optically if needed [26]. The error vector can in many cases be obtained by subtracting the ONN output from the label vector by way of destructive interference [12]. Interference can also be utilized to compute the outer product by fanning out the two vectors and overlapping them in a criss-cross fashion onto a pixelated photosensor.
Our optically trained ONN can be scaled up to improve computing performance. Previously, using a similar experimental setup, we have demonstrated an ONN with 100 neurons per layer and high multiplier accuracy [12], and 1000 neurons can be supported by commercial SLMs or liquid crystal displays. Optical encoders and detectors can work at speeds up to 100 GHz using off-the-shelf components, enabling ultra-fast end-to-end optical computing at low latency. Therefore computational speeds up to \(10^{17}\) operations per second are within reach, and our optical training method is compatible with this productivity rate.
## Methods
### Multilayer ONN
To construct the multi-layer ONN, we connect two optical multipliers in series. For the first layer (MVM-1), the input neuron vector \(x\) is encoded into the field amplitude of a coherent beam using a digital micromirror device (DMD), DMD-1. This is a binary amplitude modulator, and every physical pixel is a micromirror that can reflect at two angles representing 0 or 1. By grouping 128 physical pixels as a block, we are able to represent 7-bit positive-valued inputs on DMD-1, with the input value proportional to the number of binary pixels 'turned on' in each block.
Since MVM requires performing dot products of the input vector with every row of the matrix, we create multiple copies of the input vector on DMD-1, and image them onto the \(W^{(1)}\) matrix mask -- a phase-only liquid-crystal spatial light modulator (LC-SLM), SLM-1 -- for element-wise multiplication. The MVM-1 result \(z^{(1)}\) is obtained by summing the element-wise products using a cylindrical lens (first optical 'fan-in'), and passing the beam through a narrow adjustable slit to select the zero spatial frequency component. The weights in our ONN are real-valued, encoded by LC-SLMs with 8-bit resolution using a phase grating modulation method that enables arbitrary and accurate field control [34].
The beam next passes through a rubidium vapor cell to apply the activation function, such that immediately after the cell the beam encodes the hidden layer activation vector, \(a^{(1)}\). The beam continues to propagate and becomes the input for the second linear layer. Another cylindrical lens is used to expand the beam (first optical 'fan-out'), before modulation by the second weight matrix mask SLM-2. Finally, summation by a third cylindrical lens (second optical 'fan-in') completes the second MVM in the forward direction (MVM-2a), and the final beam profile encodes \(z^{(2)}\).
To read out the activation vectors required for the optical training, we insert beam splitters at the output of each MVM to tap-off a small portion of the beam. The real-valued vectors are measured by high-speed cameras, using coherent detection techniques detailed in Supplementary Note 2.
At the output layer of the ONN we use a digital softmax function to convert the output values into probabilities, and calculate the loss function and output error vector, which initiates the optical backpropagation.
### Optical backpropagation
The output error vector, \(\delta^{(2)}\) is encoded in the backward beam by using DMD-2 to modulate a beam obtained from the same laser as the forward propagating beam. The backward beam is introduced to the system through one of the arms of the beam splitter placed at the output of MVM-2a, and carefully aligned so as to overlap with the forward beam. SLM-2 performs element-wise multiplication by the transpose of the second weight matrix.
The cylindrical lens that performs 'fan-out' for the forward beam, performs 'fan-in' for the backward beam into a slit, completing the second layer backwards MVM (MVM-2b). Passing through the vapor cell modulates the beam by the derivative of the activation function, after which the beam encodes the hidden layer error vector \(\delta^{(1)}\). Another beam splitter and camera are used to tap off the backward beam and measure the result.
In practice, two halves of the same DMD act as DMD-1 and DMD-2, and a portion of SLM-1 is used to encode the sign of the error vector. A full experiment diagram is provided in Supplementary Note 1.
Each training iteration consists of optically measuring all of \(a^{(1)}\), \(z^{(2)}\) and \(\delta^{(1)}\). These vectors are used, along with the inputs \(x=a^{(0)}\), to calculate the weight gradients according to Eq. (3) and weight updates, which are then applied to the LC-SLMs. This process is repeated for all the mini-batches until the network converges.
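A sketch of the digital bookkeeping in one such iteration; `forward_pass` and `backward_pass` are hypothetical stand-ins for the optical hardware and cameras, and plain SGD is shown in place of the Adam optimiser used in the experiment:

```python
import numpy as np

def training_step(x, t, W1, W2, lr, forward_pass, backward_pass):
    a1, z2 = forward_pass(x)           # cameras read a^(1) and z^(2)
    e = np.exp(z2 - z2.max())
    y = e / e.sum()                    # digital softmax at the output layer
    d2 = y - t                         # output error delta^(2) = y - t
    d1 = backward_pass(d2)             # inject d2 backwards; camera reads delta^(1)
    W2 -= lr * np.outer(d2, a1)        # Eq. (3): dL/dW2 = delta^(2) (x) a^(1)
    W1 -= lr * np.outer(d1, x)         # Eq. (3): dL/dW1 = delta^(1) (x) a^(0)
    return W1, W2
```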
### SA activation
The cell with atomic rubidium vapor is heated to 70 degrees by a simple heating jacket and temperature controller. The laser wavelength is locked to the \(D_{2}\) transition at 780 nm.
The power of the forward propagating beam is adjusted to ensure the beam at the vapor cell is intense enough to saturate the absorption, whilst the maximum power of the backward propagating beam is attenuated to approximately 2% of the maximum forward beam power, to ensure a linear response when passing through the cell in the presence of a uniform pump.
In the experiment, the backward probe response does not match perfectly with the simple two-level atomic model, due to two factors.
First, the probe does not undergo 100% absorption even with the pump turned off. Second, a strong pump beam causes the atoms to fluoresce in all directions, including along the backward probe path. Therefore, the backward signal has a background offset proportional to the forward signal. To compensate for these issues, three measurements are taken to determine the probe response \(\delta^{(1)}\) for each training iteration: pump only; probe only; and both pump and probe. In this way, the background terms due to pump fluorescence and unabsorbed probe can be negated.
**Acknowledgements** This work is supported by Innovate UK Smart Grant 10043476. X.G. acknowledges support from the Royal Commission for the Exhibition of 1851 Research Fellowship.
**Author contributions** X.G. and A.L. conceived the experiment. J.S. carried out the experiment and performed the data analysis. All the authors jointly prepared the manuscript. This work was done under the supervision of A.L.
**Competing interests** The authors declare no competing interests in this work.
|
2301.11758 | Robust statistical properties of T1 transitions in confluent cell
tissues | Large-scale tissue deformation which is fundamental to tissue development
hinges on local cellular rearrangements, such as T1 transitions. In the realm
of the multi-phase field model, we analyse the statistical and dynamical
properties of T1 transitions in a confluent tissue. We identify an energy
profile that is robust to changes in several model parameters. It is
characterized by an asymmetric profile with a fast increase in energy before
the T1 transition and a sudden drop after the T1 transition, followed by a slow
relaxation. The latter being a signature of the fluidity of the cell tissue. We
show that T1 transitions are sources of localised large deformation of the
cells undergoing the neighbour exchange and induce other T1 transitions in the
nearby cells through a chaining of events that propagate local cell deformation
to large scale tissue flows. | Harish P Jain, Axel Voigt, Luiza Angheluta | 2023-01-27T14:55:26Z | http://arxiv.org/abs/2301.11758v1 | # Robust statistical properties of T1 transitions in confluent cell tissues
###### Abstract
Large-scale tissue deformation which is fundamental to tissue development hinges on local cellular rearrangements, such as T1 transitions. In the realm of the multi-phase field model, we analyse the statistical and dynamical properties of T1 transitions in a confluent tissue. We identify an energy profile that is robust to changes in several model parameters. It is characterized by an asymmetric profile with a fast increase in energy before the T1 transition and a sudden drop after the T1 transition, followed by a slow relaxation. The latter being a signature of the fluidity of the cell tissue. We show that T1 transitions are sources of localised large deformation of the cells undergoing the neighbour exchange and induce other T1 transitions in the nearby cells through a chaining of events that propagate local cell deformation to large scale tissue flows.
## 1 Introduction
Collective motion of cells is essential to several processes including development of an embryo, tissue morphogenesis, wound healing, homeostasis and cancer metastasis [1, 2, 3]. These biological processes are highly complex and orchestrate mechanical, chemical and biochemical interactions across multiple scales [4, 5, 6, 7]. Through the interplay between directed motion, neighbour alignment and mechanical interactions, cell tissues exhibit emergent structures and dynamics that are crucial for their biological function. A fundamental underlying process for emergent large-scale behavior is the topological rearrangement of neighbouring cells, also known as T1 transition. It is a local, dissipative event that leads to remodelling of the tissue architecture and influences the large-scale flow properties of cell tissues that affects tissue homeostasis and epithelial morphogenesis [8, 9, 10].
In confluent tissues, the tissue architecture can change in several ways. To isolate the tissue dynamics driven by spontaneous T1 transitions, we consider an idealised situation where apoptosis and cell division are neglected, cells have a constant volume, identical mechanical properties, and their total number is fixed. During a T1 transition, typically two neighbouring cells move apart, while two of their neighbours come towards each other and make contact as illustrated in Figure 1. The average number of neighbours before and after the T1 transition is invariant. Through T1 transitions, cells undergo large deformations and shape changes, and encounter an energy barrier that they have to overcome through their activity [11, 12]. Although there are several competing scenarios for the mechanical-chemical-biological feedback involved in a T1 transition, our understanding of these coupled processes remains elusive [13].
T1 transitions are common features also in granular matter and foams under external forcing [14, 15, 16]. The energy relaxation after a T1 transition has been studied in foams by measuring the length of T1 junctions [17]. This concept was adapted for active tissues [18], where the length of the T1 junction before and after a T1 transition has been measured. During a T1 transition in dry foams [15], the cells form a rosette where four or more edges meet. It has been shown that a junction is energetically stable for three edges incident at 120 degrees. So, while undergoing a T1 transition, the cells pass from one metastable state to another via an unstable state comprising a rosette. In confluent tissues, extracellular spaces (gaps) change this process [19]. Rosettes and tri-junctions can no longer be defined by the number of edges meeting but are placed where gaps are formed.
Also, various mathematical models have been used to study different facets of T1 transitions in foams and tissues [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. They are mainly based on vertex models, which approximate cells by polygonal shapes. However, cell shape plays a crucial role in T1 transitions and the ability to accurately describe complex cell shapes, beyond polygons, might be advantageous [24, 19, 25]. We consider a multi-phase field model that allows for spontaneous T1 transitions while capturing the cell shape at a high resolution and allowing for large shape deformations. Multi-phase field models have been used to probe several questions pertaining to the collective motion of cells [26, 27, 28, 29, 30, 31, 32, 33, 34]. These models consider cells as active incompressible droplets and, unlike vertex models, T1 transitions emerge spontaneously as a result of shape deformations. In vertex models, extracellular spaces (gaps) also need explicit modelling with ad hoc assumptions [19], whereas in multi-phase field models they are emergent.
In this paper, we focus on characterising the energy profile preceding and succeeding T1 transitions. We show that this energy profile is statistically robust to changes in several model parameters. It is characterized by an asymmetric profile with a fast increase in energy before the T1 transition and a sudden drop after the T1 transition, followed by a slow relaxation. The relaxation profile provides insights into the flow properties of the tissue. Previously, the relaxation has been indirectly studied in tissues by examining the relaxation of an ellipsoid droplet immersed in a tissue [17, 19, 35]. The relaxation profile was attributed to yield stress due to limitations in the measurement timescales [35], and also associated with the fluidization of the tissue [19, 36]. We further consider the duration of T1 transitions and find that the average duration scales inversely with the maximum average energy attained during the T1 transition. Also, we show that T1 transitions may trigger the creation of other T1 transitions nearby and that the chaining of T1 transitions leads to large-scale deformation and fluid-like behaviour.
We introduce the multi-phase field model in Section 2 and discuss results on the local statistical properties of T1 transitions in Section 3. We further analyse the dependency of these statistical properties on various model parameters. The effect of cell deformability and activity is considered in detail. We study the impact of the chaining of T1 transitions on flow at larger scales. In Section 4 we relate these findings to the mechanical and rheological properties of the tissue and postulate that they can be used to characterize fluidization. Details on the numerical methods, the initialization and the characterization of T1 transitions are provided in Section 5.
## 2 Multi-phase field model
We represent a two-dimensional confluent cell tissue within a multi-phase field model following formulations [28, 32, 33, 34]. We consider a system of \(N\) cells of equal area occupying a square domain of size \([0,L]\times[0,L]\) and use periodic boundary conditions. Each cell is represented by a scalar phase field \(\phi_{i}(\mathbf{x},t)\) as an indicator function of the domain occupied by each cell labeled by \(i=1,2,\cdots,N\). Namely, the bulk phase values \(\phi_{i}\approx 1\) and \(\phi_{i}\approx-1\) indicate the interior and exterior of the cell, respectively. The cell boundary is defined by the localised transition region between the two bulk values. The time evolution of the \(i\)-th phase field follows a conservative dynamics which preserves the cell areas and is given by
\[\partial_{t}\phi_{i}+\mathbf{v}_{i}\cdot\nabla\phi_{i}=\Delta\frac{\delta \mathcal{F}}{\delta\phi_{i}}, \tag{1}\]
where \(\Delta\) is the two-dimensional Laplacian applied to the variational derivative of a free energy functional \(\mathcal{F}\) with respect to the phase field \(\phi_{i}\). The free energy \(\mathcal{F}=\mathcal{F}_{CH}+\mathcal{F}_{INT}\) contains the Cahn-Hilliard energy
\[\mathcal{F}_{CH}=\frac{1}{Ca}\sum_{i=1}^{N}\int_{\Omega}\left(\frac{\epsilon} {2}||\nabla\phi_{i}||^{2}+\frac{1}{4\epsilon}(\phi_{i}^{2}-1)^{2}\right)d \mathbf{x}, \tag{2}\]
and the interaction energy [28, 34]
\[\mathcal{F}_{INT}=\frac{1}{In}\sum_{i=1}^{N}\int_{\Omega}B(\phi_{i})\sum_{j \neq i}w(\phi_{j})d\mathbf{x}. \tag{3}\]
The capillary number \(Ca\) and interaction number \(In\) are tuning parameters for the cell deformability and the strength of mutual repulsion/attraction interactions, respectively. In equation 2, the Cahn-Hilliard energy has a local free energy density given by the double well potential with the minima corresponding to the two bulk values and a gradient energy. The parameter \(\epsilon\) controls the width of the diffuse interface. The Cahn-Hilliard energy ensures phase separation into two bulk regions which are separated by a thin, diffusive interface. This energy alone is minimised by cells with circular shapes. In equation 3, each cell's interior and interface (\(B(\phi_{i})=(\phi_{i}+1)/2\)) is coupled with every other cell through a local interaction potential,
\[w(\phi_{j})=1-(a+1)\left(\frac{\phi_{j}-1}{2}\right)^{2}+a\left(\frac{\phi_{j} -1}{2}\right)^{4},\]
where the parameter \(a=1\) models repulsion, while \(a>1\) models attraction and repulsion (see [34] for a detailed analysis of role of \(a\)).
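To make the free energy concrete, the following minimal sketch evaluates the total energy density of equations (2) and (3) on a discrete periodic grid. The array layout, the grid spacing `dx` and the use of `numpy` finite differences are our assumptions; parameter defaults follow Table 1:

```python
import numpy as np

def free_energy_density(phi, eps=0.15, Ca=0.2, In=0.1, a=1.5, dx=1.0):
    """Total free energy density for phase fields phi of shape (N, Ny, Nx).

    Combines the Cahn-Hilliard term of eq. (2) with the interaction term
    of eq. (3); a sketch, not the finite element evaluation of Section 5.
    """
    gy, gx = np.gradient(phi, dx, axis=(1, 2))
    f_ch = ((eps / 2) * (gx**2 + gy**2)
            + (phi**2 - 1)**2 / (4 * eps)) / Ca        # eq. (2), per cell
    B = (phi + 1) / 2                                  # interior indicator
    w = 1 - (a + 1) * ((phi - 1) / 2)**2 + a * ((phi - 1) / 2)**4
    f_int = B * (w.sum(axis=0) - w) / In               # eq. (3), j != i
    return (f_ch + f_int).sum(axis=0)                  # sum over all cells
```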
Cell activity is introduced through the advection velocity \(\mathbf{v}_{i}(\mathbf{x},t)\) in equation 1 and is given by
\[\mathbf{v}_{i}(\mathbf{x},t)=v_{0}B(\phi_{i})\mathbf{e}_{i}(t), \tag{4}\]
where \(v_{0}\) is a constant parameter that controls the magnitude of the activity, \(\mathbf{e}_{i}=[\cos\theta_{i}(t),\sin\theta_{i}(t)]\) where \(\theta_{i}\) is the orientation of the self-propulsion which evolves as
\[d\theta_{i}=\sqrt{2D_{r}}dW_{i}(t)+\alpha(\beta_{i}(t)-\theta_{i}(t))dt. \tag{5}\]
The first term on the right side of equation 5 is a rotational diffusion term with a Wiener process \(W_{i}\). The second term is a relaxation to the orientation of the cell's shape elongation. The cell elongation is identified by the principal eigenvector of the shape deformation tensor [29, 32]
\[\mathbf{S}_{i}=\begin{bmatrix}S_{i,0}&S_{i,1}\\ S_{i,1}&-S_{i,0}\end{bmatrix} \tag{6}\]
which is symmetric and traceless and has the two components
\[S_{i,0}=\frac{1}{8}\int_{\Omega}\left[\left(\frac{\partial\phi_{i}}{\partial y }\right)^{2}-\left(\frac{\partial\phi_{i}}{\partial x}\right)^{2}\right]d \mathbf{x}\quad\text{and}\quad S_{i,1}=-\frac{1}{4}\int_{\Omega}\left[\frac{ \partial\phi_{i}}{\partial x}\frac{\partial\phi_{i}}{\partial y}\right]d \mathbf{x}.\]
Its corresponding eigenvalues are \(\lambda_{i}^{\pm}=\pm\sqrt{S_{i,0}^{2}+S_{i,1}^{2}}\) and eigenvectors are \(\eta_{i}^{\pm}=(\frac{S_{i,0}+\lambda_{i}^{\pm}}{S_{i,1}},1)\). The vector \(\eta_{i}^{+}\) is parallel to the elongation axis of the cell and determines the preferred self-propulsion direction as
\[\beta_{i}(t)=\left\{\begin{array}{rl}\arg(\eta_{i}^{+}(t))&:&\mathbf{e}_{i}( t)\cdot\eta_{i}^{+}(t)>0\\ -\arg(\eta_{i}^{+}(t))&:&\mathbf{e}_{i}(t)\cdot\eta_{i}^{+}(t)<0\end{array}\right. \tag{7}\]
Therefore, the second term on the right hand side of equation (5) aligns \(\theta_{i}(t)\) with \(\beta_{i}(t)\). The parameter \(\alpha\) controls the time scale of this alignment of the self-propulsion direction with the elongation axis of the cell. There are different possibilities to define the advection velocity \(\mathbf{v}_{i}(\mathbf{x},t)\) (see Ref. [32] for an overview and comparison). The current form includes approaches of Ref. [29] and, as the elongation is a result of the interaction with neighbouring cells, it accounts for contact inhibition of locomotion [37, 38]. The model leads to properties appropriate to describe, e.g., Madin-Darby canine kidney (MDCK) cells [32, 39].
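For illustration, one update of the self-propulsion direction, combining the shape deformation tensor of equation (6) with an Euler-Maruyama step of equation (5), could be sketched as follows (the finite-difference discretisation, the grid spacing and the assumption \(S_{i,1}\neq 0\) are ours):

```python
import numpy as np

def update_orientation(theta, phi, dt=0.005, D_r=0.1, alpha=0.1, dx=1.0,
                       rng=np.random.default_rng()):
    """One Euler-Maruyama step of eq. (5) for a single cell.

    phi is the cell's 2D phase field (axis 0 is y, axis 1 is x).
    """
    gy, gx = np.gradient(phi, dx)
    S0 = 0.125 * np.sum(gy**2 - gx**2) * dx**2   # eq. (6) components
    S1 = -0.25 * np.sum(gx * gy) * dx**2         # assumed nonzero here
    lam = np.sqrt(S0**2 + S1**2)                 # positive eigenvalue
    eta = np.array([(S0 + lam) / S1, 1.0])       # elongation eigenvector
    e = np.array([np.cos(theta), np.sin(theta)])
    arg = np.arctan2(eta[1], eta[0])
    beta = arg if np.dot(e, eta) > 0 else -arg   # branch choice, eq. (7)
    dW = np.sqrt(dt) * rng.standard_normal()     # Wiener increment
    return theta + np.sqrt(2 * D_r) * dW + alpha * (beta - theta) * dt
```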
Figure 1: Successive time snapshots of tissue section undergoing a T1 transition, a finite-time neighbour exchange process between cells A, B, C and D. The transition starts when cells B and D lose contact and is completed when cells A and C make contact. During the T1 transition an extracellular space (gap) is formed between cells A, B, C and D. Also see Supplementary Movie 1
Figure 2: (a). Free energy density, in a region surrounding a T1 transition. (b) and (c) Coarse-grained energy density in a linear and log-scale, respectively. The white dot represents the epicenter of the T1 transition while the green dotted circle represents the coarse graining radius \(r_{avg}\), the estimated core of the T1 transition.
## 3 Results
### Energy profile of T1 transitions
Within our multi-phase field approach, T1 transitions are neighbour exchange processes with a finite duration. A prototypical time sequence of a T1 transition is illustrated in Figure 1. Four cells A, B, C and D are involved. Before the T1 transition, the cell junction shared by cells B and D shrinks. The T1 transition starts when the cells B and D break contact and move apart. This results in the formation of an extracellular space which we call 'gap'. Cells A and C move towards each other, close the gap, and form a new contact, concluding the T1 transition. After the T1 transition, this new junction between cells A and C expands. The junctions that shrink and expand are called T1 junctions. We refer to Section 5 for the procedure to detect T1 transitions and their durations. A T1 transition not only leads to topological rearrangements of the four neighbouring cells, it also involves deformation of the cells. While details, such as the specific shape of the cells and their deformation, the duration of the T1 transition and the relaxation process, differ between T1 transitions, we will demonstrate that robust statistical features of T1 transitions exist.
We define the epicenter of a T1 transition as the point with the minimum total distance from the centers of the cells involved in the neighbour exchange process midway through the T1 transition. We define the immediate region around the epicenter as the core of a T1 transition, which is essential because it is the region where T1 junctions shrink and expand, and where the gap appears and disappears.
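The epicenter defined in this way is the geometric median of the centers of the involved cells. A possible sketch (the optimiser is our choice, not necessarily the one used in the analysis):

```python
import numpy as np
from scipy.optimize import minimize

def t1_epicenter(centers):
    """Point minimising the total distance to the given cell centers."""
    centers = np.asarray(centers, dtype=float)
    total_dist = lambda p: np.linalg.norm(centers - p, axis=1).sum()
    return minimize(total_dist, centers.mean(axis=0),
                    method="Nelder-Mead").x
```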
Figure 2a shows the total free energy density midway through a T1 transition. The epicentre is shown by the white dot and
Figure 3: (a) Evolution of energy (averaged for 158 T1 transitions) at the epicenter of the T1 transitions. Negative time corresponds to time before a T1 transition and positive time corresponds to time after a T1 transition. The shaded region denotes a width of 1 standard deviation. The gray dashed line is the average energy across the whole domain. (b) Average energy profile during a T1 transition as function of percentage of T1 duration. The standard deviation is also indicated. (c), (d) and (e) Montages of deformed cells involved in a T1 transition. Each montage is made up of 5 images, that capture the cells at equidistant times, stacked over each other. The darkest colored overlay represents the latest time. (c) Cell shapes before the start of the T1 transition, (d) during the T1 transition, and (e) after the end of the T1 transition. Also see Supplementary Movie 2 for corresponding simulation
the estimated core is highlighted by the green circle. It has a radius \(r_{avg}=0.02L\), where \(L\) is the side length of the computational domain. We compute a coarse-grained energy whose value at any point in the domain is the average of the energy density in a circular region centered at that point with radius \(r_{avg}\). Figure 2b shows this coarse grained energy field \(f_{r_{avg}}\), which we will call 'energy' field in the following. The signature of triple-junctions and T1 transitions already becomes apparent due to their higher energy. The difference between the two is enhanced by using a log scale, see Figure 2c. Considering this energy field at the epicenter over time provides a spatio-temporal description of T1 transitions. For a discussion of the sensitivity of this procedure to \(r_{avg}\) we refer to Section 5.
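This coarse graining amounts to a convolution of the free energy density with a normalised disk kernel of radius \(r_{avg}\); a minimal sketch (periodic boundaries are only handled approximately here):

```python
import numpy as np
from scipy.signal import fftconvolve

def coarse_grain(f, r_avg, dx=1.0):
    """Average the energy density field f over a disk of radius r_avg."""
    r = int(round(r_avg / dx))
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disk = (x**2 + y**2 <= r**2).astype(float)
    disk /= disk.sum()                       # normalise -> local mean
    return fftconvolve(f, disk, mode="same")
```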
Figure 3a shows the time evolution of this energy averaged over 158 T1 transitions. The time is negative before a T1 transition and positive after a T1 transition, and is denoted by \(t_{T1}\). The energy during the T1 transitions is excluded, which leads to a discontinuity at \(t_{T1}=0\). The two values at \(t_{T1}=0\) correspond to the averaged energies at the start and the end of the T1 transitions. As the duration of T1 transitions differs, an averaged energy as a function of time during the T1 transition does not provide any meaningful information. Details on the energy during the T1 transition are shown in Figure 3b using a normalized time. The energy profile in Figures 3a and 3b has a peak at the T1 transition. The profile is asymmetric with a strong increase in energy before the T1 transition and a sudden decrease after the T1 transition, followed by a slow relaxation. The asymmetry can be quantified by considering the level at 75% of the maximum value, which is marked in Figure 3a. Figures 3c-3e illustrate the evolution for one T1 transition, the one depicted in Figure 1. These figures contain overlays of several snapshots as per the time marked in the figures. The darkest of these snapshots pertains to the latest time. The yellow region marks the estimated core of the T1 transition. The asymmetry before and after the T1 transition, Figure 3c, 3e, respectively, is clearly visible. The T1 junctions are longer at \(t_{T1}=5\) compared with \(t_{T1}=-5\). During the T1 transition, Figure 3d, the asymmetry is less pronounced. Most of the deformations are concentrated in the core. These deformations arise as a result of the formation of the gap, and subsequently its disappearance. The shrinking and formation of T1 junctions and the deformations within the core are a signature of the T1 transition. However, they also influence the deformation of the four cells outside of the core, and their neighbours, which can be perceived by the overlaid cell shapes. Interestingly, in the depicted T1 transition, the deformations of each of the four cells seem to be persistent before, during and after the T1 transition (see the arrows indicating the direction of deformations). We will elaborate on this and other coarse grained effects in Section 3.4. The energy profile indicates an accumulation of energy to reach the energy barrier at the T1 transition. This is due to probing several possibilities in local movement and cell shape deformation, which are coupled by the definition of activity, taking into account cell elongation and contact inhibition of locomotion. After the energy barrier has been overcome, the fast relaxation of the energy can be associated with a steep gradient in the energy landscape in one direction.
The asymmetric shape of the energy profile is robust to changes in most model parameters, as demonstrated in Figure 4, where \(\alpha\), \(Ca\), \(a\), \(D\) and \(v_{0}\) are varied and the energy profile associated with passive sheared foams is included for comparison. Figure 4b shows the energy rescaled by the maximum energy, as changes in \(Ca\) directly affect the free energy, see equation (2). Within the range of parameters explored, the changes in the values of the alignment parameter \(\alpha\), the interaction coefficient \(a\), and the diffusivity \(D\) have minimal effects on the energy profile. We see that the profile is robust even in the absence of noise (\(D=0\), Figure 4d). On the other hand, the profile deviates from Figure 3a for low values of \(v_{0}\) and \(Ca\). Figure 4e shows that the cell activity \(v_{0}\) affects the rate at which the cells approach a T1 transition, which is indicated by the slower accumulation of energy for low \(v_{0}\). However, a change in \(v_{0}\) has a minor effect on the energy relaxation immediately after a T1 transition. The slow relaxation afterwards is largest for large values of \(v_{0}\). This can be associated with the definition of activity, which is related to cell elongation; at least on average, cells elongate in the direction of movement after the T1 transition. The characteristic profile of the accumulation of energy before the T1 transition and the fast relaxation of energy after the T1 transition is also present for low values of \(Ca\), see Figure 4b. However, as Figure 4b considers a rescaled energy, the actual rates depend on \(Ca\). The slow relaxation after the sudden decrease only slightly depends on \(Ca\). We would like to point out that the results for low values of \(v_{0}\) and \(Ca\) should be considered with care, as the number of T1 transitions considered in these cases is much lower. While the system is still in the fluid phase, the extreme values \(v_{0}=0.1\) and \(Ca=0.05\) already approach the transition to the solid phase.
In passive foams, T1 transitions can be induced by applying shear. This is considered by an advection velocity field \(v_{i}(\mathbf{x},t)=0.5|x_{1}-L/2|\), and the resulting energy profile is compared with the profile from Figure 3a, see Figure 4f. The profiles differ before the T1 transition and within the slow relaxation, but are similar in the sudden drop of energy right after the T1 transition. The latter reiterates that the energy relaxation right after a T1 transition is independent of activity. The differences in the accumulation of the energy can be associated with the persistent orientation of the advection velocity due to shear, which results in collective deformation and a more deterministic approach to the T1 transition. The termination of the decay in the passive case also results from the restricted possibilities of relaxation due to the applied shear.
### Duration and other properties of T1 transitions
As mentioned earlier, the duration of T1 transitions strongly depends on the specific cell arrangements. We now discuss the statistical properties of the duration. Figure 5a shows the probability distributions of the duration of T1 transitions. The distributions peak at smaller values and have a long tail for larger values. The profiles correspond to repulsive and adhesive (\(a>1\)) and purely repulsive (\(a=1\)) interactions, and are fitted by Gamma distributions. The average duration of T1 transitions for repulsive interactions (3.418, measured for 539 T1 transitions across 3 simulations) is smaller than that for repulsive and adhesive interactions (3.826, for 631 T1 transitions across 4 simulations). Keeping other parameters fixed, the average number of T1 transitions was 157.75 in the repulsive and adhesive case and 179.66 in the purely repulsive case. Therefore, in the repulsive case, cells undergo neighbour exchanges faster and more often. Figure 5b shows the duration of T1
Figure 4: Evolution of energy for different parameter values. The pink and cyan shaded regions denote time before and after the T1 transitions, respectively. The number of T1 transitions used to obtain these results is indicated. (a) The aligning parameter \(\alpha\) is varied. (b) The parameter to control cell deformability, \(Ca\), is varied. As \(Ca\) is a parameter that influences the overall total energy, for better comparison the energy is rescaled by division with the maximum energy. (c) Adhesion and repulsion corresponds to \(a=1.5\) and repulsion corresponds to \(a=1\). (d) The diffusivity \(D\) is varied. (e) The magnitude of the activity \(v_{0}\) is varied. (f) The passive shear corresponds to the advection field \(v_{i}(\mathbf{x},t)=0.5|x_{1}-L/2|\) while the active case corresponds to the parameters in Table 1.
transitions as a function of the maximum energy reached during a T1 transition. While the data is scattered, it qualitatively shows that high-energy T1 transitions are faster. This qualitative result holds for both cases and can be explained by a larger accumulation of energy in the core, which increases the spatial energy gradients and in turn speeds up the relaxation of the energy, leading to a shorter duration.
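The Gamma fits of Figure 5a can be reproduced along the following lines (the data file name is hypothetical and fixing the location parameter to zero is our assumption):

```python
import numpy as np
from scipy import stats

durations = np.loadtxt("t1_durations.txt")       # hypothetical data file
shape, loc, scale = stats.gamma.fit(durations, floc=0.0)
print(f"mean duration: {durations.mean():.3f}, "
      f"Gamma shape={shape:.2f}, scale={scale:.2f}")
```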
Figure 5c shows the averaged shape index (\(\mathrm{perimeter}/\sqrt{\mathrm{area}}\)) of the four cells involved in a T1 transition as a function of time relative to a T1 transition. The asymmetry found for the energy profile and the discontinuity at \(t_{T1}=0\) are also present for this quantity. The cells deform and elongate as they approach a T1 transition and relax afterwards, which increases and decreases their shape index, respectively. The faster relaxation leads to the asymmetry in the evolution of the shape index. The asymmetry around a T1 transition is also seen in the average velocity of the center of mass of the cells involved in a T1 transition, as shown in Figure 5d. The velocity is almost constant before the T1 transition, peaks at the T1 transition and slows down afterwards until it returns to the average value it had before the T1 transition. The peak in the average velocity of the center of mass is due to the large deformations of the portions of cells within the core and their fast relaxation after the T1 transition. Both quantities, the shape index and the cell velocity of the four cells involved in a T1 transition, are also experimentally accessible and can be related to the energy considered above.
### Effect of cell deformability, activity and gaps on T1 transitions
The asymmetric energy profile in Figure 3a is robust to tuning of most of the model parameters. Significant variations only occur for low values of \(Ca\) and \(v_{0}\), see Figures 4b and 4e. We now analyse the effect of cell deformability and activity on T1 transitions in more detail. This requires a detailed analysis of the influence of gaps. The gap fraction is related to the confluency as \(\mathrm{confluency}=100\,(1-\mathrm{gap\ fraction})\). It is essentially a fixed quantity set by the initial data. We fix all parameters as per Table 1 and compare two different initial cell sizes, denoted by 'low gap' with gap fraction 0.00048 and 'high gap' with gap fraction 0.00212. Both can be considered confluent. The number of T1 transitions within the considered time frame is not influenced by this variation. The total numbers of T1 transitions are 162 and 158 for the low and high gap cases, respectively. However, the
Figure 5: (a) Probability distributions of the duration of T1 transitions for only repulsion interactions (magenta dots) and for both repulsion and adhesion (cyan dots). Both data sets are fitted by Gamma distributions highlighting the exponential tails. (b) Scatter plot of duration of T1 transition as function of the maximum energy reached during a T1 transition. (c) Evolution of average shape index and (d) Evolution of the average velocity of center of mass of the cells involved in the T1 transitions as function of time relative to a T1 transition. The shaded regions mark the standard deviations of both quantities.
average duration of T1 transitions is reduced by reducing the gap fraction. The values are 2.559 and 3.794 for the low and high gap cases, respectively. We measure the gap fraction as the fraction of the domain where \(\sum_{i}B(\phi_{i})\) is less than a fixed threshold, which is set to 0.2. This essentially excludes possible partial overlap of the diffuse interface region of cells and only accounts for gaps at tri-junctions and rosettes. This makes the measured gap fraction depend on deformability and activity. For the considered cases, low \(Ca\) leads to rounder cells with stronger overlap of the diffuse interfaces of the cells, which are in contact. This leads
Figure 6: Dependency of various properties on deformability \(Ca\) ((a) - (f)) and activity \(v_{0}\) ((g) - (l)). _Total T1_ considers the total number of T1 transitions within the considered time frame, _T1 duration_ is the averaged time from start to end of all T1 transitions, _Gap fraction_ is the extracellular space, considered as \(\sum_{i}B(\phi_{i})\) below a fixed threshold, again averaged over time, _Shape index_ considers the averaged shape index of the four cells involved in the T1 transitions, _Time between T1_ is the average time a cell spends between successive T1 transitions, _Max energy_ is the maximum energy reached at a T1 transition and \(v_{avg}\) is the average velocity of the center of mass of all cells.
to an increase in the measured gap fraction, see Figure 6c. A similar dependency, but smaller in magnitude, is found for activity. Larger \(v_{0}\) leads to stronger interactions between cells and thus more overlap of the diffuse interface region of cells in contact, which again leads to an increase in the measured gap fraction, see Figure 6i. The gap fraction in both figures is the average quantity over the considered time frame. Both results and the dependencies discussed below are considered for the 'high gap' setting.
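The gap-fraction measure described above reduces to a simple thresholding of the summed interior indicator; a sketch:

```python
import numpy as np

def gap_fraction(phi, threshold=0.2):
    """Fraction of the domain where sum_i B(phi_i) < threshold.

    phi has shape (N, Ny, Nx); B(phi) = (phi + 1) / 2.
    """
    B_total = ((phi + 1) / 2).sum(axis=0)
    return float(np.mean(B_total < threshold))
```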
As shown in Figure 6a, the number of T1 transitions increases with increasing cell deformability parameter \(Ca\). Cells that are more deformable can more easily acquire the shape deformations associated with T1 transitions. When \(Ca\) is low, these deformations are energetically more expensive, resulting in fewer T1 transitions. The duration of T1 transitions also depends on \(Ca\), as shown in Figure 6b. T1 transitions are shorter when cells are more deformable. We suspect that this might be due to the presence of smaller gaps at T1 transitions, as this requires less shape deformation. Figure 6d shows the average cell shape index of the four cells involved in a T1 transition as a function of cell deformability \(Ca\). The shape index increases as deformability increases. The shape index for \(Ca=0.05\) is less than that of a regular pentagon. The shape index of a regular pentagon (3.813) has been identified as the critical shape index for the jamming transition in classical vertex models [40] without gaps. It has been argued that gaps influence the mechanical properties and the solid-liquid transition [17], which might explain this discrepancy, as our system is still within the fluid phase. Further details, which are related to the previous dependencies, are shown in Figures 6e and 6f. Figure 6e shows the average time a cell spends between successive T1 transitions as a function of \(Ca\). This quantity is large for low \(Ca\) but decreases and plateaus to low values upon increasing \(Ca\). Figure 6f shows the maximum energy reached during a T1 transition against \(Ca\). We see from the dotted curve that the maximum energy is proportional to \(1/Ca\). Recall that \(1/Ca\) scales the Cahn-Hilliard energy as per equation (2). This means that \(f_{r_{avg}}\) is primarily affected by the Cahn-Hilliard energy, which explains the correspondence of our results with the length of T1 junctions discussed earlier and considered in [17, 18].
The dependency on \(v_{0}\) shows qualitatively similar behaviour for the number of T1 transitions, the duration of T1 transitions, the shape index of the cells involved in T1 transitions and the time a cell spends between successive T1 transitions, see Figures 6g, 6h, 6j and 6k, respectively. The increase in T1 transitions and the decrease in the time between T1 transitions with activity are properties of active systems, which are driven out of equilibrium. T1 transitions are topological defects and thus an indication of out-of-equilibrium behaviour. The decrease in duration with increasing activity can again be associated with the decrease in the measured gap fraction, see Figure 6i, and the increasing shape index with activity is a direct consequence of the form of active forcing considered. Figure 6l shows the average velocity of the center of mass of all cells as a function of \(v_{0}\). As expected, activity is primarily converted into motion, with an almost linear dependency.
### Chaining of T1 transitions
So far, we have analysed robust statistical properties of T1 transitions within their cores. However, we have also seen that these local features influence the position and shape of the four cells involved in a T1 transition, and their neighbours. This can induce new T1 transitions and lead to the formation of chains of T1 transitions as illustrated in Figure 7. Each of these images consists of 10 tissue states captured at equally-spaced time instants and overlaid on top of each other. The cell shapes outlined in the darkest colors correspond to the latest time. The yellow circles mark the cores of the T1 transitions at those time instants. The chaining of T1 transitions is a result of the assumptions on constant cell area and a confluent tissue. Any cell deformation associated with a T1 transition induces deformation of the neighbouring cells and thereby increases the possibility of new T1 transitions. This is further enhanced by activity and the considered propulsion mechanism which favours the direction of elongation.
This chaining of T1 transitions is also observed experimentally in sheared foams [41] and in our simulations of passive foams which are sheared with a constant shear velocity profile. For \(v_{0}=0\), typically one or two T1 transitions occur due to the initial non-equilibrium configuration of the tissue. As cells relax toward an equilibrium state, their motility is reduced, which prevents any further T1 transitions. The situation for small \(v_{0}\) is similar. The tissue becomes jammed by cells being caged amongst their neighbours and no T1 transitions occur [32]. Furthermore, when cell deformability (\(Ca\)) is low, the energetic cost for the cell deformations that are necessary to undergo T1 transitions is high, which prevents or at least reduces T1 transitions and the tissue also becomes jammed [32]. This corresponds to the low number of T1 transitions in Figures 4b and 4e for low \(Ca\) and low \(v_{0}\), respectively.
However, in the considered case in Figure 7 we are far away from jamming and the chaining of T1 transitions leads to cell deformation propagating to larger scales. This is highlighted in Figure 8a, which shows the evolution of the cell tissue in the whole time window considered in Figure 7 together with the trajectory of the center of mass of the colored cells, which highlights the movement on larger spatial scales. The chaining of T1 transitions is also a source of large-scale flows, as evidenced in Figure 8b. We consider the velocities of the centers of mass of all cells, average this quantity with the neighboring cells and construct a continuous velocity field by interpolating in space. The velocity field is shown together with the cell boundaries at \(t=52\). The mean direction corresponds to the direction of the black path shown in Figure 8a. However, as the variations in magnitude and direction of the flow field in Figure 8b indicate, T1 transitions can also induce fluctuations and could play an important role in sustaining chaotic flows (active turbulence) in cell tissues [42, 43, 44].
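The construction of the continuous velocity field could be sketched as below (the interpolation method is our choice, and the neighbour averaging is assumed to have been applied to `velocities` beforehand):

```python
import numpy as np
from scipy.interpolate import griddata

def tissue_velocity_field(centers, velocities, grid_x, grid_y):
    """Interpolate per-cell center-of-mass velocities onto a grid.

    centers: (n, 2) positions; velocities: (n, 2) velocity vectors.
    """
    vx = griddata(centers, velocities[:, 0], (grid_x, grid_y), method="cubic")
    vy = griddata(centers, velocities[:, 1], (grid_x, grid_y), method="cubic")
    return vx, vy
```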
Figure 8: (a) Montage of tissue snapshots from time \(t=25\) to \(t=79\) (see figure 7). The black path is the trajectory of the center of mass of the 11 coloured cells. (b) LIC visualization of streamlines, magnitude (color) and direction (black arrows) of the flow velocity. The velocity and the cell boundaries correspond to time \(t=52\).
Figure 7: Chaining of T1 transitions. Each panel is a montage of 10 snapshots of tissue configurations taken successively at constant times intervals. Latest time is represented by the cell shapes marked in the darkest color shades. The cores of the T1 transitions are highlighted in yellow. Also see Supplementary Movie 3.
## 4 Discussion
Large-scale tissue deformation requires cellular rearrangements. The simplest rearrangement in confluent cell tissue is a T1 transition. We have analysed these neighbour exchanges among cells in detail using a multi-phase field model and identified a characteristic asymmetric energy profile, see Figure 3. The energy profile has a peak at the T1 transition. The profile is asymmetric with a strong increase in energy before the T1 transition and a sudden decrease after the T1 transition, which is followed by a slow relaxation. Detailed studies on the dependency of this profile on model parameters show robustness to variations in most parameters. They also allowed us to associate the strong energy increase before the T1 transition with the strength of the activity. This region is characterized by an accumulation of energy to reach the energy barrier at the T1 transition. This is achieved by probing several possibilities of direction of movement and shape deformation. This process is enhanced by activity, which is quantified by Figure 4e. In contrast to this, the sudden relaxation after the T1 transition can clearly be associated with energy relaxation. It is almost independent of activity, see Figure 4e, and cell deformability, see Figure 4b, and is also present in sheared foams, see Figure 4f. We would like to remark that while this qualitative behaviour is independent of deformability, the actual slope and duration of this regime do depend on it, as the energy is rescaled in Figure 4b. The sudden decrease is associated with a steep gradient in the energy landscape in one direction set by the deformation of the cells in the core of the T1 transition. The third characteristic region, the slow relaxation, depends on activity and cell deformability. This relaxation profile provides insight into the mechanical properties of the tissue. Similar energy profiles have been obtained by actuation and relaxation of magnetic microdroplets which are injected into the tissue [17, 19, 35]. In these experiments a slow relaxation is associated with the fluidization of the tissue [19, 35], while stagnation of the relaxation indicates more solid-like behaviour [17] and is associated with irreversible (plastic) tissue rearrangements. We postulate that these mechanical characterizations can also be obtained from the energy decay of the T1 transitions.
In the considered confluent tissue, the type of interaction between the cells, whether repulsive or repulsive and attractive, seems to play a minor role in the characteristic energy profile of a T1 transition, see Figure 4c. However, the degree of confluency is known to influence the solid-fluid phase transition [35]. Increasing the extracellular space enhances fluidization. While we only consider the fluid phase, we observe an increased duration of T1 transitions for larger extracellular space. A finite duration of T1 transitions in cell tissues has been associated with molecular processes and is considered in an ad hoc manner in vertex models [22]. Within the multi-phase field model, a finite duration is a result of the mechanical properties of the cells and their interactions. An increased duration of T1 transitions is observed for low deformability and low activity, see Figures 6b and 6h, respectively. Both indicate more solid-like behaviour, which is consistent with [22], where an increased duration of T1 transitions leads to a decrease in the overall number of T1 transitions and a possible stiffening of the global tissue mechanics. However, these results do not take extracellular space into account.
Even if characterized locally, due to the confluent cell tissue, large enough deformations induced by T1 transitions lead to permanent cell deformations in the neighbourhood, which can trigger other T1 transitions, leading to a chaining effect. This behaviour is associated with the foam-like architecture and consistent with previously reported nonlinear tissue mechanics [35]. It is this chaining of T1 transitions which allows for large-scale tissue deformations and flow patterns which can be associated with sustaining chaotic flows, see Figure 8b.
We believe these results also hold in more general situations, e.g. for varying cell sizes and varying mechanical cell properties.
## 5 Numerical Methods
### Model Parameters
Unless otherwise specified, we use the model parameters as per Table 1.
### Finite element simulations
The simulations are run for a time interval \([0,T]\) discretised into \(N_{t}\) units with a uniform timestep size \(\tau\), i.e. \(T=N_{t}\tau\). We employ a semi-implicit discretization in time. Discretization in space follows the finite element method. We adaptively refine the diffuse interface and employ a parallelization approach which scales with the number of cells. For details we refer to [28, 32, 33, 34, 45, 46]. The algorithm is implemented in the open-source library AMDiS [47, 48].
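For illustration only, a single explicit Euler step of equation (1) on a periodic grid can be sketched as below. Note that this is not the semi-implicit finite element scheme actually used; an explicit scheme would require much smaller time steps:

```python
import numpy as np

def evolve_step(phi, v, mu_of, dt=0.005, dx=1.0):
    """Explicit Euler step of eq. (1): d(phi)/dt = Lap(mu) - v . grad(phi).

    v = (vx, vy) are velocity fields on the grid and mu_of(phi) returns
    the chemical potential dF/dphi (both supplied by the caller).
    """
    def lap(f):                              # periodic 5-point Laplacian
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
    gy, gx = np.gradient(phi, dx)            # axis 0 is y, axis 1 is x
    advection = v[0] * gx + v[1] * gy
    return phi + dt * (lap(mu_of(phi)) - advection)
```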
| \(\tau\) | \(\tau_{save}\) | \(T\) | \(L\) | \(\epsilon\) | \(v_{0}\) | \(a\) | \(Ca\) | \(In\) | \(D_{r}\) | \(\alpha\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.005 | 0.5 | 150 | 100 | 0.15 | 0.5 | 1.5 | 0.2 | 0.1 | 0.1 | 0.1 |
Table 1: Default values of the model parameters.
### Detecting T1 transitions
The T1 transitions are detected by tracking the neighbour relations of all cells. If two cells \(A\) and \(B\) are in contact, their neighbour relation is denoted by \((A,B)\) or \((B,A)\), both of which are equivalent. Suppose there are four cells as in Figure 1. The sets of neighbour relations between these four cells before, during and after a T1 transition are \(\{(A,B),(B,C),(C,D),(D,A),(B,D)\}\), \(\{(A,B),(B,C),(C,D),(D,A)\}\) and \(\{(A,B),(B,C),(C,D),(D,A),(A,C)\}\), respectively. Before and after a T1 transition, there are 5 distinct neighbour relations between the four cells. The sets of relations before and after a T1 transition have four elements in common; these common elements make up the set of relations during a T1 transition. The duration of a T1 transition is the time difference between the moment when the number of neighbour relations between the four cells changes from 5 to 4 and the moment when it changes back to 5.
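A sketch of this detection from a time series of contact relations is given below; a fuller implementation would additionally require the gained pair to connect two common neighbours of the lost pair:

```python
def detect_t1(contacts):
    """Detect T1 transitions from contact data.

    contacts[t] is the set of frozenset cell pairs in contact at frame t.
    A T1 starts when a pair (B, D) disappears and ends when a pair between
    two other cells appears; duration is (end - start) save intervals.
    """
    events = []
    for t in range(1, len(contacts)):
        for lost in contacts[t - 1] - contacts[t]:
            for s in range(t, len(contacts)):
                gained = [p for p in contacts[s] - contacts[s - 1]
                          if not p & lost]    # pair disjoint from lost one
                if gained:
                    events.append({"start": t, "end": s,
                                   "lost": lost, "gained": gained[0]})
                    break
    return events
```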
### Sensitivity of \(f_{r_{avg}}\) to \(r_{avg}\)
The coarse graining region of a point \(\mathbf{p}\) is the region containing all points \(\mathbf{x}\) such that \(|\mathbf{p}-\mathbf{x}|<r_{avg}\). As the free energy is high at the cell edges, points whose coarse graining region includes edges have a high \(f_{r_{avg}}\). Moreover, points with triple junctions (where 3 edges meet) within their coarse graining region have an even higher \(f_{r_{avg}}\) due to the longer total length of cell edges. At a given time, \(f_{r_{avg}}\) usually peaks near the T1 epicenter, because the region around it contains two triple junctions along with a gap, as seen in the snapshots of Figure 1. Points without any cell edges within their coarse graining region have zero \(f_{r_{avg}}\). We have found that increasing \(r_{avg}\) loses information about the T1 transition in the value of \(f_{r_{avg}}\) at the epicenter: a larger coarse graining region entails a larger contribution from the bulk of the cell interiors and reduces \(f_{r_{avg}}\) at the epicenter, such that it can no longer be uniquely discerned as a signature of a T1 transition. On the other hand, reducing \(r_{avg}\) means that we might not encompass the information of the two triple junctions and the gap formed during the T1 transition; it also increases the deviations in the statistics that we describe. Moreover, if the energy along an edge is uniform, then the energy field \(f_{r_{avg}}\) at a point gives an approximate measure of the length of edges within the coarse graining region around that point.
## Data availability
All data are available from the corresponding author upon reasonable request. The AMDiS implementation and additional codes for pre- and postprocessing are also available from the corresponding author upon reasonable request.
## Supplementary Information
Supplementary Movie 1
Supplementary Movie 2
Supplementary Movie 3
|
2302.08759 | Value Engineering for Autonomous Agents | Machine Ethics (ME) is concerned with the design of Artificial Moral Agents
(AMAs), i.e. autonomous agents capable of reasoning and behaving according to
moral values. Previous approaches have treated values as labels associated with
some actions or states of the world, rather than as integral components of
agent reasoning. It is also common to disregard that a value-guided agent
operates alongside other value-guided agents in an environment governed by
norms, thus omitting the social dimension of AMAs. In this blue sky paper, we
propose a new AMA paradigm grounded in moral and social psychology, where
values are instilled into agents as context-dependent goals. These goals
intricately connect values at individual levels to norms at a collective level
by evaluating the outcomes most incentivized by the norms in place. We argue
that this type of normative reasoning, where agents are endowed with an
understanding of norms' moral implications, leads to value-awareness in
autonomous agents. Additionally, this capability paves the way for agents to
align the norms enforced in their societies with respect to the human values
instilled in them, by complementing the value-based reasoning on norms with
agreement mechanisms to help agents collectively agree on the best set of norms
that suit their human values. Overall, our agent model goes beyond the
treatment of values as inert labels by connecting them to normative reasoning
and to the social functionalities needed to integrate value-aware agents into
our modern hybrid human-computer societies. | Nieves Montes, Nardine Osman, Carles Sierra, Marija Slavkovik | 2023-02-17T08:52:15Z | http://arxiv.org/abs/2302.08759v1 | # Value Engineering for Autonomous Agents
###### Abstract
Machine Ethics (ME) is concerned with the design of Artificial Moral Agents (AMAs), i.e. autonomous agents capable of reasoning and behaving according to moral values. Previous approaches have treated values as labels associated with some actions or states of the world, rather than as integral components of agent reasoning. It is also common to disregard that a value-guided agent operates alongside other value-guided agents in an environment governed by norms, thus omitting the social dimension of AMAs. In this blue sky paper, we propose a new AMA paradigm grounded in moral and social psychology, where values are instilled into agents as context-dependent goals. These goals intricately connect values at individual levels to _norms_ at a collective level by evaluating the outcomes most incentivized by the norms in place. We argue that this type of normative reasoning, where agents are endowed with an understanding of norms' moral implications, leads to value-awareness in autonomous agents. Additionally, this capability paves the way for agents to align the norms enforced in their societies with respect to the human values instilled in them, by complementing the value-based reasoning on norms with agreement mechanisms to help agents collectively agree on the best set of norms that suit their human values. Overall, our agent model goes beyond the treatment of values as inert labels by connecting them to normative reasoning and to the social functionalities needed to integrate value-aware agents into our modern hybrid human-computer societies.
**Keywords** - values, value awareness, normative MAS, machine ethics
Introduction
Our society is a sociotechnical system that includes human agents, humans that augment their abilities with computational devices, and artificial agents capable of increasing degrees of unsupervised action. Multi-agent systems as a discipline is concerned with identifying and implementing coordination-promoting interactions among agents.
Agent coordination does not only require rationality and autonomy; it also requires an understanding of moral values - "the greater the freedom of a machine, the more it will need moral standards" [22]. Floridi and Sanders' [12] definition of moral agent clearly includes artificial agents: "An action is said to be morally qualifiable if and only if it can cause moral good or evil. An agent is said to be a moral agent if and only if it is capable of morally qualifiable action". How can we make artificial moral agents?
The area that investigates how to build artificial moral agents that are capable of moral reasoning is machine ethics (ME). Machine ethics "is concerned with the behaviour of machines towards human users and other machines" [2]. ME's goal is to enable automation and augmentation algorithms to uphold the "right" values. Machine ethics is, however, not directly concerned with how to integrate moral reasoning with an agent's interactions. What values are and how they are sourced is also very much an open question. To attain a socio-technical society in which the collective and individual values are aligned, we need to be able to construct artificial moral agents that "understand" values.
We here propose a new artificial moral agent paradigm that allows for human-selected values to be embedded in an artificial agent reasoning process. We follow moral psychology rather than moral philosophy when reaching a formal specification for "value". We motivate our reasoning and approach in Section 2, and present the approach in Section 3. There, we establish how agents can be endowed with the meaning of values and what view on norms we take. We also propose a mechanism for aligning values with norms that govern the behaviour in a shared environment. We conclude in Section 4 by discussing the novelty of our approach with respect to previous ones, and outlining directions for future work.
## 2 Background on values and norms
### What are values
In the broader domain of AI Ethics, the question of which values should be imposed on artificial agents (their use and development) dominates the landscape [14, 15]. However, the term "value" is often used without being defined and without really being linked to ethics.
In moral philosophy, ethical values are typically defined in relation to ethical principles. Ethical principles are part of a normative theory that justifies or defends moral rules and/or moral decisions and reflects objective positions of
right and wrong or what "ought" to be [11]. The IEEE 7000-2021 IEEE Standard Model Process for Addressing Ethical Concerns during System Design1 defines ethical values as "value in the context of human culture that supports a judgment on what is right or wrong". Values, broadly understood in this way, are difficult to operationalise for computational agents. As a result, researchers take many different approaches, which are hard to compare or evaluate. Moreover, such a definition does not facilitate agent interaction either. Namely, the developed paradigms and prototypes of artificial moral agents are such that the agent does not include the moral values of others when reasoning about its own actions and states.
Footnote 1: [https://standards.ieee.org/ieee/7000/6781/](https://standards.ieee.org/ieee/7000/6781/)
In reinforcement learning, which is increasingly explored as an approach for attaining morally behaving agents [8, 20, 5], we see a vast variety of formalisations (and quantifications) for values. In other machine ethics approaches, such as [17, 3, 9], values are used as labels for actions, options or other elements of the reasoning process.
To be able to build value "aware" agents, we need to formalise values as an element of the agent reasoning process. The chosen formalisation should allow us to qualify the level of agent value "awareness" and to support agent interaction.
### What are norms (and multi-agent systems)
Agents who interact in a shared environment will be subject to the norms regulating it. Norms are social mechanisms that steer human behaviour towards certain desired outcomes. Norms are responsible for structuring the interactions among agents and their overall organisation. Historical examples show that communities of humans have been successful and sustainable when their members follow a set of self-crafted, well-defined norms [21]. The reasons why a human follows a particular norm can come from a variety of sources: it can be because it is beneficial for them or perhaps because it is understood as a moral or professional obligation.
The formal languages used to represent norms are very rich and varied: constraint-based [13], time-based [1], or event-based [16], just to name a few. In many cases, especially in human societies, they come in the form of natural language expressions. However, most representation languages, formal or not, are consequential in the sense that norms act via punishments or rewards. Certain human actions make humans incur a cost, while others provide a benefit.
Norms are a system-level construct that applies to the society of agents as a whole. This feature of norms contrasts with values, which reside in every individual agent.
## 3 Building Value-Aware Agents
This section presents our proposal for an artificial value-aware agent. We call an agent value-aware when it has an explicit representation of the operational
meaning of values, one that allows it to interpret the state of the MAS according to those values. The main claim of this paper is as follows. If agents are value-aware, then they will be capable of reasoning about norms from the perspective of the value-alignment of those norms. In other words, they can analyse a set of norms in terms of the outcomes it promotes (or the MAS behaviour it brings about) and the degree of alignment of those outcomes with the desired values. This opens the door to the creation and selection of norms from a moral perspective, and to influencing the value-alignment of a MAS from within the system itself.
This section is divided into three parts. The first presents our view on values, and how representing them through concrete goals opens the door for developing value aware agents. The second presents our view on norms. Finally, the third links values to norms, which provides the basis for reasoning about norms and MAS behaviour from the perspective of the values they are aligned with.
### What are values for value-aware agents
As a starting point to build our artificial moral agent, we take the Schwartz's Theory of Basic Human Values (STBHV) [25, 26]. This is a well-established theory in moral and social psychology, which provides a definition for values, outlines the functions they serve in social life and hints at how value structures are organized. In addition, many of the features of values it outlines are compatible with other frameworks from the social sciences and humanities [24].
Taking inspiration from the STBHV, we start by establishing what values are and how they are made operational in humans. Schwartz acknowledges that values are "concepts or beliefs" that "transcend specific situations" [25, p. 4]. Despite their abstract nature, values are closely related to an individual's situation: "the primary content aspect of a value is the type of goal or motivational concern that it expresses. (...) values represent, in the form of conscious goals, three universal requirements of human existence to which all individuals and societies must be responsive (...)" [25, p. 4]. Like this, the STBHV links values to two other concrete entities: the explicit goals that values motivate in specific
Figure 1: The three main components of Schwartz’s Theory of Basic Human Values.
situations and the ultimate functions that these goals seek to achieve. The relationships between values, goals and the requirements of human existence they fulfil appear in Figure 1.
We take these motivational goals from the STBHV as the entities that make values operational in a computational context. In other words, goals are the expression of the values that agents pursue, and they capture the meaning of a value in a particular situation or domain. Hence, the encoding of a goal into a software agent serves as a proxy for the value it is motivated by. While values are transcendental, their corresponding goals are _context-dependent_. For example, in a professional context, the value "gender equality" should be grounded as equal recognition for equal work in terms of salary and career advancement. Meanwhile, in a domestic context, the same value is better reflected by the even division of domestic tasks.
Traditionally, the term "goal" in AI denotes a _hard goal_ with a clear-cut definition that is evaluated to either true or false [30]. We believe that such dichotomous goals are not nuanced enough to reflect the complexities of values. Hence, we propose to encode value-motivated goals using fuzzy logic statements that can capture levels of satisfaction on a continuous scale [31, 23]. Furthermore, this approach can also handle the relative importance of values (another feature identified in the STBHV) by assigning _satisfaction thresholds_ to their corresponding goals (analogous to the graded desires defined in [7]), so an agent does not seek to achieve a goal to its fullest, but to a _satisfactory_ extent. Just as value-motivated goals are context-dependent, so is the relative importance of values (i.e. the preference ordering over them) and, as a result, so are the thresholds that annotate their corresponding goals.
For example, if the value _equality_ is considered important in a context of wealth distribution, this value can be grounded as the goal _economic-equality_ and its degree of truthfulness evaluated according to the Gini index computed from the agent's perception of the wealth distribution. If the agent cares deeply about this value, its satisfaction threshold will be large (i.e. it requires a small Gini index), while if equality is low on its list of preferences, its satisfaction threshold will be small (i.e. even a large Gini index will not spark the agent to act).
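A minimal sketch of such a grounded goal is given below; the function names, the mapping from Gini index to a degree of satisfaction, and the threshold value are all illustrative:

```python
import numpy as np

def gini(wealth):
    """Gini index of a wealth distribution (0 means perfect equality)."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = len(w)
    return 2 * np.sum(np.arange(1, n + 1) * w) / (n * w.sum()) - (n + 1) / n

def economic_equality(wealth, threshold=0.8):
    """Fuzzy goal grounding the value `equality` in wealth distribution.

    Returns the degree of satisfaction in [0, 1] and whether it meets
    the agent's satisfaction threshold (no action needed if it does).
    """
    satisfaction = 1.0 - gini(wealth)
    return satisfaction, satisfaction >= threshold
```

For instance, `economic_equality([1, 1, 1, 1])` yields full satisfaction, whereas a highly skewed distribution would fall below the threshold and spark the agent to act.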
Now that we have an encoding of values as goals, we can discuss what we use these goals for. Here again, we take inspiration from the STBHV, which essentially states that values operate as _evaluation devices_ through their goal proxies [25]. These value-guided evaluations can be applied to a variety of constructs, such as actions, plans (i.e. sequences of actions), states, outcomes, or a combination of the above. Therefore, Schwartz's theory does not explicitly commit to a deontological or utilitarian position. In particular, value-motivated goals can also evaluate _norms_, i.e. patterns or directives on behaviour, either through the actions they prescribe/forbid or through the outcomes that their implementation brings about. The relationship between norms and values is central to our proposal and is detailed in Section 3.3.
One point that the STBHV does not address in depth is the _origin of values_. From the universal requirements for human existence that values serve (see Figure 1), one can infer that values are a consequence of evolution, and that
they alleviate the cognitive load of having to continuously think in terms of sheer survival. In the context of artificial software agents, such concerns do not apply. Nonetheless, values, their motivating goals and their relative importance have to originate somewhere. For the time being, however, our proposal remains agnostic with respect to the value elicitation process. We are concerned with the inner operation of value-aware agents, and not, for the moment, with the specifics of _how_ and _from whom_ value-motivated goals are elicited.
### What are norms for value-aware agents?
In this paper, we consider norms at the following level of abstraction: norms, by acting through rewards and punishments, make certain environment transitions, resulting from agent actions, more probable than others. This view of "institutional" norms is rather general and includes "simple" norms, such as those whose consequences are deontic operators (e.g. prohibition, permission, or obligation) over actions when a given pre-condition is satisfied.
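A minimal sketch of this level of abstraction, assuming an illustrative Python representation (the `Norm` class and its fields are not a committed syntax), could be:

```python
# A sketch of a "simple" norm: a deontic operator over an action, guarded by a
# pre-condition, and enforced through a sanction on rewards. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    deontic: str                          # "prohibition" | "obligation" | "permission"
    action: str                           # the regulated action
    precondition: Callable[[dict], bool]  # when the norm applies to a state
    sanction: float                       # penalty shaping the reward signal

    def reward_delta(self, state: dict, action: str) -> float:
        if not self.precondition(state):
            return 0.0
        if self.deontic == "prohibition" and action == self.action:
            return -self.sanction         # punished: prohibited action taken
        if self.deontic == "obligation" and action != self.action:
            return -self.sanction         # punished: obligation skipped
        return 0.0

# Example: hoarding is prohibited whenever inequality is already high.
no_hoarding = Norm("prohibition", "hoard",
                   precondition=lambda s: s.get("gini", 0.0) > 0.4, sanction=5.0)
print(no_hoarding.reward_delta({"gini": 0.5}, "hoard"))  # -5.0
```

By shaping rewards and punishments in this way, a norm makes the sanctioned transitions less probable without regimenting them outright.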
### Connecting Values & Norms
As illustrated above, norms govern behaviour: they incentivize behaviour to go in a particular direction. As such, we argue that norms have a central role as the primary value-promoting mechanisms. Norms can, if carefully designed, facilitate to a large extent the fulfilment of the goals that ground the meaning of values in the environment where the agents operate.
When implementing a new norm (or set of norms) leads to an outcome that is viewed as highly positive with respect to some value, we say that the norm _is aligned with respect to that value_. Hence, the relationship between norms and values is _consequential_ in nature. A norm is not moral in itself; it is so to the extent that the effects it brings about in the society agree with the members' values, represented in the form of goals.
Figure 2 represents the relationship between the two entities in schematic form. At the surface, values and norms form a feedback loop: values legitimise the enforced norms, and norms promote values when enforced. At a more fundamental level, the two are linked by the outcomes that norms steer the system towards and that are favourably evaluated in regard to values.
Figure 2: Relationship between values and norms through outcomes.
Ensuring AI is aligned with human values has been argued to be one of the main concerns for ethical AI [10]. To address this problem, and given the role of norms in driving behaviour, some works on automated norm synthesis have emerged, focusing on selecting norms on the basis of the moral values they support, see e.g. [28, 27]. Another line of work is Value-Sensitive Design (VSD), where morally adequate technical norms are hand-crafted by a human designer, a process that takes place outside the multiagent system.
In our view, it should be the autonomous agents who attempt to align norms towards the values that the human designer has instilled in them. To the best of our knowledge, only [18] has proposed an architecture for achieving the endogenous emergence of prescriptive norms through the participation of the agents. However, we are not aware of any follow-up on that work.
We propose that prescriptive norms, with their explicit representation and syntax, should be handled by the agents populating the system, and evaluated by leveraging their understanding of values (i.e. the goals that values are grounded into) as evaluating devices. The designer, tasked with programming the agents, is not in charge of coding the technical norms directly. He/she is responsible, however, for including the necessary mechanisms that agents can resort to when crafting norms, figuring out the most probable outcomes they lead to, and ethically evaluating them.
The agents populating the system are instilled with values by a human who grounds their meaning as persistent goals, so that humans have complete control over the meaning of values. Every agent can be provided with its own, potentially different, version of grounding goals for the same value, so different agents might have conflicting value-grounding goals. Norms, on the other hand, are designed for the collective, with the objective of mediating the behaviour of the agent society as a whole. As illustrated above, our stance is that agents are responsible for aligning norms with the values that the human designer has instilled in them.
Agreement technologies could be applied here, making use of computational social choice, argumentation, and negotiation mechanisms [6]. Of course, this may result in collective norms that conflict with some individual values, while remaining representative of the values in the system overall.
The degree of value alignment of a norm can be assessed by the agent either analytically or via simulations. The analytical approach involves formal reasoning to analyse the outcome of the behaviour induced by the norm. The alternative approach is to run simulations that allow the agent to observe the outcome of norms. In both cases, the objective is to evaluate to what extent the value-grounding goals are satisfied in that outcome, which represents the degree of value alignment of the norm.
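As a hedged sketch of the simulation-based route, the snippet below averages the fuzzy satisfaction of the value-grounding goals over simulated outcomes; `simulate` is a placeholder assumption standing in for any agent-based model of the system, and goals are assumed to expose the `degree` interface from the earlier sketch:

```python
# A minimal sketch: the degree of value alignment of a norm set is the average
# satisfaction of the value-grounding goals in the outcomes it brings about.
def alignment_degree(norms, goals, simulate, episodes=100):
    total = 0.0
    for _ in range(episodes):
        outcome = simulate(norms)                 # final state under the norms
        total += sum(g.degree(outcome) for g in goals) / len(goals)
    return total / episodes                       # in [0, 1]
```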
Our proposal, in summary, discusses how agents can collectively select the norms governing their interactions in such a way that ensures value-aligned behaviour. This is achieved through the agent's capability to reason about the value-alignment of norms, and hence, morally relating norms or sets of norms with values. This can be understood as empowering agents by making them value-aware. The capability of analysing norms from a moral perspective results
in making value-aware decisions when creating, selecting, or combining norms, or even when deciding to abide by or break them. This is the essential claim of this paper.
## 4 Conclusion
Integrating ethical values in artificial agents becomes a necessity as the autonomy of these agents increases. Current approaches to constructing artificial moral agents are based on implementing a specific moral theory or (typically reinforcement) learning moral behaviour. Ethical values are used as a syntactical construct, not as a part of the agent reasoning process. Specifically, values are used as labels that human programmers assign to particular options or plans. Alternatively, values have also been used to set human-given constraints, or regimented norms, on the available artificial agent actions, limiting the behaviour of the agent, as in [9, 4]. In addition, artificial moral agent "design" typically also omits the social aspects of agency [19, 29], namely that agents exist in environments that are governed by laws and norms, which they share with others who are also moral agents.
We propose a change in the agent paradigm that allows us to specify values as semantic constructs that can not only be used in the agent reasoning process, but also be used to adjust the norms of the shared environment. In addition to reasoning about norms, we also aspire to agents that, being value-aware, can reason about their own actions and about when to follow or break norms.
What we have not discussed in the scope of this paper is the embedding of agents with value-enriched theory of mind. A value-enriched theory of mind would allow an agent to directly reason about other agents' behaviour, which would in turn lead to more value-aware interactions. In our proposed paradigm, the agent reasons only about norms, which are influenced by the values of the other agents. However, we leave the discussion of reasoning about agent actions to another paper.
|
2306.15599 | Coupling a Recurrent Neural Network to SPAD TCSPC Systems for Real-time
Fluorescence Lifetime Imaging | Fluorescence lifetime imaging (FLI) has been receiving increased attention in
recent years as a powerful diagnostic technique in biological and medical
research. However, existing FLI systems often suffer from a tradeoff between
processing speed, accuracy, and robustness. In this paper, we propose a robust
approach that enables fast FLI with no degradation of accuracy. The approach is
based on a SPAD TCSPC system coupled to a recurrent neural network (RNN) that
accurately estimates the fluorescence lifetime directly from raw timestamps
without building histograms, thereby drastically reducing transfer data volumes
and hardware resource utilization, thus enabling FLI acquisition at video rate.
We train two variants of the RNN on a synthetic dataset and compare the results
to those obtained using center-of-mass method (CMM) and least squares fitting
(LS fitting). Results demonstrate that two RNN variants, gated recurrent unit
(GRU) and long short-term memory (LSTM), are comparable to CMM and LS fitting
in terms of accuracy, while outperforming them in background noise by a large
margin. To explore the ultimate limits of the approach, we derived the
Cramer-Rao lower bound of the measurement, showing that RNN yields lifetime
estimations with near-optimal precision. Moreover, our FLI model, which is
purely trained on synthetic datasets, works well with never-seen-before,
real-world data. To demonstrate real-time operation, we have built a FLI
microscope based on Piccolo, a 32x32 SPAD sensor developed in our lab. Four
quantized GRU cores, capable of processing up to 4 million photons per second,
are deployed on a Xilinx Kintex-7 FPGA. Powered by the GRU, the FLI setup can
retrieve real-time fluorescence lifetime images at up to 10 frames per second.
The proposed FLI system is promising and ideally suited for biomedical
applications. | Yang Lin, Paul Mos, Andrei Ardelean, Claudio Bruschini, Edoardo Charbon | 2023-06-27T16:37:37Z | http://arxiv.org/abs/2306.15599v2 | Coupling a Recurrent Neural Network to SPAD TCSPC Systems for Real-time Fluorescence Lifetime Imaging
###### Abstract
Fluorescence lifetime imaging (FLI) has been receiving increased attention in recent years as a powerful diagnostic technique in biological and medical research. However, existing FLI systems often suffer from a tradeoff between processing speed, accuracy, and robustness. In this paper, we propose a robust approach that enables fast FLI with no degradation of accuracy. The approach is based on a SPAD TCSPC system coupled to a recurrent neural network (RNN) that accurately estimates the fluorescence lifetime directly from raw timestamps without building histograms, thereby drastically reducing transfer data volumes and hardware resource utilization, thus enabling FLI acquisition at video rate. We train two variants of the RNN on a synthetic dataset and compare the results to those obtained using center-of-mass method (CMM) and least squares fitting (LS fitting). Results demonstrate that two RNN variants, gated recurrent unit (GRU) and long short-term memory (LSTM), are comparable to CMM and LS fitting in terms of accuracy, while outperforming them in background noise by a large margin. To explore the ultimate limits of the approach, we derived the Cramer-Rao lower bound of the measurement, showing that RNN yields lifetime estimations with near-optimal precision. Moreover, our FLI model, which is purely trained on synthetic datasets, works well with never-seen-before, real-world data. To demonstrate real-time operation, we have built a FLI microscope based on Piccolo, a 32x32 SPAD sensor developed in our lab. Four quantized GRU cores, capable of processing up to 4 million photons per second, are deployed on a Xilinx Kintex-7 FPGA. Powered by the GRU, the FLI setup can retrieve real-time fluorescence lifetime images at up to 10 frames per second. The proposed FLI system is promising and ideally suited for biomedical applications, including biological imaging, biomedical diagnostics, and fluorescence-assisted surgery, etc.
FLIM SPAD Neural network
## Introduction
Fluorescence lifetime imaging (FLI) is an imaging technique for the characterization of molecules based on the time they take to decay from an excited state to the ground state [1]. Compared with fluorescence intensity imaging, FLI is insensitive to excitation intensity fluctuations, variable probe concentration, and limited photobleaching. Besides, through the appropriate use of targeted fluorophores, FLI is able to quantitatively measure the parameters of the microenvironment around fluorescent molecules, such as pH, viscosity, and ion concentrations[2, 3]. With these advantages, FLI has wide applications in the biological sciences, for example to monitor protein-protein interactions[4], and plays an increasing role in medical and clinical settings such as visualization of tumor margins[5], cancerous tissue detection[1, 6], and computer-assisted robotic surgery[7, 8].
Time-correlated single-photon counting (TCSPC) is popular among FLI systems due to its superiority over other techniques in terms of time resolution, dynamic range, and robustness. In TCSPC, one records the arrival time of individual photons emitted by molecules upon photoexcitation [9, 10, 11]. After repeated measurements, one can construct a histogram of photon arrivals, which closely matches the true response of the molecules, thus enabling the extraction of FLI, as shown in Figure 1. The instrumentation of a typical TCSPC FLI system features a confocal setup, including a single-photon detector, a dedicated TCSPC module for time tagging, and a PC for lifetime estimation[12, 9]. Such systems are mostly unsuitable for rising clinical applications such as non-invasive monitoring, where a miniaturized and fast TCSPC system is desired [13]. Besides, the large amount of data generated by TCSPC places a heavy burden on data transfer, data storage, and data processing. A powerful PC, sometimes equipped with dedicated GPUs, is required to acquire and process TCSPC data. TCSPC requires photodetectors with picosecond time resolution and single-photon detection capability. In the last decade, single-photon avalanche diodes (SPADs) have been used successfully in TCSPC systems and, with the advent of CMOS SPADs, the expansion of these detectors into high-resolution image sensors for widefield imaging has been accomplished successfully [14]. Several reviews of the use of SPADs in biophotonics have recently appeared [15, 16, 17].
Least-squares (LS) fitting and maximum likelihood estimation (MLE) are widely used for fluorescence lifetime estimation[18, 19, 20]. These two methods rely on iterations to achieve high accuracy, but they are time-consuming because they are computationally expensive. Various non-fitting methods have been proposed to tackle these problems but often compromise on other specifications; a typical one is the Center-of-Mass method (CMM). CMM is a simple, fast, and photon-efficient alternative, which has already been applied in some real-time FLI systems[21, 22, 23]. However, it is very sensitive to background noise, and the estimation is biased due to the use of truncated histograms[24].
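For concreteness, a minimal sketch of CMM under idealized assumptions (mono-exponential decay, negligible background, a repetition period much longer than the lifetime, and a known IRF offset `t0`) is:

```python
# CMM reduces to the mean photon arrival time minus the IRF offset, since
# E[t] = t0 + tau for a shifted exponential decay. Idealized sketch only.
import numpy as np

def cmm_lifetime(timestamps, t0=0.0):
    return np.mean(timestamps) - t0

rng = np.random.default_rng(0)
ts = rng.exponential(2.5, size=1024)   # ideal arrivals with tau = 2.5 ns
print(cmm_lifetime(ts))                # close to 2.5
```

Any background photons, being uniform over the repetition period, pull this mean toward half the period, which is exactly the noise sensitivity noted above.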
Neural networks provide a new path to fast fluorescence lifetime extraction[25]. The first neural network-based model for fluorescence lifetime estimation was proposed in 2016, where higher accuracy and faster processing than LS fitting were reported[26]. Since then, several neural network architectures, including fully connected neural network (FCNN), convolutional neural network (CNN), and generative adversarial network (GAN) solutions have been explored for this end[27, 28, 29, 30, 31]. These techniques showed the ability to resolve multi-exponential decays and achieve accurate and fast estimation even in low photon-count scenarios. Apart from fluorescence lifetime determination, these neural networks can extract high-level features and can be integrated into a large-scale neural network for end-to-end lifetime image analysis such as cancerous tissue margin detection[32] and microglia detection[33].
In this work, we propose to adopt the paradigm of edge artificial intelligence (Edge AI), constructing a recurrent neural network (RNN)-coupled SPAD TCSPC system for real-time FLI. We train and test variants of RNNs for lifetime estimation and deploy them on an FPGA to realize event-driven and near-sensor processing. The working principle is illustrated in Figure 1. Upon the arrival of photons, each timestamp is processed by the RNN directly, without histogramming. From photon detection to lifetime estimation, the whole system is integrated into a miniaturized device, which achieves reduced data transfer rates. With the flexibility to retrain neural networks, the same system can be easily reused for other, very different applications, such as classification.
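A minimal sketch of this event-driven processing, written in PyTorch with illustrative layer sizes, is:

```python
# Each raw timestamp updates the hidden state of a single-input GRU as it
# arrives, so no histogram is ever built; the lifetime is read out on demand.
import torch
import torch.nn as nn

gru = nn.GRUCell(input_size=1, hidden_size=32)
head = nn.Linear(32, 1)                 # small readout predicting the lifetime

h = torch.zeros(1, 32)                  # hidden state for one pixel
for t in torch.rand(1024):              # stand-in stream of photon timestamps
    h = gru(t.view(1, 1), h)            # update memory upon photon arrival
lifetime = head(h)                      # read out after integration
```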
## Results
The proposed system comprises a SPAD image sensor with timestamping capability coupled to an FPGA for implementation of neural networks _in situ_. In this section we describe the utilized RNN, its training, and the achieved results. We also describe the theoretical bounds on achievable precision that were derived to contextualize the results obtained with the RNNs.
### RNNs Trained on Synthetic Datasets and Performance
We train and evaluate RNNs on synthetic datasets. Three RNN variants, namely simple RNN, gated recurrent unit (GRU)[34], and long short-term memory (LSTM)[35], are adopted. Each variant is constructed with 8, 16, and 32 hidden units. LS fitting and CMM are also benchmarked. The metrics used for evaluation are the root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). A common lifetime range, from 0.2 to 5 ns, is covered here, and we assume a laser repetition frequency of 20 MHz.
We start with a simple situation in which background noise is absent. All the tests are run on a PC with 32-bit floating point (32FP) precision. The results are presented in Table 1. One can observe that CMM achieves the lowest MAE and MAPE, that GRU-32 achieves the lowest RMSE, and that GRU-32 and LSTM-32 have very similar performance to CMM. The performance of CMM itself is understandable. In this case, background noise is not considered, and the repetition period is 10 times the longest lifetime. Under these conditions, CMM is very close to the maximum likelihood estimator of the lifetime. When comparing Simple RNN, GRU, and LSTM, one can observe that GRU
outperforms LSTM by a small margin, and both of them perform much better than Simple RNN. As we can see, with the decrease in model size, errors increase accordingly.
Background noise is often inevitable during fluorescence lifetime imaging, especially in diagnostic and clinical setups where the interruption to existing workflows is supposed to be minimized[13]. In our FLI system, it is estimated that at least 1% of the collected timestamps are from background noise. Therefore, we study the performance of each method under varying background noise levels. For simplicity, only LSTM-32 is used to compare with benchmarks. LSTM-32
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & RMSE & MAE & MAPE \\ \hline LS Fitting & 0.1642 & 0.1201 & 0.0553 \\ CMM & **0.0915** & **0.0642** & **0.0250** \\ Simple RNN-8 & 0.2516 & 0.1979 & 0.0969 \\ Simple RNN-16 & 0.2396 & 0.1798 & 0.0771 \\ Simple RNN-32 & 0.1877 & 0.1415 & 0.0659 \\ GRU-8 & 0.0957 & 0.0695 & 0.0297 \\ GRU-16 & 0.0928 & 0.0666 & 0.0274 \\ GRU-32 & **0.0908** & **0.0647** & **0.0261** \\ LSTM-8 & 0.0981 & 0.0720 & 0.0423 \\ LSTM-16 & 0.0928 & 0.0669 & 0.0277 \\ LSTM-32 & 0.0916 & 0.0656 & 0.0267 \\ \hline \hline \end{tabular}
\end{table}
Table 1: RNN models are trained and tested on a synthetic dataset, where the fluorescence decay model is mono-exponential, lifetime ranges from 0.2 and 5 ns, laser repetition frequency is 20 MHz, and background noise is not considered. Their performance is benchmarked against Least-square (LS) fitting and Center-of-Mass method (CMM). RMSE: root mean squared error, MAE: mean absolute error, MAPE: mean absolute percentage error.
Figure 1: In a traditional TCSPC FLI system, the sample is excited by a laser repeatedly, and the emission photons are detected and time-tagged. A histogram is gradually built from these timestamps, from which the lifetime can be extracted after the acquisition is completed. In our proposed system, upon the receipt of a photon, the timestamp is fed into the RNN immediately. The RNN updates the hidden state accordingly and idles until the next photon. The schematic and formula of the simple RNN are shown here. At timestep \(n\), the RNN takes the current information \(x_{n}\) and the past information \(h_{n-1}\) as input, then updates the memory to the current information \(h_{n}\) and gives out a prediction \(y_{n}\).
is trained on a synthetic dataset, where 0 to 10% uniform background noise is added to the samples randomly. Here we also illustrate the result of CMM with background noise subtraction, assuming that the number of photons from background noise is known, though it is often not the case in real-time FLI systems. Two synthetic datasets are built for evaluation, where the background noise ratios are 1% (SNR=20dB) and 5% (SNR=12.8dB), respectively. The results are presented in Table 2. We can see that LSTM-32 outperforms other methods in all metrics and scenarios. Combined with Table 1, one can observe that errors increase when the background noise increases for all the methods. However, LSTM and LS fitting are more robust to background noise, while CMM is extremely sensitive to it. This finding is in agreement with previous studies[36, 1].
### Cramer-Rao Lower Bound Analysis
To compare the performance with the theoretical optima, the Cramer-Rao lower bound (CRLB) on the precision of the lifetime estimate is calculated with open-source software [36], given the setting parameters. The variance of the lifetime estimation methods is calculated from Monte Carlo experiments. For CMM and RNN, 3000 samples are used; for the least-squares method, 1000 samples are used to reduce running time.
The relationship between lifetime and the relative standard deviation of the different estimators is shown in Figure 2a, where the photon count is 1024. One can observe that the variance of CMM and LSTM-32 almost reaches the CRLB, which suggests that CMM and LSTM-32 are near-optimal estimators. Considering that the laser repetition period is much longer than the lifetime and that background noise is not included, it is understandable that CMM reaches the CRLB, since it is approximately a maximum likelihood estimator. LS fitting performs worse than CMM and LSTM-32, which is likely due to the underlying assumption of Gaussian errors.
The relationship between the number of photons and the relative standard deviation of the different estimators is shown in Figure 2b, where the lifetime is set at 2.5 ns. Similar to Figure 2a, the relative standard deviations of CMM and LSTM-32 almost reach the CRLB, while the least-squares fitting performs worse. This result suggests that CMM and LSTM-32 are efficient estimators over different photon inputs, achieving excellent photon efficiency. They only need less than half of the data to obtain results similar to LS fitting.
We also analyze the CRLB with background noise. The results are shown in Figure 2. Comparing Figure 2c and Figure 2e with Figure 2a, we can see the CRLB is raised slightly in the presence of background noise. The relative standard deviation of LS fitting stays almost unchanged, and that of LSTM-32 increases slightly but is still much better than LS fitting. As for CMM, one can see that the relative standard deviation increases dramatically at shorter lifetimes, which suggests that CMM is very sensitive to background noise for short lifetimes. By comparing Figure 2d and Figure 2f with Figure 2b, we find that the relative standard deviation does not vary with 1% background noise. With 5% background noise, however, CMM shows a clear degradation of performance, its relative standard deviation approaching that of LS fitting.
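As a sanity check on the Monte Carlo comparisons above: in the idealized 0% background case (no IRF, repetition period much longer than the lifetime), the CRLB on the relative standard deviation of an unbiased lifetime estimator is \(1/\sqrt{n}\), and CMM, being the sample mean, attains it:

```python
# Idealized sketch: 3000 Monte Carlo trials of CMM with 1024 photons each.
import numpy as np

rng = np.random.default_rng(1)
tau, n_photons, n_trials = 2.5, 1024, 3000
est = rng.exponential(tau, size=(n_trials, n_photons)).mean(axis=1)  # CMM
print(est.std() / tau, 1 / np.sqrt(n_photons))  # both ~0.031
```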
### Performance on Experimental Dataset
To verify the performance of the RNNs, which are purely trained on synthetic datasets, on real-world data, the RNNs are tested on experimental data, with CMM and LS fitting as benchmarks. We prepare a fluorescence lifetime-encoded microbeads sample and acquire the TCSPC data with a commercial confocal FLIM setup. It is estimated that the background noise is below 1%. The LSTM-32 trained on the dataset with 0% to 10% background noise is used. The corresponding results are shown in Figure 3.
The histograms of the three samples share a similar shape. As for LS fitting, an instrument response function (IRF) is estimated from histograms of all pixels and then shared among them, which accounts for its good performance
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{1\% Background Noise} & \multicolumn{3}{c}{5\% Background Noise} \\ & RMSE & MAE & MAPE & RMSE & MAE & MAPE \\ \hline LS fitting & 0.1678 & 0.1226 & 0.0562 & 0.1883 & 0.1368 & 0.0609 \\ CMM & 0.2367 & 0.2168 & 0.1577 & 1.0742 & 1.0635 & 0.7799 \\ CMM* & 0.1099 & 0.0839 & 0.0456 & 0.2476 & 0.2128 & 0.1444 \\ LSTM-32 & **0.1019** & **0.0733** & **0.0304** & **0.1097** & **0.0784** & **0.0323** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of LS fitting, CMM, CMM with background subtraction, and LSTM-32 in the presence of 1% and 5% background noise. *CMM with background noise subtraction. LSTM-32 is trained on a dataset including 0% to 10% random background noise for generalization in different scenarios.
Figure 2: Cramer-Rao lower bound analysis when including 0%, 1%, and 5% background noise levels.
here. The result of CMM has a 2 ns bias, which is corrected by the estimated IRF. It is worth noting that for the first peak in the histogram, LSTM shows a sharper Gaussian shape, which confirms LSTM's good performance under low fluorescence intensity and short lifetime.
### Real-time FLIM Setup with on-FPGA RNN
We further built a real-time FLIM system by utilizing a SPAD array sensor with on-chip time-to-digital converters (TDCs) and deploying the aforementioned RNNs on an FPGA for near-sensor processing. The schematic of our setup is shown in Figure 4. The 32\(\times\)32 Piccolo SPAD sensor developed at EPFL [37, 38] is utilized, on which 128 TDCs offer 50 ps temporal resolution. The sensor is controlled by a Kintex-7 FPGA, where four GRU cores are implemented for lifetime estimation. The four GRU cores are able to process up to 4 million photons per second. While the data transfer rate to the PC is 20 Mb/s for histogram mode and 80 Mb/s for raw mode, it reduces to only 240 kb/s when applying the proposed RNN-based lifetime estimation method.
We prepare a sample containing fluorescent beads with a lifetime of 5.5 ns. The sample is measured by our system in real-time at 5 frames per second. During the imaging, we move the sample plate forward to observe the movement of beads in the images. The result is shown in Figure 5. The lifetime images are also displayed in rainbow scale. The average photon count for the beads is around 500 per pixel. This illustrates that our system can capture the movement of beads and provide an accurate lifetime estimation. One can also observe that there are some outliers, e.g. dark blue dots and red dots among the green beads. Apart from statistical fluctuations, RNN-based lifetimes tend to be lower when there are not enough photons, which explains why the blue dots are mostly darker than the surrounding pixels.
## Discussion
The proposed on-FPGA RNN removes the need for histogramming altogether by taking raw timestamps directly as input, which frees hardware resources on the FPGA or PC and significantly reduces the burden on data transfer and data processing. The analysis of synthetic data and the CRLB shows that the RNN, as a data-driven method, reaches excellent accuracy and robustness compared to its competitors, while achieving higher photon efficiency.
The performance of the system can be further improved by using a larger SPAD sensor and by accommodating more RNN cores on the FPGA. A more powerful FPGA or even dedicated neural network accelerators can be used to accommodate more RNN cores. More efficient quantization and approximate schemes can also be explored to reduce resource utilization and latency. In addition, these GRU cores can be further optimized by VHDL implementation. In
Figure 3: Comparison of LSTM, CMM, and LS Fitting on experimental data. The sample contains a mixture of fluorescent beads with three different lifetimes (1.7, 2.7, and 5.5 ns). The fluorescence lifetime images are displayed using a rainbow scale, where the brightness represents photon counts and the hue represents lifetimes. The lifetime histograms among all pixels are shown below. Most pixels are assumed to contain mono-exponential fluorophores. Two or three lifetimes might be mixed at the edge of the beads.
Figure 4: Real-time FLIM system based on the Piccolo 32\(\times\)32 SPAD sensor and on-FPGA RNNs. The main body of the microscope is from a single-channel Cerna® Confocal Microscope System (ThorLabs, Newton, New Jersey, United States). On the top is the Piccolo system, composed of the SPAD sensor itself, motherboard, breakout board, and FPGA. The SPAD sensor has 32\(\times\)32 SPADs and 128 on-chip TDCs, offering 50 ps temporal resolution. The FPGA is programmed to control the SPAD sensor and communicate with PC through USB 3. The RNN is also deployed on the same FPGA.
Figure 5: Real-time lifetime image sequence from our FLIM system. The sample contains fluorescent beads with a 5.5 ns reference lifetime. (See the full video in the Supplementary Material)
the future, the RNN cores could be implemented on ASIC and stacked together with SPAD arrays by means of 3-D stacking technology, realizing in-sensor processing[39].
Though the proposed system has only been used for FLI to date, it can be easily adapted for other applications by retraining the RNN. It can be further combined with other large-scale neural networks for high-level applications, where the output of the RNN consists of high-level features learned automatically by the neural network and serves as input for other neural networks. Existing FLI-based high-level applications such as margin assessment[5, 32] could also be directly incorporated into our system.
## Methods
### Dataset
#### Synthetic Dataset
A simulation that captures the features of the real scene well is the key to constructing synthetic datasets. To accurately model a real FLI system, we take fluorescence decay, instrument response, background noise, and dark counts into account. The latter two are often neglected in previous studies. However, several settings, such as fluorescence-assisted surgery, exhibit strong background noise that cannot simply be ignored. Unlike existing NN-based methods, which take histograms as input, we generate synthetic datasets at the timestamp level. Assuming that at most one photon reaches the detector in every repetition period (i.e. the pile-up effect is not considered), the timestamps \(t\), namely the arrival times of photons, are modeled as:
\[t=\sum_{i=1}^{N-1}\mathbf{1}_{k=i}(t_{fluo_{i}}+t_{irf})+\mathbf{1}_{k=N}t_{bg}, \tag{1}\]
where \(\mathbf{1}\) is the indicator function, \(k\) is the component indicator, \(t_{fluo}\) is the fluorescence time delay, \(t_{irf}\) is the instrument response time delay, and \(t_{bg}\) is the arrival time of background noise or dark counts.
The component indicator \(k\) is a random variable with categorical distribution, representing the source of the incoming photon, which can be either a component of the fluorescence decay or background noise. The probability density function (PDF) of \(k\) is
\[f(k|\mathbf{p})=\prod_{i=1}^{N}p_{i}^{\mathbf{1}_{k=i}}, \tag{2}\]
where \(p_{i}\) represents the normalized intensity of fluorescence or background noise.
The fluorescence time delay \(t_{fluo_{i}}\) is subject to an exponential distribution. Its PDF is:
\[f(t_{fluo_{i}}|\tau_{i})=\frac{1}{\tau_{i}}e^{-\frac{t_{fluo_{i}}}{\tau_{i}}}, \tag{3}\]
where \(\tau_{i}\) is the lifetime of the fluorescence decay.
The instrument response time delay \(t_{irf}\) is subject to a Gaussian distribution. Its PDF is:
\[f(t_{irf}|t_{0},\sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac {t_{irf}-t_{0}}{\sigma}\right)^{2}}, \tag{4}\]
where \(t_{0}\) is the peak position, and \(\sigma\) can be represented by full width at half maximum(FWHM):
\[\sigma=\frac{FWHM}{2\sqrt{2\ln 2}}. \tag{5}\]
The time of arrival of background noise \(t_{bg}\) is subject to a uniform distribution. Its PDF is:
\[f(t_{bg}|T)=\frac{1}{T}, \tag{6}\]
where \(T\) is the repetition period.
Given a set of the above parameters, synthetic datasets can be generated with different lifetime ranges, background noise ratios, and components of fluorescence. In this work, the FWHM is assumed to be 167.3 ps, in accordance with previous studies[29, 30]; \(t_{0}\) for each sample is generated from a uniform distribution from 0 to 5 ns. To train models in the presence of background noise, \(p_{N}\) for each sample is generated from a uniform distribution from 0 to \(10\%\). Each dataset contains 500,000 samples, and each sample contains 1024 timestamps.
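A sketch of this generative model (Eqs. (1)-(6)), with a fixed rather than sampled \(t_{0}\) for brevity, is:

```python
# Each timestamp is either a fluorescence photon (exponential delay plus a
# Gaussian IRF delay) or uniform background, chosen by the indicator k.
import numpy as np

def simulate_timestamps(n, taus, probs, t0, fwhm, T, rng):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))       # Eq. (5)
    k = rng.choice(len(probs), size=n, p=probs)       # component indicator
    t = rng.uniform(0, T, size=n)                     # default: background
    for i, tau in enumerate(taus):                    # fluorescence components
        m = k == i
        t[m] = rng.exponential(tau, m.sum()) + rng.normal(t0, sigma, m.sum())
    return t

rng = np.random.default_rng(0)
# Mono-exponential decay, 5% background, 20 MHz laser (T = 50 ns).
ts = simulate_timestamps(1024, taus=[2.5], probs=[0.95, 0.05],
                         t0=2.0, fwhm=0.1673, T=50.0, rng=rng)
```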
#### Experimental Dataset
Testing the model, which is purely trained on synthetic data, on experimental data is essential to ensure its applicability in real-world scenarios; thus an experimental dataset is curated for evaluation. Fluorescent beads from PolyAn with reference lifetimes of 1.7, 2.7, and 5.5 ns are adopted as samples. The beads are made of 3D-carboxy, with a diameter of 6.5 \(\mu\)m. The excitation wavelength is around 488 nm and the emission spectra are 602-800 nm, 545-800 nm, and 559-718 nm, respectively. The fluorescence intensities of these three beads are in an approximate ratio of 1:2:5. Fluorescent beads with different lifetimes are mixed together in all possible combinations, and put in a 384-well plate for imaging.
A commercial FLIM system, available at the Bioimaging and Optics Platform (BIOP) of EPFL, is utilized to measure the sample and acquire the experimental data. A confocal microscope (Leica SP 8, w/ HyD SMD detector) is used for imaging, a super-continuum laser (NKT Photonics, Superk Extreme EXW-45) is used for illumination, and a TCSPC module (PicoHarp 300 TCSPC) is used for time-tagging. The sample is excited under a 20 MHz laser, corresponding to a repetition period of 50 ns. The excitation wavelength is 486 nm and the spectrum of the emission filter ranges from 600 to 700 nm. The temporal resolution of time-tagging is 16 ps.
### Neural Network
The neural network is first built, trained, and evaluated on the PC with PyTorch[40]. Then its weights are quantized and the activation functions are approximated. After that, the neural network is written in C/C++, loading the quantized weights and approximated activation functions, and is further translated into hardware description language (HDL) by Vitis High-level Synthesis (HLS).
#### Model
Three RNN variants are adopted here: simple RNN, GRU, and LSTM. The default models in PyTorch are used, with the input size set to 1 to accommodate the timestamps. Considering the hardware limitations, only single-layer RNNs are considered. The hidden sizes range from 8 to 64. Since the timestamps are processed in real-time and are not stored, bidirectional RNNs cannot be used. An FCNN with one hidden layer takes the hidden state as input to predict the lifetime.
#### Training
Normally, the loss function for RNNs is built on the output of the last timestep or the average output of all timesteps. In fluorescence lifetime estimation, the performance of estimators is expected to improve with more photons. Following this principle, we design a weighted mean squared percentage error (MSPE) function, assigning more importance to later timesteps:
\[L(\mathbf{y},\mathbf{\hat{y}})=\sum_{i=1}^{N}w_{i}\left(\frac{y_{i}-\hat{y}_{ i}}{y_{i}}\right)^{2}, \tag{7}\]
where \(N\) is the number of timesteps, \(\mathbf{y}\) is the ground truth, \(\mathbf{\hat{y}}\) the prediction, and \(w_{i}\) the weight at timestep \(i\):
\[w_{i}=\frac{1}{1+e^{-\left(\frac{i-N/4}{N/4}\right)}}. \tag{8}\]
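A sketch of this loss (Eqs. (7)-(8)) in PyTorch, assuming per-timestep predictions of shape batch \(\times\) \(N\) and ground-truth lifetimes of shape batch \(\times\) 1, is:

```python
import torch

def weighted_mspe(y_true, y_pred):
    # w_i is a sigmoid ramp so later timesteps (more photons) weigh more, Eq. (8)
    N = y_pred.shape[1]
    i = torch.arange(1, N + 1, dtype=y_pred.dtype)
    w = torch.sigmoid((i - N / 4) / (N / 4))
    return (w * ((y_true - y_pred) / y_true) ** 2).sum(dim=1).mean()
```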
The weights for hidden states are initialized by an orthogonal matrix. All biases are initialized with 0s. For LSTM, the weights for cell states are initialized by Xavier initialization [41], and the bias for forget gates is initialized with 1s.
The dataset is randomly split into training, evaluation, and test set, with the ratio of sizes being 8:1:1. The batch size is 32. Adam optimizer is used with an initial learning rate of 0.001[42]. The learning rate decays every 5 epochs at the rate of 0.9. The whole training process takes 100 epochs.
#### Evaluation
Three metrics are used to evaluate the performance of RNNs and benchmarks on synthetic data, which are:
\[\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}}, \tag{9}\]
\[\mathrm{MAE}=\frac{\sum_{i=1}^{N}\left|y_{i}-\hat{y}_{i}\right|}{N}, \tag{10}\]
\[\mathrm{MAPE}=\frac{\sum_{i=1}^{N}\left|\frac{y_{i}-\hat{y}_{i}}{y_{i}}\right|}{N}. \tag{11}\]
#### Cramer-Rao Lower Bound
The Cramer-Rao lower bound (CRLB) gives the best precision that can be achieved in the estimation of the fluorescence lifetime[43, 44, 36]. Mathematically, the CRLB expresses a lower bound on the variance of any unbiased estimator, given by the inverse of the Fisher information:
\[Var(\hat{\theta})\geq\frac{1}{\mathcal{J}(\theta)}, \tag{12}\]
where \(f(x;\theta)\) is the PDF and \(\mathcal{J}\) is the Fisher information, which is defined as:
\[\mathcal{J}(\theta)=nE_{\theta}\left[\left(\frac{\partial}{\partial\theta} \ln f(x;\theta)\right)^{2}\right]. \tag{13}\]
The CRLB is calculated with open-source software[36].
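In the idealized untruncated mono-exponential case, the per-photon Fisher information in Eq. (13) reduces to \(1/\tau^{2}\), so the CRLB standard deviation is \(\tau/\sqrt{n}\); the sketch below verifies this numerically:

```python
# Score function of f(t; tau) = (1/tau) exp(-t/tau) is (t - tau) / tau^2.
import numpy as np

rng = np.random.default_rng(2)
tau, n = 2.5, 1024
t = rng.exponential(tau, size=200_000)
J_single = np.mean(((t - tau) / tau**2) ** 2)         # ~ 1 / tau^2 per photon
print(np.sqrt(1 / (n * J_single)), tau / np.sqrt(n))  # both ~0.078 ns
```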
### FPGA Implementation
Quantization is an effective way to reduce resource utilization and latency on hardware. In common deep learning frameworks, such as PyTorch or TensorFlow, model weights and activations are represented by 32-bit floating-point numbers. However, it would be inefficient to perform operations on floating-point numbers of such bitwidth. We aim to replace the 32-bit floating-point numbers with fixed-point numbers and to reduce the bitwidth as much as possible, while maintaining the same model behavior.
Both PyTorch and TensorFlow provide quantization tools for edge devices, namely PyTorch Quantization and TensorFlow Lite. However, the quantized models rely on their own libraries to run, and the quantized weights cannot be exported. Therefore, we use Python and an open-source fixed-point number library to realize a quantized GRU for evaluation. We compare 8-bit, 16-bit, and 32-bit fixed-point numbers to quantize weights and activations separately. The results show that the weights can be quantized to 16-bit fixed-point numbers without a significant accuracy drop, and to 8-bit fixed-point numbers with an acceptable accuracy drop. Activations can be quantized to 16-bit fixed-point numbers without a significant accuracy drop, but 8-bit fixed-point quantization causes the model to collapse. Besides the fixed-point precision, we find that the rounding method has a great impact on performance. Truncation, often the default rounding method, introduces larger errors. Fixed-point numbers with convergent rounding have almost the same behavior as floating-point numbers.
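A minimal sketch of this quantization scheme (the bit allocations are illustrative; NumPy's `round` implements round-half-to-even, i.e. convergent rounding) is:

```python
# Map values onto a signed fixed-point grid with convergent rounding.
import numpy as np

def quantize(x, total_bits=16, frac_bits=8):
    scale = 2.0 ** frac_bits
    q = np.round(x * scale)                               # round half to even
    lo, hi = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
    return np.clip(q, lo, hi) / scale

w = np.random.default_rng(3).normal(size=5)
print(quantize(w, 16, 8))   # close to w
print(quantize(w, 8, 4))    # coarser grid, larger error
```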
The quantized GRU model is then implemented on the FPGA. For convenience, the GRU is written in C++ and compiled to a Vivado IP with Vitis HLS. The whole model is divided into two parts: a GRU core and an FCNN. The GRU core is designed to be shared among a group of pixels, and the FCNN is run sequentially for each pixel after integration. Upon receiving a timestamp, the GRU core loads hidden states from block RAMs (BRAMs), updates the hidden states, and sends them back to BRAM. After the integration of each repetition period, the FCNN loads the hidden state from BRAM and streams the estimated lifetime to a FIFO.
### Experimental Setup
A real-time FLI microscopy (FLIM) system with our SPAD sensor and on-FPGA RNN is built, as shown in Figure 4. The microscope is adapted from a single-channel Cerna® Confocal Microscope System, though it is only used for widefield imaging in this work. The same fluorescent bead samples are measured, hence a 480 nm pulsed laser (PicoQuant) is utilized. A set of filters is adopted for fluorescence imaging. The excitation filter (Thorlabs FITC Excitation Filter) has a central wavelength of 475 nm with a bandwidth of 35 nm. The emission filter is a long-pass filter (Thorlabs Ø25.0 mm Premium Longpass Filter) with a cut-on wavelength of 600 nm. The dichroic filter (Thorlabs GFP Dichroic Filter) has a reflection band from 452 to 490 nm and a transmission band from 505 nm to 800 nm.
The Piccolo system is used for single-photon detection and time tagging[38]. The complete system, along with its components and a micrograph of the Piccolo chip is shown in Figure 4. Piccolo provides 50-ps temporal resolution and 47.8% peak photon detection probability (PDP). Versions with microlenses are available as well, to improve the light
collection efficiency. The median dark count rate (DCR) is 113 cps (per pixel at room temperature). A Xilinx FPGA is used to communicate with the PC and control the sensor. To miniaturize the system and reduce latency, the RNNs are deployed on the same FPGA.
The schematic of the FPGA design is shown in Figure 6. Four computation units are realized, each of which is in charge of a quarter of the sensor (32 by 8 pixels). The timestamps, sent to the FPGA in parallel, are serialized and distributed to the four computation units based on their SPAD IDs. Each computation unit is composed of one GRU core, one two-layer fully connected neural network (FCNN) core, and one BRAM. The computation speed is mainly limited by the latency of the GRU core, which is 1.05 \(\mu\)s when employing a 160 MHz clock. The photons that arrive while the computation units are busy are simply discarded. The four computation units together are capable of processing up to 4 million photons per second.
|
2310.08548 | Stronger Coreset Bounds for Kernel Density Estimators via Chaining | We apply the discrepancy method and a chaining approach to give improved
bounds on the coreset complexity of a wide class of kernel functions. Our
results give randomized polynomial time algorithms to produce coresets of size
$O\big(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log \frac{1}{\varepsilon}}\big)$
for the Gaussian and Laplacian kernels in the case that the data set is
uniformly bounded, an improvement that was not possible with previous
techniques. We also obtain coresets of size
$O\big(\frac{1}{\varepsilon}\sqrt{\log\log \frac{1}{\varepsilon}}\big)$ for the
Laplacian kernel for $d$ constant. Finally, we give the best known bounds of
$O\big(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log(2\max\{1,\alpha\})}\big)$ on the
coreset complexity of the exponential, Hellinger, and JS Kernels, where
$1/\alpha$ is the bandwidth parameter of the kernel. | Rainie Bozzai, Thomas Rothvoss | 2023-10-12T17:44:59Z | http://arxiv.org/abs/2310.08548v1 | # Stronger coreset bounds for kernel density estimators via chaining
###### Abstract.
We apply the discrepancy method and a chaining approach to give improved bounds on the coreset complexity of a wide class of kernel functions. Our results give randomized polynomial time algorithms to produce coresets of size \(O\big{(}\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}}\big{)}\) for the Gaussian and Laplacian kernels in the case that the data set is uniformly bounded, an improvement that was not possible with previous techniques. We also obtain coresets of size \(O\big{(}\frac{1}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}}\big{)}\) for the Laplacian kernel for \(d\) constant. Finally, we give the best known bounds of \(O\big{(}\frac{\sqrt{d}}{\varepsilon}\sqrt{\log(2\max\{1,\alpha\})}\big{)}\) on the coreset complexity of the exponential, Hellinger, and JS Kernels, where \(1/\alpha\) is the bandwidth parameter of the kernel.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2140004. R.B. is also partially supported by Grant No. G20221001-4371 in Aid of Research from Sigma Xi, The Scientific Research Honor Society. T.R. is supported by NSF grant 2318620 and a David & Lucile Packard Foundation Fellowship.
In particular, it is known that under certain conditions on the kernel function, \(\mathrm{KDE}_{X}\) approximates \(\rho\) at the minimax optimal rate as \(|X|\to\infty\)[26].
Although this is an elegant theoretical result, in practice it is computationally inefficient to store and make computations with an arbitrarily large number of data points \(n\). One solution to reduce the computational complexity is to use an \(\varepsilon\)_-coreset_ for a kernel density estimator.
**Definition 1.1** (KDE \(\varepsilon\)-coreset).: For fixed \(\varepsilon>0\), kernel function \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\), and data set \(X\subseteq\mathcal{D}\), an _\(\varepsilon\)-coreset_ for \(K\) is a subset \(Q\subseteq X\) so that
\[\|\mathrm{KDE}_{X}-\mathrm{KDE}_{Q}\|_{\infty}=\sup_{y\in\mathcal{D}}\Big{|}\frac{1}{|X|}\sum_{x\in X}K(x,y)-\frac{1}{|Q|}\sum_{q\in Q}K(q,y)\Big{|}\leq\varepsilon.\]
We will say that the _coreset complexity_ of a kernel function \(K\) is the minimum possible size of an \(\varepsilon\)-coreset \(Q\) for \(K\).
In general, coreset complexity bounds will depend on \(\varepsilon\) and the dimension \(d\) of the kernel domain, and they will be independent of the size of the set \(X\). These bounds are also often independent of the choice of \(X\subseteq\mathcal{D}\), although several previous results and several of our results give an explicit dependence on \(X\) that may allow improvement over existing bounds for sufficiently nice data sets (see Section 1.3). In particular, several of our bounds will depend on the radius of the set \(X\).
For more details about non-parametric methods and kernel density estimation, see for example [26].
### The Discrepancy Approach
One powerful method for proving bounds on the coreset complexity of kernel functions is the _discrepancy approach_. It has also been used in [19, 25, 21, 14] and is based on a method for computing range counting coresets [7, 21, 6, 20]. Following the notational conventions of [19], we make the following definition.
**Definition 1.2** (Kernel Discrepancy).: Given a data set \(X\subseteq\mathcal{D}\), a kernel \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\), and a coloring \(\beta\in\{\pm 1\}^{X}\), the _kernel discrepancy at a point \(y\in\mathcal{D}\)_ is defined as
\[\mathrm{disc}_{K}(X,\beta,y):=\Big{|}\sum_{x\in X}\beta(x)K(x,y)\Big{|}.\]
The _kernel discrepancy_ can then be defined as
\[\mathrm{disc}_{K}(n)=\max_{\begin{subarray}{c}X\subseteq\mathcal{D}:\ \beta \in\{\pm 1\}^{X}\\ |X|=n\end{subarray}}\min_{\beta\in\{\pm 1\}^{X}}\max_{y\in\mathcal{D}}\ \mathrm{disc}_{K}(X,\beta,y).\]
We will also use the notation \(\mathrm{disc}_{K}(X)\) to denote the kernel discrepancy with respect to a fixed data set \(X\). Bounds on kernel discrepancy can be leveraged to obtain bounds on the coreset complexity for a given kernel \(K\) via the following strategy, often called the "halving trick" [7, 6, 20]. We construct a coreset of \(X\) by iteratively removing half of the points in \(X\), and we select which half of the points
are removed by creating colorings \(\beta\in\{\pm 1\}^{X}\) minimizing the kernel discrepancy and then removing those points assigned \(-1\) (in principle, there is no reason to expect that exactly half of the points are assigned \(+1\), and half \(-1\), but there are standard techniques to overcome this challenge [17]). Indeed, supposing that we have an optimal choice of signs \(\beta\in\{\pm 1\}^{X}\) such that
\[\sup_{y\in\mathcal{D}}\Big{|}\sum_{x\in X}\beta(x)K(x,y)\Big{|}\leq f(n),\]
then we simply note that, letting \(X^{+}\) be the set of points assigned \(+1\) and \(X^{-}\) be the set of points assigned \(-1\), then under the assumption that \(|X^{-}|=|X|/2\), for any \(y\in\mathcal{D}\),
\[\begin{split}\frac{1}{|X|}\sum_{x\in X}\beta(x)K(x,y)& =\frac{1}{|X|}\sum_{x\in X^{+}}K(x,y)-\frac{1}{|X|}\sum_{x\in X^{-} }K(x,y)\\ &=\frac{1}{|X|}\sum_{x\in X}K(x,y)-\frac{1}{|X|/2}\sum_{x\in X^{- }}K(x,y).\end{split} \tag{1}\]
Taking a supremum over \(y\in\mathcal{D}\), the final line of (1) is exactly \(\operatorname{KDE}_{X}(y)-\operatorname{KDE}_{X^{-}}(y)\). Thus, iterating this procedure \(t\) times, and denoting the resulting set at iteration \(s\) by \(X_{s}\) (with \(X_{0}:=X\)), we find that
\[\|\operatorname{KDE}_{X}-\operatorname{KDE}_{X_{t}}\|_{\infty}\leq\sum_{s\in[ t]}\|\operatorname{KDE}_{X_{s-1}}-\operatorname{KDE}_{X_{s}}\|_{\infty}\leq \sum_{s\in[t]}\frac{2^{s-1}}{n}f\big{(}n/2^{s-1}\big{)}.\]
Assuming that the function \(f\) grows sufficiently slowly1, this sum will be dominated by the final term, which allows us to calculate the size of a coreset yielding error at most \(\varepsilon\). Based on this connection, our proofs will focus on bounding the quantity \(\operatorname{disc}_{K}(n)\) for different kernels \(K\) (or in some cases \(\operatorname{disc}_{K}(X)\), when we want to account for the geometry of the data set \(X\)), and then the "halving trick" can easily be used to determine the corresponding size of the coreset thus obtained.
Footnote 1: It suffices if \(f(n)\leq n^{c}\) for some fixed constant \(0<c<1\).
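In pseudocode terms, the halving procedure can be sketched as follows, assuming access to a low-discrepancy coloring oracle (whose construction is the subject of this paper) and colorings that split the points exactly in half; the random oracle below is a placeholder for demonstration only:

```python
# Iteratively halve X, keeping the half colored -1 each round (by symmetry,
# either color class works); the error per round is governed by f(n).
import numpy as np

def coreset(X, target_size, low_disc_coloring):
    while len(X) > target_size:
        beta = low_disc_coloring(X)      # beta in {-1, +1}^X
        X = X[beta == -1]
    return X

def balanced_random_coloring(X):
    """Placeholder oracle: a balanced random coloring, not low-discrepancy."""
    beta = np.ones(len(X), dtype=int)
    beta[np.random.default_rng(0).permutation(len(X))[: len(X) // 2]] = -1
    return beta

Q = coreset(np.random.default_rng(1).normal(size=(1024, 2)), 64,
            balanced_random_coloring)
```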
### Positive Definite Kernels and Reproducing Kernel Hilbert Spaces
A kernel \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\) is said to be _positive definite_ if given any selection of points \(x_{1},...,x_{m}\in\mathcal{D}\), the Gram matrix \(G\) given by \(G_{ij}=K(x_{i},x_{j})\) is positive definite. The importance of our assumption that our kernels are positive definite rests on the following famous theorem [1].
**Theorem 1.3** (Moore-Aronszajn, 1950).: _Let \(T\) be a set and \(K\) a positive definite function on \(T\times T\). Then there is a map \(\phi:T\to\mathcal{H}_{K}\) to a unique corresponding Hilbert space \(\mathcal{H}_{K}\) so that for any \(s,t\in T\),_
\[K(s,t)=\langle\phi(s),\phi(t)\rangle_{\mathcal{H}_{K}}.\]
The Hilbert space \(\mathcal{H}_{K}\) is called the _reproducing kernel Hilbert space_ (RKHS) associated to \(K\). Using this theorem, we can make the following definition:
**Definition 1.4**.: The _kernel distance_ associated to a kernel function \(K\) is the function
\[D_{K}(x,y):=\|\phi(x)-\phi(y)\|_{\mathcal{H}_{K}}.\]
Here \(\|h\|_{\mathcal{H}_{K}}=\sqrt{\langle h,h\rangle_{\mathcal{H}_{K}}}\) is the Euclidean norm in \(\mathcal{H}_{K}\). In general, it is only true that \(D_{K}\) is a pseudometric on the domain of \(K\); however, this is all we will need for our proofs. Almost all of the kernels commonly used in machine learning applications are positive definite.
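Concretely, the kernel distance never requires the feature map explicitly: expanding the RKHS norm gives \(D_{K}(x,y)^{2}=K(x,x)+K(y,y)-2K(x,y)\). The sketch below illustrates this for a Gaussian-type kernel, whose exact form here is an assumption for the example:

```python
# Kernel distance from kernel evaluations alone; max(..., 0) guards against
# tiny negative values from floating-point round-off.
import numpy as np

def gaussian_kernel(x, y, alpha=1.0):
    return np.exp(-alpha * np.sum((x - y) ** 2))

def kernel_distance(x, y, K=gaussian_kernel):
    return np.sqrt(max(K(x, x) + K(y, y) - 2 * K(x, y), 0.0))

x, y = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(kernel_distance(x, y))   # sqrt(2 - 2 * exp(-1)) for this kernel
```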
### Summary of Known Results
The problem of coreset complexity for a wide variety of kernel functions has been studied for several decades. Early approaches focused on random samples [23, 22, 12], but more recently analytic approaches and new algorithms have been discovered, leading to much stronger bounds. We provide a brief summary of these results, which apply a wide range of techniques and apply to various classes of kernels.
Joshi et al. [13] used a sampling technique to prove a bound of \(O((1/\varepsilon^{2})(d+\log(1/\delta)))\) on the coreset complexity of any centrally symmetric, non-increasing kernel, where \(\delta\) is the probability of failure. Fasy et al. [10] used sampling to prove a different bound of \(O((d/\varepsilon^{2})\log(d\Delta/\varepsilon\delta))\), where \(\Delta:=\alpha\sup_{x,x^{\prime}\in X}\|x-x^{\prime}\|_{2}\) is the diameter of the data set. This result may improve upon that of Joshi et al. in the case that \(K(x,x)>1\), and it also applies to the broader class of Lipschitz kernels.
The following collection of results applies to the collection of _characteristic kernels_, a subset of positive definite kernels that satisfy the additional property that the mapping \(\phi_{K}\) into the associated RKHS is injective, which implies that the induced kernel distance \(D_{K}\) is a metric. This class contains many, though not all, positive definite kernels, with a notable exception being the exponential kernel (see Theorem 1.11). Again using random sampling, Lopez-Paz et al. [16, 18] gave a simpler bound of \(O((1/\varepsilon^{2})\log(1/\delta))\).
Improved results were proved using an iterative technique called _kernel herding_ introduced by Chen et al. [8] to solve a closely related problem called _kernel mean approximation_, which is shown to upper bound kernel density approximation in the case of reproducing kernels [8, 24]. Chen et al. proved a bound of \(O(1/(\varepsilon r_{X}))\), where \(r_{X}\) is the largest radius of a ball centered at \(\frac{1}{|X|}\sum_{x\in X}\phi_{K}(x)\in\mathcal{H}_{K}\) that is completely contained in \(\operatorname{conv}\{\phi(x):\ x\in X\}\). This paper claimed that \(r_{X}\) is always a constant greater than \(0\); however, Bach et al. [3] gave a new interpretation of the algorithm and argued that although \(r_{X}\) is arbitrarily small for continuous distributions, the constant \(1/r_{X}\) is finite when \(X\) is finite. Their interpretation provided a bound of \(O((1/r_{X}^{2})\log(1/\varepsilon))\). Bach et al. also provided a bound of \(O(1/\varepsilon^{2})\)
in the case of _weighted coreset complexity_, where points in \(X\) can be assigned a non-negative weight, and this result was later improved to the setting of unweighted coresets [15].
Harvey and Samadi [11] applied the kernel herding technique to an even more general problem called _general mean approximation in \(\mathbb{R}^{d}\)_ to provide bounds on the coreset complexity of order \(O((1/\varepsilon)\sqrt{n}\log^{2.5}(n))\), where \(n=|X|\). The dependence on \(n\) is introduced by the worst case outcome of manipulating \(r_{X}\) using affine scaling. Lacoste-Julien et al. [15] showed that actually one can take \(n=O(1/\varepsilon^{2})\), which improves the bound to \(O((1/\varepsilon^{2})\log^{2.5}(1/\varepsilon))\).
The next collection of bounds applies to _Lipschitz kernels_, that is, kernels where we can bound the _Lipschitz factor_ \(C_{K}:=\max_{x,y,z\in\mathcal{D}}\frac{|K(z,x)-K(z,y)|}{\|x-y\|_{2}}\) of \(K\). In the case that \(C_{K}\) is a small constant, as is the case for most kernels of interest, it is easy to see that taking a \(2\varepsilon/(C_{K}\sqrt{d})\)-net \(G_{\varepsilon}\) over the domain of \(K\) and mapping each point \(x\in X\) to the closest point in \(G_{\varepsilon}\) (with multiplicity) to obtain \(X_{G_{\varepsilon}}\), we find that \(\sup_{y\in\mathcal{D}}|\mathrm{KDE}_{X}(y)-\mathrm{KDE}_{X_{G_{\varepsilon}}}(y)|\leq\varepsilon\). Cortes and Scott's work on the sparse kernel mean problem [9], combined with the discretization argument above, implies a bound of \(O((\Delta/\varepsilon)^{d})\) on the coreset complexity, in the case that \(\Delta\) is bounded.
Also in the setting of Lipschitz kernels, but now using the discrepancy-based approach described in Section 1.1, Phillips [21] showed a bound of
\[O((\alpha/\varepsilon)^{2d/(d+2)}\log^{d/(d+2)}(\alpha/\varepsilon))\]
for kernels with a Lipschitz factor \(C_{K}=O(\alpha)\). Using a sorting argument, Phillips also showed in this paper that for \(d=1\) one can achieve a coreset of size \(O(1/\varepsilon)\), matching a tight lower bound. Note that in general the coreset complexity is always bounded below by \(\Omega(1/\varepsilon)\), as can be seen by taking \(O(1/\varepsilon)\) points that are spread far apart [21]. Phillips and Tai [19] improved these results by combining the discretization approach with the discrepancy approach to give a bound of \(O((1/\varepsilon)\sqrt{d\log(1/\varepsilon)})\) for any positive definite, Lipschitz kernel with _bounded influence_, a restriction similar to the impact radius condition that we will define. This result applies to a very wide class of kernels, including all of the kernels that we will discuss in our paper, and also the sinc kernel, for which no earlier
\begin{table}
\begin{tabular}{c|c|c} Phillips [21] & \(O((\alpha/\varepsilon)^{2d/(d+2)}\log^{d/(d+2)}(\alpha/\varepsilon))\) & Lipschitz, PD \\ Phillips and Tai [19] & \(\sqrt{d\log n}\) & Lipschitz, PD \\ Tai [25] & \(2^{O(d)}\) & Gaussian \\ New & \(2^{O(d)}\sqrt{\log\log n}\) & Laplacian \\ New & \(O(\sqrt{d\log(\mathrm{radius}X+\log n)})\) & Gaussian, Laplacian \\ New & \(O(\sqrt{d\log(2\max\{\alpha,1\})})\) & Exponential, JS, Hellinger \\ \end{tabular}
\end{table}
Table 1. Results from the Discrepancy Approach
non-trivial bounds were known. They also provide a lower bound of \(\Omega(\sqrt{d}/\varepsilon)\) for \(d\in[2,1/\varepsilon^{2})\), and a tight lower bound of \(O(1/\varepsilon^{2})\) in the case that \(d\geq 1/\varepsilon^{2}\), for all shift and rotation invariant kernels that are somewhere-steep. Tai [25] later proved that for \(d\) constant, the Gaussian kernel has coreset complexity \(O(1/\varepsilon)\), matching the optimal lower bound in terms of \(\varepsilon\). This result supresses an exponential dependence on \(d\), but is still interesting for small-dimensional data sets.
Finally, a related but not directly comparable result due to Karnin and Liberty [14] applies to kernels that are analytic functions of the dot product and satisfy the very strong condition that \(\sup_{x,x^{\prime}\in\mathcal{D}}\|x-x^{\prime}\|_{2}\leq R_{K}\), where \(R_{K}\) is a fixed constant determined by the kernel \(K\). In this setting, they show a bound of \(O(\sqrt{d}/\varepsilon)\) on the coreset complexity. However, as we will see, for kernels such as the Gaussian or Laplacian kernel, one can only assume that \(\sup_{x,x^{\prime}\in\mathcal{D}}\|x-x^{\prime}\|_{2}\leq n\log n\), where \(n=|X|\), and thus this result does not apply. Their result can however be interpreted to give a bound of \(O(\alpha\exp(\alpha)\sqrt{d}/\varepsilon)\) on the coreset complexity of the exponential kernel, as in this case the domain of the kernel function does have constant diameter.
### Our Contributions
Our results require the following two definitions.
**Definition 1.5** (Impact Radius).: Given a kernel \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\) for \(\mathcal{D}\subseteq\mathbb{R}^{d}\), we define the _impact radius_ of \(K\) as
\[r_{K}(n):=\inf\big{\{}r\geq 0:\ \|x-y\|_{2}\geq r\ \implies\ |K(x,y)|\leq 1/n\ \ \forall x,y\in\mathcal{D}\big{\}}.\]
**Definition 1.6** (Query Space).: Given a kernel \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\) and a data set \(X\subseteq\mathcal{D}\), we define the _query space of \(K\) with respect to \(X\)_ as
\[Q=\Big{\{}y\in\mathcal{D}:\ \exists x\in X\ \text{s.t.}\ \|x-y\|_{2}\leq r_{K}(|X |)\Big{\}}=\mathcal{D}\cap\Big{(}\bigcup_{x\in X}(x+r_{K}(|X|)B_{2}^{d})\Big{)}.\]
Note that in general both the impact radius and the query space may depend explicitly on the bandwidth parameter \(\alpha\), but this dependence often cancels out, making the bounds obtained independent of \(\alpha\). One notable exception where this cancellation does not occur is when the domain \(\mathcal{D}\) of the kernel is compact, for example for the exponential, Hellinger, and Jensen-Shannon kernels; we return to this idea in Sections 4 and 5. We also note that in the event that the query space \(Q\) given by a particular data set is disconnected, it suffices to consider the largest connected component of \(Q\). To see this, note that by the definition of the impact radius, query points and data points from distinct connected components do not interact; thus we can apply the same bound to each connected component independently. For the proofs of our results, we will assume that \(Q\) is connected.
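As a concrete illustration of Definitions 1.5 and 1.6, for the Gaussian and Laplacian kernels of Section 4 the impact radius has a closed form, and membership in the query space reduces to a nearest-neighbor distance test; the following is a minimal sketch (the two kernels are our illustrative choices):

```python
import numpy as np

def impact_radius(kernel, n, alpha):
    # Gaussian:  exp(-alpha^2 r^2) <= 1/n  <=>  r >= sqrt(log n)/alpha
    # Laplacian: exp(-alpha r)     <= 1/n  <=>  r >= log(n)/alpha
    return np.sqrt(np.log(n)) / alpha if kernel == "gaussian" else np.log(n) / alpha

def in_query_space(y, X, r_K):
    """y lies in Q iff some data point of X is within distance r_K of y."""
    return np.min(np.linalg.norm(X - y, axis=1)) <= r_K
```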
Our two main results will be in terms of the size of the query space and the impact radius of the kernel, respectively. One of the key challenges in earlier applications of the discrepancy approach was that the domain \(\mathcal{D}\) is often \(\mathbb{R}^{d}\), the sphere \(S^{d}\), the standard \((d-1)\)-dimensional simplex \(\Delta^{d}\), or some other potentially unbounded and/or uncountably infinite space. These domains make bounding the discrepancy challenging, as probabilistic techniques are often used, and the size of these spaces makes the union bound ineffective. The following lemma shows how the query space and impact radius can simplify this problem, at least ensuring that the domain is bounded for kernels with sufficiently nice decay properties.
**Lemma 1.7**.: _Let \(K:\mathcal{D}\times\mathcal{D}\to\mathbb{R}\) be a kernel, \(X\subseteq\mathcal{D}\) a data set, and \(Q\) the query space associated to \(K\) and \(X\). Then_
\[\mathrm{disc}_{K}(n)\leq\mathrm{disc}_{K|_{Q}}(n)+O(1).\]
Proof.: To prove the lemma, it suffices to show that for any \(y\in\mathcal{D}\setminus Q\), \(\mathrm{disc}_{K}(X,\beta,y)=O(1)\) for all \(\beta\in\{\pm 1\}^{X}\). But this follows immediately, as we know by the definition of \(Q\) that for any \(y\in\mathcal{D}\setminus Q\) and \(x\in X\), \(|K(x,y)|\leq 1/n\). Thus
\[\mathrm{disc}_{K}(X,\beta,y)=\Big{|}\sum_{x\in X}\beta(x)K(x,y)\Big{|}\leq\sum _{x\in X}|K(x,y)|\leq 1.\qed\]
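The quantity appearing in Lemma 1.7 is straightforward to evaluate empirically; the following minimal sketch computes \(\max_{y}|\sum_{x\in X}\beta(x)K(x,y)|\) over a finite sample of query points, which lower bounds the true supremum over \(\mathcal{D}\) (the finite sampling is our simplification):

```python
def kernel_discrepancy(X, beta, queries, kernel):
    """Max over sampled queries of |sum_x beta(x) K(x, y)|; a lower bound
    on the supremum over the full domain."""
    return max(abs(sum(b * kernel(x, y) for x, b in zip(X, beta)))
               for y in queries)
```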
Earlier applications of the discrepancy approach [19, 25] needed an even stronger version of Lemma 1.7 for Lipschitz kernels, one that also ensured that the query space could be made finite up to an \(O(1)\) error using the Lipschitz constant of the kernel and a sufficiently small \(\varepsilon\)-net. That stronger lemma introduces an extra factor of \(\sqrt{\log n}\) into many results, a factor we will avoid by using _chaining_ [27], a multi-step construction of \(\varepsilon\)-nets. Note that in general, for an arbitrary data set, the query space can have volume up to \(n\mathrm{Vol}_{d}(r_{K}B_{2}^{d})\), as it is possible that the data points are well spread out and thus the impact radii of the data points do not intersect. This observation is why [14] cannot be applied to the general Gaussian kernel, for example.
Our first result gives a discrepancy bound that depends explicitly on the choice of the data set \(X\subseteq\mathbb{R}^{d}\).
**Theorem 1.8**.: _Let \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) be a kernel with bandwidth parameter \(1/\alpha\) and \(X\subseteq\mathbb{R}^{d}\) be a dataset. Denote the query space of \((K,X)\) by \(Q\), and define \(R=R(X)\) so that \(Q\subseteq RB_{2}^{d}\). If \(K\) satisfies the following properties:_
1. \(K\) _is positive definite;_
2. \(K\) _satisfies_ \(K(x,x)=1\) _for all_ \(x\in Q\)_;_
3. \(K(x,y)=\kappa(\alpha\|x-y\|_{2})\)_, where_ \(\kappa:\mathbb{R}_{\geq 0}\to[-1,1]\) _is strictly decreasing and continuous; and_
4. _The following bound holds, where_ \([0,b_{K})\) _is the domain of the integrand:_ \[\int_{0}^{b_{K}}\sqrt{-\ln\left[\kappa^{-1}\left(\frac{2-r^{2}}{2}\right) \right]}\mathrm{d}r=O(1);\]
_then_
\[\mathrm{disc}_{K}(X)=O(\sqrt{d\log(2\max\{R\alpha,1\})}).\]
_Moreover, there is a randomized algorithm that, on input \(X\) and \(K\), finds a corresponding coloring \(\beta\in\{\pm 1\}^{X}\) in polynomial time._
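As a sanity check, condition (iv) can be verified numerically for the Gaussian kernel \(\kappa_{G}(z)=e^{-z^{2}}\) treated in Section 4, for which \(\kappa_{G}^{-1}(u)=\sqrt{\ln(1/u)}\) and the integrand vanishes at \(b_{K}=\sqrt{2-2/e}\); the following sketch is an illustration only and plays no role in the proof:

```python
import numpy as np
from scipy.integrate import quad

def integrand(r):
    # sqrt(-ln[kappa_G^{-1}((2 - r^2)/2)]) with kappa_G^{-1}(u) = sqrt(ln(1/u))
    u = (2.0 - r**2) / 2.0
    return np.sqrt(-0.5 * np.log(np.log(1.0 / u)))

b_K = np.sqrt(2.0 - 2.0 / np.e)   # endpoint where the integrand reaches zero
value, _ = quad(integrand, 0.0, b_K)
print(value)  # finite, so condition (iv) holds for the Gaussian kernel
```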
We will see in Section 4 that Theorem 1.8 can give strong improvements on the current best known bounds on the coreset complexity for the Gaussian and Laplacian kernels in the case that the data set \(X\) has bounded diameter. In particular, we have the following corollary.
**Corollary 1.1**.: _Let \(K\) be a kernel satisfying the conditions of Theorem 1.8, and let \(X\subseteq\mathbb{R}^{d}\) be any data set. Then_
\[\operatorname{disc}_{K}(X,n)\leq O(\sqrt{d\log(\operatorname{radius}(X)+\kappa ^{-1}(1/n))}).\]
Note that because of the discretization necessary to prove Phillips and Tai's bounds [19], even in the case where the data set is uniformly bounded, the \(\sqrt{\log n}\) factor in their discrepancy bound cannot be removed.
In the case where we do not want to account for the geometry of the data set \(X\), we present the following stronger bound in the case that the dimension \(d\) is taken to be constant.
**Theorem 1.9**.: _Let \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) be a kernel with bandwidth parameter \(1/\alpha\) and \(X\subseteq\mathbb{R}^{d}\) a data set. If \(K\) satisfies the following properties:_
1. \(K\) _is positive definite;_
2. \(K\) _satisfies_ \(K(x,x)=1\) _for all_ \(x\in Q\)_, where_ \(Q\) _is the query space of_ \((K,X)\)_;_
3. \(K(x,y)=\kappa(\alpha\|x-y\|_{2})\)_, where_ \(\kappa:\mathbb{R}_{\geq 0}\to[-1,1]\) _is strictly decreasing and continuous; and_
4. _The following bound holds:_ \[\int_{0}^{b_{K}}\sqrt{-\ln\left[\kappa^{-1}\left(\frac{2-r^{2}}{2}\right) \right]}\mathrm{d}r=O(1);\]
_then_
\[\operatorname{disc}_{K}(n)=2^{O(d)}\sqrt{\log(\kappa^{-1}(1/n))}.\]
_Moreover, there is a randomized algorithm that, on input \(X\) and \(K\), finds a corresponding coloring \(\beta\in\{\pm 1\}^{X}\) in polynomial time._
We will see in Section 4 that Theorem 1.9 yields significantly stronger bounds than [19] for several important kernels in machine learning, assuming the data is sufficiently low dimensional.
Our last three results demonstrate that in cases where the kernel is not a function of the Euclidean distance, a similar technique can still provide strong bounds. In particular, we give \(O(\sqrt{d})\) bounds for the Jensen-Shannon, exponential, and Hellinger kernels, with a logarithmic dependence on the bandwidth parameter \(\alpha\). We define \(\Delta_{d}:=\{x\in\mathbb{R}_{\geq 0}^{d}\mid\sum_{i\in[d]}x_{i}=1\}\) as the _standard \((d-1)\)-dimensional simplex_ in \(\mathbb{R}^{d}\).
**Theorem 1.10**.: _The Jensen-Shannon (JS) kernel_
\[K_{JS}:\Delta_{d}\times\Delta_{d}\to\mathbb{R},\quad K_{JS}(x,y)=\exp\left(- \alpha\big{(}H\big{(}\tfrac{x+y}{2}\big{)}-\tfrac{H(x)+H(y)}{2}\big{)}\right),\]
_where \(H(x)=-\sum_{i\in[d]}x_{i}\log x_{i}\) is the Shannon entropy function, has discrepancy satisfying_
\[\operatorname{disc}_{K_{JS}}(n)=O(\sqrt{d\log(2\max\{\alpha,1\})}).\]
_Further, the JS Kernel has coreset complexity_
\[O(\sqrt{d\log(2\max\{\alpha,1\})}/\varepsilon),\]
_and such a coreset can be constructed in randomized polynomial time._
This result greatly improves on the current best bound of \(O(\sqrt{d\log n})\) on the discrepancy of the JS kernel, in particular by dropping all dependence on the size \(n\) of the data set. We note that as the JS kernel is not shift or rotation invariant, it is not known to satisfy any lower bound other than \(\Omega(1/\varepsilon)\), which holds for all kernels.
We also have the same bound for the exponential kernel.
**Theorem 1.11**.: _The exponential kernel_
\[K_{e}:S^{d}\times S^{d}\to\mathbb{R},\quad K_{e}(x,y)=\exp(-\alpha(1-\langle x,y\rangle)),\]
_has discrepancy satisfying_
\[\operatorname{disc}_{K_{e}}(n)=O(\sqrt{d\log(2\max\{\alpha,1\})}).\]
_Further, the exponential kernel has coreset complexity_
\[O(\sqrt{d\log(2\max\{\alpha,1\})}/\varepsilon),\]
_and such a coreset can be constructed in randomized polynomial time._
As we mentioned in Section 1.3, Karnin and Liberty's proof technique in [14] gives a bound of \(O(\alpha\exp(\alpha)\sqrt{d})\) on this quantity, hence our analysis gives a doubly-exponential improvement in terms of the bandwidth parameter \(\alpha\).
As a corollary of Theorem 1.11 we obtain the same bound for the Hellinger kernel, again giving the best known result for this kernel.
**Theorem 1.12**.: _The Hellinger kernel_
\[K_{H}:\Delta_{d}\times\Delta_{d}\to\mathbb{R},\quad K_{H}(x,y)=\exp\left(-\alpha\sum_{i=1}^{d}\left(\sqrt{x_{i}}-\sqrt{y_{i}}\right)^{2}\right),\]
_has discrepancy satisfying_
\[\operatorname{disc}_{K_{H}}(n)=O(\sqrt{d\log(2\max\{\alpha,1\})})\]
_Further, the Hellinger kernel has coreset complexity_
\[O(\sqrt{d\log(2\max\{\alpha,1\})}/\varepsilon),\]
_and such a coreset can be constructed in randomized polynomial time._
As for the JS kernel, no lower bound better than \(\Omega(1/\varepsilon)\) is known on the coreset complexity of the exponential and Hellinger kernels.
## 2. Preliminaries
In this section we introduce several results from discrepancy theory that will be essential in our proofs.
### Banaszczyk's Theorem and the Gram-Schmidt Walk
As we always work with positive definite kernels, by Theorem 1.3 each kernel \(K\) has an associated RKHS \(\mathcal{H}_{K}\) and a map \(\phi_{K}:\mathcal{D}\to\mathcal{H}_{K}\). Having mapped the data set \(X\) into \(\mathcal{H}_{K}\) via \(\phi_{K}\), a key step of our analysis will be to apply Banaszczyk's theorem [4], in its algorithmic form [5], to the vectors \(\{\phi_{K}(x):\ x\in X\}\), which we observed above have norm \(1\) for any kernel satisfying \(K(x,x)=1\) for \(x\in\mathcal{D}\). The algorithmic variant of Banaszczyk's Theorem can be stated as follows.
**Theorem 2.1** (Gram-Schmidt Walk [5]).: _There is a polynomial-time randomized algorithm that takes as input vectors \(v_{1},\ldots,v_{n}\in\mathbb{R}^{m}\) of \(\ell_{2}\) norm at most \(1\) and outputs random signs \(\beta\in\{\pm 1\}^{n}\) such that the (mean zero) random variable \(\sum_{i=1}^{n}\beta_{i}v_{i}\) is \(O(1)\)-subgaussian._
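For intuition, the following is a compact sketch of the Gram-Schmidt walk; the pivot rule and the least-squares realization of the walk direction follow one common presentation of [5], and the code should be read as an illustration rather than a faithful reimplementation. To use it with a kernel \(K\) satisfying \(K(x,x)=1\), one may apply it to the rows of a Cholesky factor \(L\) of the Gram matrix \((K(x,x'))_{x,x'\in X}\), since these rows realize the feature vectors \(\phi_{K}(x)\) up to isometry and have unit norm.

```python
import numpy as np

def gram_schmidt_walk(V, rng=np.random.default_rng(0)):
    """Sketch of the Gram-Schmidt walk [5]: V has shape (n, m), one vector of
    l2 norm <= 1 per row; returns random signs in {-1, +1}^n."""
    n = V.shape[0]
    x = np.zeros(n)                          # fractional coloring in [-1, 1]^n
    A = np.where(np.abs(x) < 1 - 1e-9)[0]    # "alive" (unfixed) coordinates
    pivot = A[-1] if len(A) else None
    while len(A) > 0:
        if pivot not in A:                   # previous pivot got fixed
            pivot = A[-1]
        rest = A[A != pivot]
        u = np.zeros(n)
        u[pivot] = 1.0
        if len(rest) > 0:
            # make the update direction orthogonal to span{v_i : i in A \ {pivot}}
            w, *_ = np.linalg.lstsq(V[rest].T, -V[pivot], rcond=None)
            u[rest] = w
        lo, hi = -np.inf, np.inf             # maximal steps keeping x in [-1, 1]^n
        for i in np.nonzero(u)[0]:
            a, b = (-1 - x[i]) / u[i], (1 - x[i]) / u[i]
            lo, hi = max(lo, min(a, b)), min(hi, max(a, b))
        delta = hi if rng.random() < -lo / (hi - lo) else lo   # mean-zero step
        x = x + delta * u
        A = np.where(np.abs(x) < 1 - 1e-9)[0]
    return np.sign(x).astype(int)
```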
We give one of several equivalent definitions of subgaussian random variables, as well as the definition of the _subgaussian norm_, which we will need in Section 2.2 (see [27] for more details).
**Definition 2.2**.: A random variable \(X\) is \(K\)_-subgaussian_ if the tails of \(X\) satisfy
\[\mathbb{P}[|X|\geq t]\leq 2\exp(-t^{2}/K^{2})\ \ \forall t\geq 0.\]
The _subgaussian norm_ of \(X\) is then
\[\|X\|_{\psi_{2}}:=\inf\{s>0:\ \mathbb{P}[|X|\geq t]\leq 2\exp(-t^{2}/s^{2})\ \ \forall t\geq 0\}.\]
### Chaining and Dudley's Theorem
As we mentioned in Section 1.4, one of the key factors contributing to the \(\sqrt{\log n}\) term in [19] is the necessity of introducing a \(1/n\)-net on the query space. This net consists of roughly \(n^{d}\) points, which is precisely what yields the factor \(\sqrt{d\log n}\), as the final bound is proportional to \(\sqrt{\log|Q|}\). To avoid this extra factor of \(\sqrt{\log n}\), we replace the union-bound-type argument of [19] with a _chaining_ approach. The term chaining refers to the fact that rather than applying a single \(\varepsilon\)-net to the entire space, one constructs a sequence of finer and finer \(\varepsilon\)-nets. For more details see [27].
**Definition 2.3**.: Given a (pseudo)metric space \((T,d)\) and \(r>0\):
* \(B_{d}(s,r)=\{t\in T:\ d(s,t)\leq r\}\);
* \(\operatorname{diam}(d):=\sup_{t,s\in T}d(t,s)\);
* \(\mathcal{N}(T,d,r)\) is the size of a minimal \(r\)-cover of \(T\) w.r.t. \(d\), i.e. \[\mathcal{N}(T,d,r)=\min\Big{\{}|S|:\ S\subseteq T,\ T\subseteq\bigcup_{s\in S }B_{d}(s,r)\Big{\}}.\]
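For finite point sets with the Euclidean metric, the covering number of Definition 2.3 can be upper bounded by the standard greedy net construction; a minimal sketch (the greedy centers form a valid \(r\)-cover, so their count upper bounds \(\mathcal{N}\)):

```python
import numpy as np

def greedy_cover_size(points, r):
    """Size of a greedily built r-net of a finite set given as an (n, d)
    array: an upper bound on N(points, ||.||_2, r)."""
    centers = []
    remaining = points
    while len(remaining) > 0:
        c = remaining[0]
        centers.append(c)
        remaining = remaining[np.linalg.norm(remaining - c, axis=1) > r]
    return len(centers)
```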
**Theorem 2.4** (Dudley's Integral Inequality).: _Let \((X_{t})_{t\in T}\) be a random process on a pseudometric space \((T,d)\) satisfying_
\[\|X_{t}-X_{s}\|_{\psi_{2}}\leq d(t,s)\ \ \forall t,s\in T.\]
_Then for any \(t_{0}\in T\),_
\[\mathbb{E}\sup_{t\in T}|X_{t}-X_{t_{0}}|\lesssim\int_{0}^{\mathrm{diam}(d)} \sqrt{\log\mathcal{N}(T,d,r)}\ \mathrm{d}r.\]
For our application we need to control the absolute value, which can be done as follows:
**Theorem 2.5** (Dudley's Integral Inequality II).: _Let \((X_{t})_{t\in T}\) be a random process on a pseudometric space \((T,d)\) satisfying_
\[\|X_{t}-X_{s}\|_{\psi_{2}}\leq d(t,s)\ \ \forall t,s\in T.\]
_Then for any \(t_{0}\in T\),_
\[\mathbb{E}\sup_{t\in T}|X_{t}|\lesssim\int_{0}^{\mathrm{diam}(d)}\sqrt{\log \mathcal{N}(T,d,r)}\ \mathrm{d}r+\|X_{t_{0}}\|_{\psi_{2}}.\]
Proof.: By the triangle inequality \(\mathbb{E}\sup_{t\in T}|X_{t}|\leq\mathbb{E}\sup_{t\in T}|X_{t}-X_{t_{0}}|+ \mathbb{E}[|X_{t_{0}}|]\). The claim then follows from Theorem 2.4 and the fact that \(\mathbb{E}[|X_{t_{0}}|]\lesssim\|X_{t_{0}}\|_{\psi_{2}}\).
There is also a concentration version of this inequality.
**Theorem 2.6** (Concentration Version of Dudley's Inequality).: _Let \((X_{t})_{t\in T}\) be a random process on a pseudometric space \((T,d)\) satisfying_
\[\|X_{t}-X_{s}\|_{\psi_{2}}\leq d(t,s)\ \forall t,s\in T.\]
_Then, for every \(u\geq 0\), the event_
\[\sup_{t,s\in T}|X_{t}-X_{s}|\leq C\left[\int_{0}^{\mathrm{diam}(T)}\sqrt{\log \mathcal{N}(T,d,r)}\ \mathrm{d}r\ +\ u\cdot\mathrm{diam}(T)\right]\]
_holds with probability at least \(1-2\exp(-u^{2})\), where \(C>0\) is a universal constant._
We will see that condition (iv) in Theorems 1.8 and 1.9 is exactly the quantity that will appear in the integral of interest once we make a comparison between the pseudometric \(D_{K}\) and the Euclidean distance. For this comparison we will also need the following facts about covering numbers [2].
**Lemma 2.7**.: _Let \(N(A,B)\) denote the number of copies of body \(B\) needed to cover \(A\). For any convex set \(K\subseteq\mathbb{R}^{d}\) and \(r>0\), one has_
\[N(K,rB_{2}^{d})\leq 2^{d}\frac{\mathrm{Vol}_{d}(K+\frac{r}{2}B_{2}^{d})}{ \mathrm{Vol}_{d}(rB_{2}^{d})}.\]
**Lemma 2.8**.: _For any symmetric convex body \(P\subseteq\mathbb{R}^{d}\) and \(r>0\), one has_
\[N(P,rP)\leq\Big{(}1+\frac{2}{r}\Big{)}^{d}.\]
## 3. Proofs of Theorems 1.8 and 1.9
We begin with the proof of Theorem 1.8.
Proof of Theorem 1.8.: We may assume that \(R\alpha\geq 1\), otherwise replace \(R\) by \(\frac{1}{\alpha}\). Let \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) and \(X\subseteq\mathbb{R}^{d}\) satisfy conditions (i)-(iv) outlined in Theorem 1.8. As \(K\) is positive definite, there exists an RKHS \(\mathcal{H}_{K}\) and a map \(\phi:\mathbb{R}^{d}\to\mathcal{H}_{K}\) so that
\[K(x,y)=\langle\phi(x),\phi(y)\rangle_{\mathcal{H}_{K}}\ \ \forall x,y\in \mathbb{R}^{d}.\]
We apply the Gram-Schmidt walk from Theorem 2.1 to the vectors \(\phi(x)\) for \(x\in X\), noting that by condition (ii), \(\|\phi(x)\|_{\mathcal{H}_{K}}=1\) for each \(x\in X\). The algorithm yields a distribution \(\mathcal{P}\) over \(\{\pm 1\}^{X}\) so that the random variable \(\Sigma:=\sum_{x\in X}\beta(x)\phi(x)\), with \(\beta\sim\mathcal{P}\), is \(O(1)\)-subgaussian. In particular, as we know that \(\|\phi(y)\|_{\mathcal{H}_{K}}=1\) for any \(y\in Q\), the (mean zero) random variable
\[\Sigma_{y}:=\big{\langle}\Sigma,\phi(y)\big{\rangle}=\sum_{x\in X}\beta(x)K(x, y)=\operatorname{disc}_{K}(X,\beta,y)\]
is \(O(1)\)-subgaussian. We will apply Theorem 2.4 to the mean zero process \((\Sigma_{y})_{y\in Q}\) with the pseudometric \(D_{K}(x,y)=\|\phi(x)-\phi(y)\|_{\mathcal{H}_{K}}\), the kernel distance defined in Section 1.2. To see why \(D_{K}\) satisfies the condition of Theorem 2.4, note that for any \(y,q\in Q\),
\[\operatorname{Var}[\Sigma_{y}-\Sigma_{q}]^{1/2}=\mathbb{E}[\big{\langle}\Sigma,\phi(y)-\phi(q)\big{\rangle}^{2}]^{1/2}\lesssim\|\phi(y)-\phi(q)\|_{ \mathcal{H}_{K}}=D_{K}(y,q).\]
Thus by Theorem 2.5,
\[\mathbb{E}\ \operatorname{disc}_{K|_{Q}}(X)=\mathbb{E}\sup_{y\in Q}|\Sigma_{y}| \lesssim\int_{0}^{\operatorname{diam}(D_{K})}\sqrt{\log\mathcal{N}(Q,D_{K},r )}\ \mathrm{d}r+\|\Sigma_{y_{0}}\|_{\psi_{2}} \tag{2}\]
for any fixed \(y_{0}\in Q\). Here \(\|\Sigma_{y_{0}}\|_{\psi_{2}}\leq O(1)\), and so we may ignore this lower order term. To estimate \(\mathcal{N}(Q,D_{K},r)\), we use conditions (ii) and (iii) to note that
\[D_{K}(q,y)=\sqrt{2-2\kappa(\alpha\|q-y\|_{2})}\leq r\quad\Longleftrightarrow \quad\|q-y\|_{2}\leq\frac{1}{\alpha}\kappa^{-1}\left(\frac{2-r^{2}}{2}\right),\]
where we use that \(\kappa\) is strictly decreasing. From this calculation we conclude that
\[\mathcal{N}(Q,D_{K},r)=\mathcal{N}\left(Q,\|\cdot\|_{2},\frac{1}{\alpha}\kappa ^{-1}\left(\frac{2-r^{2}}{2}\right)\right).\]
Taking \(c:=\frac{1}{\alpha}\kappa^{-1}\left(\frac{2-r^{2}}{2}\right)\) for a moment, we can bound the quantity on the right using Lemma 2.7:
\[\mathcal{N}\left(Q,\|\cdot\|_{2},c\right) =N(Q,\ cB_{2}^{d})\] \[\leq N(RB_{2}^{d},cB_{2}^{d})\] \[\leq 2^{d}\frac{\text{Vol}_{d}((R+\frac{c}{2})B_{2}^{d})}{\text{ Vol}_{d}(cB_{2}^{d})} \tag{3}\] \[\leq\left(\frac{4R}{c}\right)^{d},\]
where we use Lemma 2.7 and \(c\leq R\) (if \(c>R\) then the covering number is trivially 1). Using this bound in (2), we find
\[\mathbb{E}\ \text{disc}_{K|_{Q}}(X) \lesssim\int_{0}^{\text{diam}(D_{K})}\sqrt{\log\left[\left(\frac {4R\alpha}{\kappa^{-1}\left(\frac{2-r^{2}}{2}\right)}\right)^{d}\right]}\ \text{d}r\] \[\lesssim\sqrt{d}\int_{0}^{\text{diam}(D_{K})}\sqrt{\log\left[ \frac{4R\alpha}{\kappa^{-1}\left(\frac{2-r^{2}}{2}\right)}\right]}\ \text{d}r \tag{4}\] \[\lesssim\sqrt{d\log(2R\alpha)}+\sqrt{d}\int_{0}^{\text{diam}(D_{ K})}\sqrt{\log\left[\frac{1}{\kappa^{-1}\left(\frac{2-r^{2}}{2}\right)} \right]}\ \text{d}r\] \[\lesssim\sqrt{d\log(2R\alpha)}+O(\sqrt{d}).\]
To justify these calculations, note that in the event that the quantity inside the radical is negative, it means that
\[4R\alpha\leq\kappa^{-1}\Big{(}\frac{2-r^{2}}{2}\Big{)},\]
and hence by (3) we know that \(\mathcal{N}(Q,D_{K},r)=2^{O(d)}\). As \(\mathrm{diam}(D_{K})\leq 2\), the set of \(r\) values for which this occurs has length at most \(2\), so this region contributes at most an additive \(O(\sqrt{d})\) to the integral. Note that as \(\kappa\) is assumed strictly decreasing and continuous, \(\kappa^{-1}\big{(}\frac{2-r^{2}}{2}\big{)}\) is an increasing, continuous function of \(r\). Thus in particular \(\kappa^{-1}\big{(}\frac{2-r^{2}}{2}\big{)}\to 0\) as \(r\to 0\), based on our assumption that \(K(x,x)=1\) for all \(x\in\mathbb{R}^{d}\). Finally, by Lemma 1.7,
\[\mathbb{E}\ \text{disc}_{K}(X)\leq\mathbb{E}\ \text{disc}_{K|_{Q}}(X)+O(1) \lesssim\sqrt{d\log(2\alpha R)}.\qed\]
Next we prove Theorem 1.9, which gives improved bounds independent of the geometry of the data set \(X\).
Proof of Theorem 1.9.: Fix a kernel \(K\) satisfying conditions (i)-(iv) with impact radius \(r_{K}\), and let \(X\subseteq\mathbb{R}^{d}\). Fix the associated query space \(Q\) as given by Definition
1.6. As we assume that \(K\) is positive definite and satisfies \(K(x,x)=1\) for all \(x\in Q\), there exists a map \(\phi:\mathbb{R}^{d}\to\mathcal{H}_{K}\), where \(\mathcal{H}_{K}\) is a RKHS such that \(K(x,y)=\langle\phi(x),\phi(y)\rangle\) for all \(x,y\in\mathbb{R}^{d}\), and \(\|\phi(x)\|_{\mathcal{H}_{K}}=1\) for all \(x\in\mathbb{R}^{d}\). We first find a maximal set of points \(q_{1},\ldots,q_{m}\in Q\) such that \(\|q_{i}-q_{j}\|_{2}\geq r_{K}\) for all distinct \(i,j\in[m]\). We then partition the space \(Q\) into disjoint cells
\[Q:=R_{1}\dot{\cup}\cdots\dot{\cup}R_{m}\]
so that
\[R_{i}\subseteq q_{i}+r_{K}B_{2}^{d} \tag{5}\]
for each \(i\in[m]\). These cells also partition the set \(X\) into smaller sets \(X_{i}:=X\cap R_{i}\). We begin with the set \(X_{1}\); because the conditions in Theorems 1.8 and 1.9 are the same, by identical arguments to the proof of Theorem 1.8 we obtain a random variable \(\Sigma^{1}:=\sum_{x\in X_{1}}\beta^{(1)}(x)\phi(x)\), where \(\beta^{(1)}\sim\{\pm 1\}^{X_{1}}\) is drawn according to the Gram-Schmidt walk. By Theorem 2.1, \(\Sigma^{1}\) is \(O(1)\)-subgaussian, so in particular, for any \(y\in Q\), as \(\|\phi(y)\|_{\mathcal{H}_{K}}=1\), the random variable
\[\Sigma^{1}(y):=\Big{\langle}\phi(y),\sum_{x\in X_{1}}\beta^{(1)}(x)\cdot\phi(x )\Big{\rangle}\]
is also \(O(1)\)-subgaussian.
For each \(j\in[m]\), we now have the collection of (mean zero) random variables \(\{\Sigma^{1}(y):\ y\in R_{j}\}\). We fix \(j\in[m]\); then by our assumption in (5), the cell \(R_{j}\) has volume at most \(\operatorname{Vol}_{d}(r_{K}B_{2}^{d})\). By a calculation analogous to that in (3), we have that for all \(r>0\),
\[\mathcal{N}(R_{j},D_{K},r)\leq\max\Big{\{}\Big{(}\frac{4\alpha r_{K}}{r} \Big{)}^{d},1\Big{\}}. \tag{6}\]
Here we note that because of our assumption that \(K(x,y)=\kappa(\alpha\|x-y\|_{2})\) and the definition of \(r_{K}(n)\), we have for \(x,y\) with \(\|x-y\|_{2}=r_{K}(n)\),
\[K(x,y)=\kappa(\alpha r_{K}(n))\leq 1/n\ \implies\ \alpha r_{K}(n)\leq\kappa^{-1}(1/n )\ \implies\ r_{K}(n)\leq\frac{1}{\alpha}\kappa^{-1}(1/n). \tag{7}\]
Thus the quantity \(\alpha r_{K}\) appearing in (6) can actually be bounded by a term independent of \(\alpha\). Note that by assumption (iii), \(\kappa^{-1}\) is a decreasing function. Thus by the same analysis as in the proof of Theorem 1.8, we have
\[\operatorname{disc}_{K|_{R_{j}}}(X_{1})\lesssim\int_{0}^{\operatorname{diam}( D_{K}|_{R_{j}})}\sqrt{\log\mathcal{N}(R_{j},D_{K},r)}\ \mathrm{d}r=O(\sqrt{d\log\kappa^{-1}(1/n)}).\]
Applying Theorem 2.6 with \(u:=C_{0}\sqrt{d}\) yields that
\[\mathbb{P}\big{\{}\sup_{y\in R_{j}}\ |\Sigma^{1}(y)|\geq C\cdot C_{0}\sqrt{d} \big{\}}\leq 2e^{-dC_{0}^{2}}, \tag{8}\]
where \(C\) is the constant from Theorem 2.6 and \(C_{0}>0\). Moreover, by standard packing arguments, this probability is in fact \(0\) for all but at most \(2^{O(d)}\) choices of \(j\in[m]\), as any \(R_{j}\) not adjacent to (or the same as) \(R_{1}\) will have distance
\(\Omega(r_{K})\) from any point \(x\in X_{1}\), hence by the definition of the impact radius \(r_{K}\) and the bound in (7), for any such \(R_{j}\) the above probability is \(0\). We apply the union bound over the \(2^{O(d)}\) cells where (8) could fail and obtain that for \(C_{0}\) large enough,
\[\begin{split}\mathbb{P}\Big{\{}\exists j\in[m]:\ \sup_{y\in R_{j}}\ |\Sigma^{1}(y)|\geq C\cdot C_{0}\sqrt{d\log\kappa^{-1}(1/n)}\Big{\}}&=\mathbb{P}\Big{\{}\sup_{y\in Q}|\Sigma^{1}(y)|\geq C\cdot C_{0}\sqrt{d\log\kappa^{-1}(1/n)}\Big{\}}\\ &\leq 2^{O(d)}\cdot 2(\kappa^{-1}(1/n))^{-C_{0}^{2}d}<1.\end{split} \tag{9}\]
Thus with positive probability, \(\operatorname{disc}_{K}(X_{1})\leq C\cdot C_{0}\sqrt{d\log\kappa^{-1}(1/n)}\). We then fix such an outcome and repeat this construction independently for \(X_{2},\ldots,X_{m}\), at each step rerunning the Gram-Schmidt algorithm until we obtain an outcome as in (9). After repeating this construction \(m\) times, we have a choice of signs \(\beta=(\beta^{(1)},\ldots,\beta^{(m)})\in\{\pm 1\}^{X}\) so that by the triangle inequality
\[\operatorname{disc}_{K}(X)\leq\sum_{i\in[m]}\operatorname{disc}_{K}(X_{i}). \tag{10}\]
However, for our purposes this bound is too weak. Because the function \(K(x,y)=\kappa(\alpha\|x-y\|_{2})\) is symmetric in \(x\) and \(y\), by the same packing argument made above, we know that for each fixed point \(y\in Q\), only \(2^{O(d)}\) terms in the sum in (10) can contribute a discrepancy larger than \(O(1)\). Thus we conclude that
\[\operatorname{disc}_{K}(X)\leq 2^{O(d)}\sqrt{d\log\kappa^{-1}(1/n)}.\]
As \(X\subseteq\mathbb{R}^{d}\) was arbitrary, the bound follows.
## 4. Applications to Specific Kernels of Interest
In this section we highlight the applications of Theorems 1.8 and 1.9 to several kernels of interest in machine learning.
### The Gaussian and Laplacian Kernels
The Gaussian and Laplacian kernels are two of the most important and most commonly used kernels in machine learning [22]. The Gaussian kernel is defined by
\[K_{G}:\ \mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\quad\ K_{G}(x,y)=\exp(- \alpha^{2}\|x-y\|_{2}^{2}),\]
and the Laplacian kernel is defined by
\[K_{L}:\ \mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\quad K_{L}(x,y)=\exp(- \alpha\|x-y\|_{2}),\]
where again \(1/\alpha>0\) is the bandwidth parameter of the kernels. Currently the best known bound for both the Gaussian and the Laplacian kernels in arbitrary dimension \(d\) is \(O(\sqrt{d\log n})\), and because these bounds rely on taking \(1/n\)-nets of the query space, even if the given data set \(X\) satisfies nice properties such as boundedness, the \(\sqrt{\log n}\) term remains. However, Theorem 1.8 allows us to give significant improvements on this \(O(\sqrt{d\log n})\) bound given such a data set.
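Before stating the corollaries, we note that the quantity being bounded is easy to check empirically for any candidate coreset \(S\); a minimal sketch (the finite query sample is our simplification, and the Gaussian kernel factory is illustrative):

```python
import numpy as np

def kde(P, y, kernel):
    return np.mean([kernel(x, y) for x in P])

def coreset_error(X, S, queries, kernel):
    """Empirical sup_y |KDE_X(y) - KDE_S(y)| over a finite query sample."""
    return max(abs(kde(X, y, kernel) - kde(S, y, kernel)) for y in queries)

gaussian = lambda alpha: (lambda x, y: float(np.exp(-alpha**2 * np.sum((x - y)**2))))
```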
**Corollary 4.1**.: _For the Gaussian kernel \(K_{G}\) with any bandwidth parameter \(\alpha>0\) and any data set \(X\subseteq RB_{2}^{d}\) for any fixed constant \(R>0\), we have_
\[\operatorname{disc}_{K_{G}}(X)=O(\sqrt{d\log\log n}).\]
_In particular, one can find (in randomized polynomial time) a coreset for \(X\) of size_
\[O\left(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}}\right).\]
Proof.: We first check the conditions of Theorem 1.8 for the Gaussian kernel. It is well-known that \(K_{G}\) is positive definite, and clearly \(K_{G}(x,x)=e^{0}=1\) for any \(x\in\mathbb{R}^{d}\). Taking \(\kappa_{G}(z):=e^{-z^{2}}\) gives us \(K_{G}(x,y)=\kappa_{G}(\alpha\|x-y\|_{2})\), and \(\kappa_{G}\) is strictly decreasing on \(\mathbb{R}^{+}\); finally,
\[\sqrt{-\log\left[\kappa^{-1}\left(\tfrac{2-r^{2}}{2}\right)\right]}\lesssim \sqrt{\log\left[\frac{1}{\log\left(\tfrac{2}{2-r^{2}}\right)}\right]}\]
grows like \(O(\sqrt{\log(1/r)})\) as \(r\to 0^{+}\) and hence is integrable on its domain. Finally, we note that the query space \(Q\) is given by
\[Q:=\bigcup_{x\in X}B_{2}^{d}(x,r_{K_{G}})\subseteq O(r_{K_{G}})B_{2}^{d},\]
as we assume that \(\operatorname{radius}(X)=O(1)\). Thus Theorem 1.8 yields
\[\operatorname{disc}_{K_{G}}(X)\lesssim O(\sqrt{d\log R\alpha})=O(\sqrt{d\log \log n})\]
by the fact that \(R=O(r_{K})\) and the same observation as in the proof of Theorem 1.9 that
\[\alpha r_{K_{G}}\leq\kappa_{G}^{-1}(1/n)=\sqrt{\log n}.\]
To see the bound on the size of a minimal coreset, we will apply the "halving trick" shown in Section 1.1. The above argument shows a bound of \(f(n)=O(\sqrt{d\log\log n})\) on the kernel discrepancy; iterating the discrepancy calculation \(t\) times and removing the points colored \(-1\) at each step, we obtain the following bound after \(t\) iterations:
\[\|\mathrm{KDE}_{X}-\mathrm{KDE}_{X_{t}}\|_{\infty}\lesssim\sum_{s\in[t]}\frac {2^{s-1}}{n}\sqrt{d\log\log\tfrac{n}{2^{s-1}}}.\]
This sum is dominated by the \(t^{th}\) term, so we can repeat this calculation until the size of the remaining set of data points \(m:=n/2^{t-1}\) satisfies

\[\frac{\sqrt{d\log\log m}}{m}=\varepsilon,\]
which occurs for \(m=O\big{(}\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}} \big{)}\), yielding the bound.
This exact technique can always be used to provide coreset bounds, provided the term \(f(n)\) grows sufficiently slowly with \(n\), which it always will for our applications. Thus we will not repeat this calculation in the proofs of the remaining theorems. Note that an identical argument for the discrepancy yields the proof of Corollary 1.1 stated in Section 1.4. However, in general, if the diameter of the set \(X\) depends on the number of data points, one has to be more careful to obtain bounds on the coreset complexity, as this dependence may change as we remove data points.
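A minimal sketch of this halving loop, with `color` standing in for any low-discrepancy coloring routine such as the Gram-Schmidt walk sketched in Section 2 (the early-exit guard against degenerate colorings is our own addition):

```python
import numpy as np

def halving_coreset(X, color, target_size):
    """Repeatedly two-color the current set with low kernel discrepancy and
    keep the +1 class, until roughly `target_size` points remain."""
    S = np.asarray(X)
    while len(S) > target_size:
        beta = color(S)                # signs in {-1, +1}^|S|
        keep = S[beta == +1]
        if len(keep) == 0 or len(keep) == len(S):
            break                      # degenerate coloring; stop early
        S = keep
    return S
```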
The following theorem follows from essentially identical arguments to those in the previous proof, noting that the Laplacian kernel can be written as \(K_{L}(x,y)=\kappa_{L}(\alpha\|x-y\|_{2})\), with \(\kappa_{L}(z)=e^{-z}\).
**Corollary 4.2**.: _For the Laplacian kernel \(K_{L}\) with any \(\alpha>0\) and any data set \(X\subseteq RB_{2}^{d}\) for any fixed constant \(R>0\) we have_
\[\operatorname{disc}_{K_{L}}(X)=O(\sqrt{d\log\log n}).\]
_In particular, one can find (in randomized polynomial time) a coreset for \(X\) of size_
\[O\left(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}}\right).\]
Thus in the interest of applications, we can invoke useful properties of the data set \(X\) in order to obtain better bounds, something that was not possible due to the discretization technique used in [19]. Karnin and Liberty [14] provide bounds assuming that the entire query space lies in a bounded region (for a particular constant depending on the kernel), but our assumption is much weaker: we are only interested in the geometry of the data set, not the query space.
In the case when our data is sufficiently low dimensional that a \(2^{O(d)}\) factor is not prohibitively large, i.e. for \(d\) constant, Theorem 1.9 yields the following improvement on the current best known bound for the Laplacian kernel [19].
**Corollary 4.3**.: _For the Laplacian kernel \(K_{L}\) with any \(\alpha>0\), we have_
\[\operatorname{disc}_{K_{L}}(n)=2^{O(d)}\sqrt{\log\log n}.\]
_In particular, given any data set \(X\subseteq\mathbb{R}^{d}\), we can find (in randomized polynomial time) a coreset of size_
\[\frac{2^{O(d)}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}}.\]
Proof.: The verification that the Laplacian kernel satisfies conditions (i)-(iv) of Theorem 1.9 is essentially identical to showing that the Gaussian kernel satisfies these properties, so we refer back to the proof of Corollary 4.1. As mentioned previously, the Laplacian kernel can be written as \(K_{L}(x,y)=\kappa_{L}(\alpha\|x-y\|_{2})\), where \(\kappa_{L}(z)=e^{-z}\), and applying Theorem 1.9 then yields the bound above.
We note that the same bound follows for the Gaussian kernel as well, though a recent paper of Tai [25] gives a bound of \(O(1)\) on the discrepancy of the Gaussian kernel for \(d\) constant. The technique Tai used for the Gaussian does not appear to generalize to other kernels, whereas our technique applies to a broader class of kernels that also includes the Laplacian kernel.
### The Exponential and the Hellinger kernel
We can also obtain improved bounds for the exponential kernel
\[K_{e}:S^{d}\times S^{d}\to\mathbb{R},\quad K_{e}(x,y)=\exp(-\alpha(1-\langle x,y\rangle)),\]
and the Hellinger kernel
\[K_{H}:\Delta_{d}\times\Delta_{d}\to\mathbb{R},\quad\ K_{H}(x,y)=\exp\Big{(}- \alpha\sum_{i\in[d]}(\sqrt{x_{i}}-\sqrt{y_{i}})^{2}\Big{)}\]
by applying Theorem 1.8.
Proof of Theorem 1.11.: First we note that \(x,y\in S^{d}\), hence in particular \(\|x\|_{2}=\|y\|_{2}=1\), so we can re-write the exponential kernel as
\[K_{e}(x,y)=\exp\Big{(}-\frac{\alpha}{2}\|x-y\|_{2}^{2}\Big{)}=K_{G}|_{S^{d}},\]
from which we see that \(K_{e}\) satisfies the conditions of Theorem 1.8. As the query space is \(Q=S^{d}\subset B_{2}^{d}\), we can apply Theorem 1.8 with \(R=1\).
We now argue that the same bound applies to the Hellinger kernel.
**Lemma 4.1**.: _The Hellinger kernel \(K_{H}\) has discrepancy at most that of the exponential kernel \(K_{e}\), i.e._
\[\operatorname{disc}_{K_{H}}(n)\leq\operatorname{disc}_{K_{e}}(n).\]
Proof of Lemma 4.1.: Given a data set \(X\subseteq\Delta_{d}\) of size \(n\), we make the transformation
\[f:\mathbb{R}^{d}\to\mathbb{R}^{d},\quad(x_{1},\ldots,x_{d})\to(\sqrt{x_{1}}, \ldots,\sqrt{x_{d}}).\]
Here we make two observations: first, \(f\) maps \(\Delta^{d}\) into \(S^{d}\); and second, for \(x,y\in\Delta^{d}\),
\[K_{H}(x,y)=\exp\Big{(}-\alpha\sum_{i\in[d]}(\sqrt{x_{i}}-\sqrt{y_{i}})^{2} \Big{)}=\exp(-\alpha\|f(x)-f(y)\|_{2}^{2})=K_{e}(f(x),f(y)).\]
By the first observation, we can apply our bound on the exponential kernel discrepancy of the set \(\{f(x):\ x\in X\}\subseteq S^{d}\) to find signs \(\beta\in\{\pm 1\}^{X}\) so that
\[\sup_{y\in S^{d}}\Big{|}\sum_{x\in X}\beta(x)K_{e}(f(x),y)\Big{|} \geq\sup_{z\in\Delta^{d}}\Big{|}\sum_{x\in X}\beta(x)K_{e}(f(x),f(z)) \Big{|}\] \[=\sup_{z\in\Delta^{d}}\Big{|}\sum_{x\in X}\beta(x)K_{H}(x,z)\Big{|}\] \[=\operatorname{disc}_{K_{H}}(X)\]
satisfies the bound of Theorem 1.11. Thus we conclude that
\[\operatorname{disc}_{K_{H}}(X)\leq\operatorname{disc}_{K_{e}}(X),\]
and as \(X\) was an arbitrary data set, the lemma follows.
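The mapping identity used in this proof is easy to confirm numerically; in the sketch below the dimension, bandwidth, and random draws are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, d = 2.0, 5
x, y = rng.dirichlet(np.ones(d)), rng.dirichlet(np.ones(d))

fx, fy = np.sqrt(x), np.sqrt(y)             # f maps the simplex into the sphere
assert np.isclose(np.linalg.norm(fx), 1.0)  # sum_i x_i = 1  =>  ||f(x)||_2 = 1

K_H = np.exp(-alpha * np.sum((np.sqrt(x) - np.sqrt(y))**2))
K_via_sphere = np.exp(-alpha * (2.0 - 2.0 * np.dot(fx, fy)))  # uses ||fx|| = ||fy|| = 1
assert np.isclose(K_H, K_via_sphere)
```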
Theorem 1.12 then follows immediately from Theorem 1.11 and Lemma 4.1.
## 5. The Jensen-Shannon kernel
Finally, we prove the following bound for a specific kernel that does not directly satisfy the conditions of Theorems 1.8 and 1.9: the Jensen-Shannon (JS) Kernel, defined by
\[K_{JS}:\Delta_{d}\times\Delta_{d}\to\mathbb{R}\quad\ K_{JS}(x,y)=\exp\left[- \alpha\left(H\left(\frac{x+y}{2}\right)-\frac{H(x)+H(y)}{2}\right)\right],\]
where \(H(z)=-\sum_{i\in[d]}z_{i}\ln z_{i}\) is the Shannon entropy function. Note that the quantity in the exponent of the JS Kernel, \(H\left(\frac{x+y}{2}\right)-\frac{H(x)+H(y)}{2}\), is precisely the Jensen gap for the Shannon entropy function (hence the name Jensen-Shannon kernel).
The Jensen-Shannon kernel, as well as the Hellinger kernel introduced in Section 4, is commonly used in information theory, as these kernels measure similarity between histograms (i.e., data points that are discrete probability distributions over \(d\) variables). One example from text analysis [19] encodes each data point, corresponding to a text, as the vector of probabilities of \(d\) given words appearing in that text. Although the JS kernel does not satisfy many of the conditions of our theorems, a similar proof strategy yields Theorem 1.10.
To prove this result, we will need the following lemmas bounding the one-dimensional entropy function, which we will denote by \(h(x):=x\ln(\frac{1}{x})=-x\ln(x)\).
**Lemma 5.1**.: _For \(a,\delta\geq 0\) one has \(|2h(a+\delta)-(h(a)+h(a+2\delta))|\leq 3\delta\)._
Proof.: By continuity of \(h\) it suffices to prove the inequality for the case of \(a>0\). We have
\[|2h(a+\delta)-h(a)-h(a+2\delta)|\] \[= |2(a+\delta)\ln(a+\delta)-a\ln(a)-(a+2\delta)\ln(a+2\delta)|\] \[= \Big{|}2(a+\delta)\cdot\Big{(}\ln(a)+\ln\Big{(}1+\frac{\delta}{a }\Big{)}\Big{)}-a\ln(a)-(a+2\delta)\cdot\Big{(}\ln(a)+\ln\Big{(}1+\frac{2 \delta}{a}\Big{)}\Big{)}\Big{|}\] \[= \Big{|}2(a+\delta)\ln\Big{(}1+\frac{\delta}{a}\Big{)}-(a+2\delta )\ln\Big{(}1+\frac{2\delta}{a}\Big{)}\Big{|}\] \[\stackrel{{\text{triangle ineq.}}}{{\leq}} a\cdot\underbrace{\big{|}2\ln\Big{(}1+\frac{\delta}{a}\Big{)}-\ln\Big{(}1+ \frac{2\delta}{a}\Big{)}\Big{|}}_{\leq\delta/a}+2\delta\cdot\underbrace{ \big{|}\ln\Big{(}1+\frac{\delta}{a}\Big{)}-\ln\Big{(}1+\frac{2\delta}{a} \Big{)}\Big{|}}_{\leq 1}\leq 3\delta\]
Here one can verify that \(|2\ln(1+z)-\ln(1+2z)|\leq z\) for all \(z\geq 0\), which we use to bound the left term. To bound the right term we use that \(|\ln(1+z)-\ln(1+2z)|\leq 1\) for all \(z\geq 0\).
**Lemma 5.2**.: _For \(a,b\geq 0\) one has \(|h(\frac{a+b}{2})-\frac{h(a)+h(b)}{2}|\leq\frac{3}{4}|a-b|\)._
Proof.: Suppose \(b\geq a\). Applying Lemma 5.1 with \(a\) and \(\delta:=\frac{b-a}{2}\) gives
\[\Big{|}2h\Big{(}\frac{a+b}{2}\Big{)}-(h(a)+h(b))\Big{|}\leq 3\cdot\frac{b-a}{2}= \tfrac{3}{2}|a-b|.\]
The claim follows after dividing both sides by \(2\).
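Lemma 5.2 can be spot-checked numerically; in the sketch below the restriction of the random draws to \([0,1]\) is an illustrative choice (the lemma itself is stated for all \(a,b\geq 0\)):

```python
import numpy as np

def h(x):
    # one-dimensional entropy h(x) = -x ln(x), with h(0) = 0
    return np.where(x > 0, -x * np.log(np.where(x > 0, x, 1.0)), 0.0)

rng = np.random.default_rng(1)
a, b = rng.uniform(0.0, 1.0, size=(2, 100_000))
gap = np.abs(h((a + b) / 2) - (h(a) + h(b)) / 2)
assert np.all(gap <= 0.75 * np.abs(a - b) + 1e-12)  # Lemma 5.2 on random pairs
```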
Proof of Theorem 1.10.: As the JS Kernel is positive definite, there exists a map \(\phi:\Delta^{d}\to\mathcal{H}_{K_{JS}}\), where \(\mathcal{H}_{K_{JS}}\) is a RKHS for \(K_{JS}\), i.e. for any choice of \(x,y\in\Delta_{d}\), \(K_{JS}(x,y)=\langle\phi(x),\phi(y)\rangle_{\mathcal{H}_{K_{JS}}}\). Then as \(\|\phi(x)\|_{\mathcal{H}_{K_{JS}}}=e^{0}=1\) for any choice of \(x\in\Delta^{d}\), we can apply the Gram-Schmidt walk to the collection of vectors \(\{\phi(x)\}_{x\in X}\) exactly as in the proofs of Theorems 1.8 and 1.9 to obtain a distribution \(\mathcal{D}\) over \(\{\pm 1\}^{X}\) and a corresponding family of \(O(1)\)-subgaussian random variables
\[\Sigma_{y}:=\operatorname{disc}_{K_{JS}}(X,\beta,y),\quad y\in\Delta_{d},\ \beta\sim\mathcal{D}.\]
Then applying Theorem 2.4 to this collection of random variables with the pseudometric
\[D_{K_{JS}}(x,y)=\|\phi(x)-\phi(y)\|_{\mathcal{H}_{K_{JS}}}=\sqrt{2-2K_{JS}(x, y)},\]
we find that
\[\mathbb{E}\ \mathrm{disc}_{K_{JS}}(X)\lesssim\int_{0}^{\operatorname{diam}(D _{K_{JS}})}\sqrt{\log\mathcal{N}(\Delta_{d},D_{K_{JS}},r)}\ \mathrm{d}r. \tag{11}\]
We focus on bounding the term \(\mathcal{N}(\Delta_{d},D_{K_{JS}},r)\), for which we will use Lemma 5.2. First, for \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\), we break the \(d\)-dimensional entropy function \(H\) down component-wise as \(H(x)=\sum_{i\in[d]}h(x_{i})\), so that:
\[D_{K_{JS}}(x,y)\leq r\quad\Longleftrightarrow\quad\sum_{i\in[d]}\left(h\left( \frac{x_{i}+y_{i}}{2}\right)-\frac{h(x_{i})+h(y_{i})}{2}\right)\leq\frac{1}{ \alpha}\log\big{(}\tfrac{2}{2-r^{2}}\big{)}. \tag{12}\]
By Lemma 5.2, we know that for each \(i\in[d]\),
\[h\left(\frac{x_{i}+y_{i}}{2}\right)-\frac{h(x_{i})+h(y_{i})}{2}\leq|x_{i}-y_{ i}|,\]
hence
\[\sum_{i\in[d]}\left(h\left(\frac{x_{i}+y_{i}}{2}\right)-\frac{h(x_{i})+h(y_{i })}{2}\right)\leq\sum_{i\in[d]}|x_{i}-y_{i}|=\|x-y\|_{1}.\]
In particular, by (12), \(\|x-y\|_{1}\leq\frac{1}{\alpha}\log\left(\frac{2}{2-r^{2}}\right)=:c\) implies that \(D_{K_{JS}}(x,y)\leq r\), so we have the bound
\[\mathcal{N}(\Delta_{d},D_{K_{JS}},r)\leq\mathcal{N}(\Delta_{d},\|\cdot\|_{1},c) \leq N(B_{1}^{d},cB_{1}^{d})\leq\left(1+\frac{2}{c}\right)^{d}\leq\left(\frac{3 }{c}\right)^{d}\]
for \(c\leq 1\) using Lemma 2.8 (and if \(c>1\), the covering number is \(1\) anyway). Returning to (11), we conclude
\[\mathbb{E}\ \mathrm{disc}_{K_{JS}}(X)\lesssim\int_{0}^{\mathrm{diam}(D_{K})} \sqrt{\log\left[\left(\frac{3\alpha}{\log\frac{2}{2-r^{2}}}\right)^{d}\right] }\ \mathrm{d}r\lesssim\sqrt{d\log(2\max\{1,\alpha\})},\]
by the same argument as in the proofs of Theorems 1.8 and 1.9.
## 6. Conclusions and Potential Future Questions
In this paper we give improved bounds of order \(O(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log\frac{1}{\varepsilon}})\) on the coreset complexity of the Gaussian and Laplacian kernels (as well as other kernels satisfying sufficiently nice properties) in the case that the data set of interest has \(O(1)\) diameter. We also give improved bounds of order \(O(\frac{1}{\varepsilon}\sqrt{\log\log(1/\varepsilon)})\) for the Laplacian kernel in the case that the dimension \(d\) is constant, extending a recent result of Tai for the Gaussian kernel [25] with a \(\sqrt{\log\log(1/\varepsilon)}\) loss. Finally, we provide the best known bounds of \(O(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log(2\max\{\alpha,1\})})\) for the Jensen-Shannon, exponential, and Hellinger kernels, in particular significantly improving the dependence on the bandwidth parameter \(1/\alpha\) for the exponential kernel. To conclude the paper, we echo two remaining open questions of interest, given in [19] and [25] respectively. First, it would be interesting to prove (or disprove) an upper bound of \(O(\sqrt{d}/\varepsilon)\) on the coreset complexity of Lipschitz, positive definite kernels with nice decay properties, matching the lower bound in [19]. Second, it would be interesting to extend our \(O(\sqrt{\log\log n})\) bound from Theorem 1.9 in constant dimension either to an even broader class of kernels or, more interestingly, to drop the \(\sqrt{\log\log n}\) dependence entirely, as Tai is able to do in the special case of the Gaussian. Our analytic method moves one step closer to either of these results by removing the extra factor of \(\sqrt{\log n}\) introduced by discretizing the query space.
|
2304.03582 | Reaction kinetics of CN + toluene and its implication on the productions
of aromatic nitriles in the Taurus molecular cloud and Titan's atmosphere | Reactions between cyano radical and aromatic hydrocarbons are believed to be
important pathways for the formation of aromatic nitriles in the interstellar
medium (ISM) including those identified in the Taurus molecular cloud (TMC-1).
Aromatic nitriles might participate in the formation of polycyclic aromatic
nitrogen containing hydrocarbons (PANHs) in Titan's atmosphere. Here, ab initio
kinetics simulations reveal a high efficiency of $\rm
\sim10^{-10}~cm^{3}~s^{-1}$ and the competition of the different products of
30-1800 K and $10^{-7}$-100 atm of the CN + toluene reaction. In the
star-forming region of TMC-1 environment, the product yields of benzonitrile
and tolunitriles for CN reacting with toluene may be approximately 17$\%$ and
83$\%$, respectively. The detection of main products, tolunitriles, can serve
as proxies for the undetected toluene in the ISM due to their much larger
dipole moments. The competition between bimolecular and unimolecular products
is extremely intense under the warmer and denser PANH forming region of Titan's
stratosphere. The computational results show that the fractions of
tolunitriles, adducts, and benzonitrile are 19$\%$-68$\%$, 15$\%$-64$\%$ and
17$\%$, respectively, at 150-200 K and 0.0001-0.001 atm (Titan's stratosphere).
Then, benzonitrile and tolunitriles may contribute to the formation of PANHs by
consecutive $\rm C_{2}H$ additions. Kinetic information of aromatic nitriles
for the CN + toluene reaction calculated here helps to explain the formation
mechanism of polycyclic aromatic hydrocarbons (PAHs) or PANHs under different
interstellar environments and constrains corresponding astrochemical models. | Mengqi Wu, Xiaoqing Wu, Qifeng Hou, Jiangbin Huang, Dongfeng Zhao, Feng Zhang | 2023-04-07T10:48:34Z | http://arxiv.org/abs/2304.03582v1 | Reaction kinetics of CN + toluene and its implication on the productions of aromatic nitriles in the Taurus molecular cloud and Titan's atmosphere
###### Abstract
Reactions between the cyano radical and aromatic hydrocarbons are believed to be important pathways for the formation of aromatic nitriles in the interstellar medium (ISM), including those identified in the Taurus molecular cloud (TMC-1). Aromatic nitriles might participate in the formation of polycyclic aromatic nitrogen-containing hydrocarbons (PANHs) in Titan's atmosphere. Here, ab initio kinetics simulations reveal a high efficiency of \(\sim 10^{-10}\) cm\({}^{3}\) s\({}^{-1}\) and the competition among the different products of the CN + toluene reaction over 30-1800 K and \(10^{-7}\)-100 atm. In the star-forming region of the TMC-1 environment, the product yields of benzonitrile and tolunitriles for CN reacting with toluene may be approximately 17% and 83%, respectively. The detection of the main products, tolunitriles, can serve as a proxy for the undetected toluene in the ISM due to their much larger dipole moments. The competition between bimolecular and unimolecular products is extremely intense under the warmer and denser PANH-forming region of Titan's stratosphere. The computational results show that the fractions of tolunitriles, adducts, and benzonitrile are 19%-68%, 15%-64% and 17%, respectively, at 150-200 K and 0.0001-0.001 atm (Titan's stratosphere). Then, benzonitrile and tolunitriles may contribute to the formation of PANHs by consecutive C\({}_{2}\)H additions. The kinetic information on aromatic nitriles from the CN + toluene reaction calculated here helps to explain the formation mechanism of polycyclic aromatic hydrocarbons (PAHs) or PANHs under different interstellar environments and constrains the corresponding astrochemical models.
Astrochemistry (75); Interstellar medium (847); Titan (1198); Interstellar clouds (834); Computational methods (1965); Interstellar molecules (849); Reaction rates (2081)
## 1 Introduction
Polycyclic aromatic hydrocarbons (PAHs) and their derivatives, presumably including the nitrogen-containing analogues (PANHs), are important constituents of the interstellar medium (ISM) (Tielens, 2008; Allamandola et al., 1985; Puget and Leger, 1989; van Dishoeck, 2014). Based on the ubiquitously observed infrared emission bands in space, these species are inferred to contribute 10-25% of the total cosmic carbon (Ali-Haimoud et al., 2014; Allamandola et al., 1989; Bernstein et al., 2018), and play key roles in the evolution of interstellar biological systems. They are also present in planetary atmospheres, such as that of Saturn's largest moon, Titan, where PA(N)Hs have been derived from mass spectrometric and spectroscopic data (Liang et al., 2007; Waite et al., 2007; Lopez-Puertas et al., 2013) and are likely important prebiotic molecules and precursors of aerosols in the brownish-red organic haze layers of Titan's atmosphere. However, identification of specific PAHs by radio astronomy is always challenging, because many of these large molecules have relatively small rotational constants, and pure PAHs have very small permanent dipole
moments (Lovas et al., 2005; Cernicharo et al., 2021). Very recently, a number of small aromatic molecules, including several aromatic nitriles (benzonitrile, cyanonaphthalene, and cyanoindene), have been discovered in the prototypical cold dark Taurus molecular cloud TMC-1 via rotational transitions, and are considered to be precursor molecules in the formation of larger PANHs (McGuire et al., 2018, 2021; Sita et al., 2022).
The CN radical, although highly reactive, is known to be ubiquitously and abundantly present in the ISM, circumstellar shells, and planetary atmospheres (McKellar, 1940; Adams, 1941). Its reactions with small aromatic hydrocarbons may play an important role in the formation mechanism of PANHs in the ISM (Lee et al., 2019; Landera and Mebel, 2010, 2013). E.g., in TMC-1, the detected benzonitrile, cyanonaphthalene and cyanoindene are suggested to be formed by CN reactions with benzene, naphthalene and indene, respectively, in the currently available astrochemical models. However, the column densities of the detected aromatic nitriles predicted by the models are found to be underestimated (McGuire et al., 2018, 2021; Sita et al., 2022; Burkhardt et al., 2021). Therefore, in-depth investigations of CN radical reactions with aromatic hydrocarbons help understand the formation and evolution of PA(N)Hs in the interstellar medium and in Titan's atmosphere.
The focus of the present study is on the reaction kinetics of toluene + CN under the temperature-pressure conditions relevant to both cold, harsh interstellar environments and the upper atmosphere of Titan. This reaction also serves as a model system for understanding CN reactions with methyl-substituted aromatic molecules. Joblin et al. (Joblin et al., 1996) concluded that methyl- and ethylene-substituted PAHs may be potential candidates responsible for the weak 3.4 \(\mu\)m emission feature of the observed infrared emission bands. Recently, Cernicharo et al. (Cernicharo et al., 2022) reported a 3\(\sigma\) upper limit of 6 \(~{}\times~{}10^{12}\) cm\({}^{-2}\) for toluene in the QUIJOTE1 line survey. Their model shows that the large uncertainty in toluene's abundance brings large uncertainty to the models of TMC-1. In Titan's upper atmosphere, toluene has been detected by the Cassini ion and neutral mass spectrometer (Cui et al., 2009), and Loison et al.'s model also shows that toluene is one of the most abundant substituted benzene molecules there (Loison et al., 2019). Few experimental and theoretical investigations have focused on the reactions of the CN radical with toluene, which may be a potential candidate contributing to the formation of benzonitrile in the ISM (Messinger et al., 2020). Previous experimental results on the CN + toluene reaction are from Trevitt et al. (Trevitt et al., 2010) and Messinger et al. (Messinger et al., 2020). Trevitt et al. (Trevitt et al., 2010) first used a pulsed Laval nozzle expansion coupled with laser-induced fluorescence detection to measure the rate coefficients of the CN + toluene reaction and obtained only a value of (1.3 \(~{}\pm~{}0.3\)) \(~{}\times~{}10^{-10}\) cm\({}^{3}\) s\({}^{-1}\) at 105 K, due to the nonexponential decays of CN at room temperature in the presence of toluene. They also discussed product detections from their synchrotron VUV multiplexed photoionization mass spectrometry (MPIMS) measurements. The only detected products are the cyanotoluene isomers, with no significant detection of benzyl radicals. Messinger et al. (Messinger et al., 2020) reported rate coefficients of (4.1 \(~{}\pm~{}0.2\)) \(~{}\times~{}10^{-10}\) cm\({}^{3}\) s\({}^{-1}\) over the temperature range of 15-294 K, as measured by a Cinetique de Reaction en Ecoulement Supersonique Uniforme (CRESU) apparatus coupled with the pulsed laser photolysis-laser-induced fluorescence (PLP-LIF) technique. They concluded that there is negligible pressure and temperature dependence of the total rate coefficients under the studied conditions. The authors also theoretically calculated the potential energy surface (PES) with the M06-2X/aug-cc-pVTZ method, which is the only theoretical result for the CN + toluene reaction. Their theoretical results revealed that the barrierless addition-dissociation pathways, with exit barriers well below the energy of the reactants, show great advantages under low-temperature conditions. In contrast to the consistent experimental results for the CN + benzene reaction, with rate coefficients of \(\sim 4~{}\times~{}10^{-10}\) cm\({}^{3}\) s\({}^{-1}\)(Trevitt et al., 2010; Balucani et al., 1999; Cooke et al., 2020), the discrepancy between the two studies on the CN + toluene reaction reaches a factor of 3.
The lack of product yields for the CN + toluene reaction raises questions about the effect of the aromatic molecule's structure on the mechanism for forming CN-substituted compounds, which motivates us to study it from the perspective of kinetic theories.
Here, high-level quantum chemical calculations combined with RRKM/ME simulations were used to explore the reaction mechanism and chemical kinetics of the CN + toluene reaction under conditions of 30-1800 K and 10\({}^{-7}\)-100 atm, covering the conditions relevant to molecular clouds, planetary atmospheres, and circumstellar environments (Georgievskii et al., 2013). The chosen Rice-Ramsperger-Kassel-Marcus/master equation (RRKM/ME) method has been used to obtain the temperature- and pressure-dependent product yields for the reaction systems of the CN radical with various molecules, including methylamine and ethylene (Landera and Mebel, 2013; Barone and Puzzarini, 2022; Puzzarini et al., 2020; Sleiman et al., 2016). Based on the computed PES and rate coefficients, the feasibility of identifying the related products of the titled reaction in the ISM is then discussed. Kinetic information obtained by theoretical
calculations here provides new insights into the formation of tholins in Titan's atmosphere and constrains the model of TMC-1.
## 2 Methodology
The hybrid-meta-exchange correlation functional, M06-2X (Zhao & Truhlar, 2008), with D3 dispersion correction (Grimme et al., 2010) and 6-311+G(d,p) basis sets (Krishnan et al., 1980; Clark et al., 1983; Rassolov et al., 2001) was chosen to obtain the geometries of the reactants, products, and transition states (TSs) on the PES of the CN + toluene reaction. The zero-point energies (ZPEs) and vibrational frequencies were computed at the same level with a scaling factor of 0.970 (Alecu et al., 2010). The chosen M06-2X functional has shown excellent performance in studying main-group thermochemistry (Zhao & Truhlar, 2008). Single point energy (SPE) calculations were implemented using the coupled-cluster single-, double-, and perturbative triple-excitations [CCSD(T)] method (Raghavachari et al., 1989); the approximate complete basis set (CBS) limit was evaluated with the help of the Moller-Plesset second-order perturbation theory (MP2) calculations. The CCSD(T) energy is given by the sum of correlation energy and the SCF energy using the two-point extrapolation method via the following expression (Zhong et al., 2008):
\[E(HL)=E_{SCF}(CBS)+E_{corr}(CCSD(T)/CBS) \tag{1}\]
\[E_{corr}(CCSD(T)/CBS)=E_{corr}(MP2/CBS)+(E_{corr}(CCSD(T)/aVTZ)-E_{corr}(MP2/ aVTZ)) \tag{2}\]
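As written, Eqs. (1)-(2) amount to a simple composite-energy assembly; a minimal sketch follows (function and argument names are ours; all input energies would come from separate electronic-structure runs):

```python
def ccsdt_cbs_energy(e_scf_cbs, e_corr_mp2_cbs, e_corr_ccsdt_avtz, e_corr_mp2_avtz):
    """CCSD(T)/CBS composite energy from Eqs. (1)-(2), energies in hartree."""
    e_corr_cbs = e_corr_mp2_cbs + (e_corr_ccsdt_avtz - e_corr_mp2_avtz)  # Eq. (2)
    return e_scf_cbs + e_corr_cbs                                        # Eq. (1)
```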
This calculation scheme has been applied in previous studies by Barone et al. and Baiano et al. (Barone & Puzzarini, 2022; Baiano et al., 2020). The T1 diagnostic values of stationary points and TSs suggest negligible multi-reference characteristics (Rienstra-Kiracofe et al., 2000; Lee & Taylor, 1989). Main reaction types and the corresponding products are shown in Figure 1. All M06-2X calculations were implemented with the Gaussian16 (Frisch et al., 2016) program, and the CCSD(T) calculations were performed with the ORCA 4.2.0 (Neese, 2012, 2018) program package.
Based on the calculated results for the PESs, the rate coefficients and branching ratios of the various channels were computed by the RRKM/ME method (Georgievskii et al., 2013), with temperatures ranging from 30 to 1800 K and pressures ranging from \(10^{-7}\) to 100 atm. We adopted the chemically significant eigenstate (CSE) approach (Miller & Klippenstein, 2013) to solve the one-dimensional master equation. Here, we chose N\({}_{2}\) and H\({}_{2}\) as the bath gases due to their abundant fractions in Titan's atmosphere and TMC-1, respectively (Flasar et al., 2005). The Lennard-Jones (L-J) potential model was applied to mimic the interactions between the C\({}_{8}\)H\({}_{8}\)N species (\(\sigma=5.96\) Å, \(\epsilon=591.54\) K) and the bath gases N\({}_{2}\) (\(\sigma=3.681\) Å, \(\epsilon=67.89\) K) and H\({}_{2}\) (\(\sigma=2.920\) Å, \(\epsilon=26.41\) K). The single-exponential down model was selected to describe the collisional energy transfer for this system, with the average energy transferred in downward collisions taken as \(\langle\Delta\mathrm{E}\rangle_{\mathrm{down}}=260\times(\mathrm{T}/300)^{0.875}\) cm\({}^{-1}\), analogously to the collisional parameters of N\({}_{2}\) with toluene (Hippler et al., 1983). The RRKM theory was adopted to obtain the microcanonical reactive flux, except for the barrierless addition. The one-dimensional hindered rotor (1D-HR) model was employed to treat the torsions of the CH\({}_{3}\) groups, and the Eckart tunneling correction (Eckart, 1930) was used for its simplicity. The phase space theory (PST) method (Fernandez-Ramos et al., 2006) assumes that the interaction between the two reacting fragments can be described by an isotropic potential, \(-\rm C_{n}/R^{n}\). The PST method has been proven to be valid and efficient for the reactions of the CN radical with formaldehyde and acetaldehyde by Tonolo et al. under low-temperature conditions (Tonolo et al., 2020). Here, the exponent \(n\) was chosen to be 6, as is usual for neutral-neutral reactions (Tonolo et al., 2020). The coefficient \(\rm C_{6}\) was approximated by the sum of dipole-induced-dipole and dispersion terms (Woon and Herbst, 1997; Georgievskii and Klippenstein, 2005). The final \(\rm C_{6}\) value applied was 387 \(\rm E_{h}\). Nevertheless, it is worth noting that the PST method may result in greater uncertainties when calculating the reaction flux of barrierless pathways at high temperatures. Despite this limitation, we chose to employ this method in our study due to its computational efficiency. The capture rate coefficients were incorporated into the RRKM/ME simulation to obtain the final rate coefficients, with the rough approximation that the capture probabilities at each site are equal. All the RRKM/ME rate coefficients were computed with the MESS code (Georgievskii et al., 2013).

Figure 1: Main reaction types and corresponding products for the CN + toluene reaction.
## 3 Results and Discussion
### Potential Energy Surface
Figure 2 illustrates the PES for the CN + toluene reaction at the level of CCSD(T)/CBS//M06-2X-D3/6-311+G(d,p), including four addition channels, four addition-elimination channels and three abstraction channels (Figure 1). Previous studies on the CN + benzene reaction have suggested that the isocyano products cannot compete with the cyano compounds in the ISM or Titan's atmosphere (Lee et al., 2019) due to the high barriers. Thus the pathways yielding isocyano compounds are not included here. The CN + toluene reaction begins with the C atom of the CN radical attacking the pi molecular orbital of toluene, which is consistent with the reaction between the CN radical and
Figure 2: Potential energy surface of the CN + toluene reaction at the level of CCSD(T)/CBS//M06-2X-D3/6-311+G(d,p) in this work. The energetically favorable pathways are represented in red. Energies in parentheses are the results calculated at the level of M06-2X/aug-cc-pVTZ from Messinger et al. (Messinger et al., 2020).
other unsaturated hydrocarbons (Landera & Mebel, 2013; Woon & Herbst, 1997; Balucani et al., 2000; Jamal & Mebel, 2013; Woon, 2006). All of these addition processes are barrierless with exothermicities of nearly 40 kcal/mol, producing four adducts--the ipso-adduct, o-adduct, m-adduct and p-adduct. There exist some barriers below the reactants between two different adducts, and more details can be found in Figure A1 in the Appendix. The adducts can subsequently undergo isomerization, elimination and abstraction processes. The most energetically favorable reaction pathway is the elimination of the CH\({}_{3}\) group forming benzonitrile + CH\({}_{3}\) with a submerged barrier of 22.28 kcal/mol. The elimination of the hydrogen atom forming H + o-, m-, p-tolunitrile with barriers of 26.6, 28.28, and 27.99 kcal/mol can also be competitive with the CH\({}_{3}\) + benzonitrile channel, because all of the barriers are below the bimolecular reactants. These adducts can also isomerize with the hydrogen atom at the addition site migrating to adjacent sites, while the much higher barriers of the isomerization indicate that they cannot compete with the addition-elimination channels. The HCN + o-, m-, p-tolyl radical products can be formed via a roaming H-abstraction mechanism (Jamal & Mebel, 2013), which means that the CN group leaves the adducts, roams around, and finally abstracts an H atom from the addition site. The roaming H-abstraction pathways show poor competitiveness compared to the direct elimination channels due to their higher barriers. We failed to locate the TSs of direct H-abstraction. Previous results for the reactions between CN and other unsaturated hydrocarbons showed slightly positive barriers for direct H-abstraction, indicating its relatively unimportant role in the low-temperature kinetics of the CN + toluene reaction (Jamal & Mebel, 2013).
Theoretical results obtained by Messinger et al. at the level of M06-2X/aug-cc-pVTZ are also shown in parentheses in Figure 2 for comparison (Messinger et al., 2020). They revealed barrierless addition channels and energetically favorable addition-elimination channels as well, while partial structures for the isomerization and abstraction channels of this work were not included in Messinger et al.'s calculations. The largest deviation between the energies at the level of M06-2X/aug-cc-pVTZ and our calculations by the CCSD(T)/CBS//M06-2X-D3/6-311+G(d,p) method is 5.47 kcal/mol, which could introduce large errors in the kinetic results--especially at low temperatures.
### Rate Coefficients
The related experimental data are especially limited for the CN + toluene reaction, comprising only a single measurement of 1.3 \(\times\) 10\({}^{-10}\) cm\({}^{3}\) s\({}^{-1}\) at 105 K from Trevitt et al. (Trevitt et al., 2010) and an average of 4.1 \(\times\) 10\({}^{-10}\) cm\({}^{3}\) s\({}^{-1}\) using a CRESU apparatus at 15-294 K from Messinger et al. (Messinger et al., 2020). Based on the computed high-level PES, temperature- and pressure-dependent rate coefficients of the CN + toluene reaction were then calculated using the RRKM/ME theory, with temperatures ranging from 30 to 1800 K and pressures ranging from 10\({}^{-7}\) atm to 100 atm. Figure 3 shows the comparison of the calculated total rate coefficients in this work with previous experimental measurements. The calculated rate coefficients of the CN + toluene reaction approach the collisional limit with a
Figure 3: Comparison of the total rate coefficients calculated in this work with previous experimental measurements for the CN + toluene reaction, with temperatures ranging from 30 to 1800 K and pressures ranging from 10\({}^{-7}\) to 1000 atm. The contents inside the blue box are a partially enlarged view. Trevitt et al. (Trevitt et al., 2010) used a pulsed Laval nozzle expansion coupled with laser-induced fluorescence detection (total density: \(2.4\times 10^{16}\) cm\({}^{-3}\)); Messinger et al. (Messinger et al., 2020) used a CRESU apparatus coupled with the PLP-LIF technique (there were a series of pressures, and three types of bath gas: N\({}_{2}\) shown by black square markers, He shown by red square markers, and Ar shown by blue square markers). The average value of the results from Messinger et al. is represented by a red dashed line (4.1 \(\times\) 10\({}^{-10}\) cm\({}^{3}\) s\({}^{-1}\)).
range of \(1.09~{}\times~{}10^{-10}\) to \(7.96~{}\times~{}10^{-10}\) cm\({}^{3}\) s\({}^{-1}\), which is consistent with those of the reactions between CN and other unsaturated hydrocarbons (Landera and Mebel, 2013; Woon and Herbst, 1997; Balucani et al., 2000; Jamal and Mebel, 2013; Woon, 2006). The total rate coefficients first increase slightly with rising temperature and then decrease sharply once the temperature exceeds a critical value. From 100 to 200 K, there is generally good agreement between the calculated rate coefficients in this work and the experimental measurements of Messinger et al., with a ratio ranging from 1.16 to 1.70. The corresponding deviation between this work and Messinger et al.'s results increases monotonically with temperature (Messinger et al., 2020). However, our predictions overestimate the measurement of Trevitt et al. by a larger factor of 4.48 at 105 K. The calculated energies and the collisional parameters show only a slight influence on the total rate coefficients, but these parameters have some impact on the branching ratios. The two different choices of bath gas yielded similar total rate coefficients. The rough approximation of the isotropic long-range potential used in the PST method fails to capture the chemical bonding interactions and short-range repulsions (Woon and Herbst, 1997), which may contribute to a slight overestimation of the calculated rate coefficients at higher temperatures compared to experimental results. Nevertheless, considering the low temperatures of interest in this study, the uncertainties introduced by the PST method are deemed acceptable. For the high-temperature regime, more detailed theoretical methods such as ab initio molecular dynamics (AIMD), variable-reaction-coordinate variational transition state theory (VRC-VTST), or trajectory simulations are more suitable and precise. Unfortunately, the multidimensional PES calculations required by these methods are prohibitively expensive for the titled reaction and fall beyond the scope of this work. Rate coefficients for various channels with H\({}_{2}\) as the bath gas can also be found in Table A1 in the Appendix.
### Branching Ratios
Branching-ratio information for the various channels required by astrochemical models was previously limited for the CN + toluene reaction. The only direct clue to product yields at low temperature comes from the synchrotron VUV-MPIMS measurement at 105 K by Trevitt et al. (Trevitt et al., 2010) for the CN + toluene reaction. They concluded that the tolunitrile products play the dominant role and set a limit of \(\leq 15\%\) for the CH\({}_{3}\) elimination channel, while the low signal-to-noise ratios prevented them from obtaining the exact branching ratios of the different isomers of tolunitrile. The RRKM/ME calculations here show great advantages in providing the corresponding product yields for various reaction channels over a wide range of conditions (Landera and Mebel, 2013; Barone and Puzzarini, 2022; Puzzarini et al., 2020; Sleiman et al., 2016). Figure 4 (a) and (b) show the temperature dependence of the main product yields at low pressure (0.0001 atm) and atmospheric pressure (1 atm), respectively; the yields of the intermediates are not shown because they are minor. Competition among the addition channels, addition-elimination channels and abstraction channels from different sites proceeds throughout the wide range of temperature and pressure. The yields of the various addition-
Figure 4: Branching ratios of the addition-elimination channels (solid lines), addition channels (dashed lines) and abstraction channels (dash-dot-dotted lines) for CN + toluene in this work. a) Branching ratios calculated at the pressure of 0.0001 atm; b) Branching ratios calculated at the pressure of 1 atm.
elimination channels increase initially and then decrease as the temperature rises, while their competitiveness decreases sharply with rising pressure. On the contrary, the stabilization of the adducts becomes more favorable at pressures over 0.01 atm, and the yields of the adducts decrease monotonically with temperature. The branching ratios for the abstraction channels start to compete with the addition-elimination channels at temperatures over 800 K, where their yields monotonically increase with temperature over a broad range of pressures. Among the three abstraction channels from different sites, the m-abstraction channel edges out the others. Figure 5 shows the pressure dependence of the various bimolecular channels at temperatures of 30-300 K in this work. We conclude that the branching ratios for the addition-elimination channels remain virtually unchanged below 250 K and \(10^{-5}\) atm. The low-pressure and low-temperature competition results in the dominance of the H + o-tolunitrile products with a yield of \(\sim\) 34%, followed by the H + m-tolunitrile products with a yield of 32%. The CH\({}_{3}\) + benzonitrile and H + p-tolunitrile products obtain smaller yields of nearly 17% under extremely cold and thin conditions. This study of CN + toluene provides more detailed and complete results of product yields over a wide range of temperatures and pressures. The results show a slightly higher branching ratio of the CH\({}_{3}\) + benzonitrile products compared to the predicted limit from Trevitt et al. (Trevitt et al., 2010).
## 4 Astrochemical Implications
Figure 5: Pressure dependence of the branching ratios of various bimolecular products for the CN + toluene reaction at temperatures of 30–300 K and pressures ranging from \(10^{-7}\) to 1000 atm with H\({}_{2}\) as the bath gas. (a): Products of ipso-addition-elimination pathway; (b): Products of o-addition-elimination pathway; (c): Products of m-addition-elimination pathway; (d): Products of p-addition-elimination pathway.
The reaction between toluene and CN radicals shows high efficiency, even under extremely cold conditions, indicating its potential importance in TMC-1 or Titan's atmosphere. The astrochemical implications of the detailed and complete product yield information for the CN + toluene reaction are discussed in detail here.
### TMC-1
Figure 6 illustrates the branching ratios and dipole moments of the main products of the CN + toluene reaction under the TMC-1 environment (5-10 K, \(\mathrm{n_{H}}~{}\approx~{}10^{4}\) cm\({}^{-3}\)). Benzonitrile has recently been detected in the molecular cloud TMC-1 by McGuire et al. (McGuire et al., 2018). The only reaction contributing to the formation of benzonitrile in McGuire et al.'s model of TMC-1 was CN + benzene \(\rightarrow\) benzonitrile + H with a rate coefficient of \(3~{}\times~{}10^{-10}\) cm\({}^{3}\) s\({}^{-1}\) (McGuire et al., 2018). The calculated results in this work show that the CN + toluene reaction may be another candidate for producing benzonitrile in TMC-1 due to its large rate coefficients and its considerable branching ratio of nearly 17% for the benzonitrile + CH\({}_{3}\) channel (McGuire et al., 2018; Woon, 2006). The missing CN + toluene reaction in McGuire et al.'s model of TMC-1 may help to explain the underestimation of the benzonitrile column density. The results also indicate the important role of the reaction between the CH\({}_{3}\)-substituted aromatic ring and the CN radical in the production of CN-substituted aromatic rings.
This work also reveals the large yields of the three tolunitrile isomers from the CN + toluene reaction, as shown in Figure 6. All the tolunitriles possess a much higher dipole moment than their parent toluene. Cernicharo et al. (Cernicharo et al., 2022) reported a high 3\(\sigma\) upper limit of \(6~{}\times~{}10^{12}\) cm\({}^{-2}\) for toluene with the QUIJOTE1 line survey, while the small dipole moment of toluene hindered further quantification of its precise column density in TMC-1. The reaction between toluene and the CH radical is an important route for producing styrene in the model of TMC-1, and the reaction between styrene and the CH radical will contribute to the production of indene (Doddipatla et al., 2021). This work shows that the detection of tolunitriles can serve as a proxy for the identification of toluene in TMC-1, and the branching ratio information here will constrain the prediction of toluene from the model of TMC-1.
### Titan's Atmosphere
Competition between bimolecular and unimolecular products is extremely intense in the PAH/PANH-forming region of Titan's stratosphere (3--0.1 mbar; 160--180 K). Despite the negligible pressure dependence of the total rate coefficients, the yields of different products vary greatly with pressure. The fractions of tolunitriles, adducts and benzonitrile are 19%-68%, 15%-64% and 17% at 150-200 K and 0.0001-0.001 atm, respectively. Landera and Mebel (Landera and Mebel, 2010, 2013) showed that consecutive C\({}_{2}\)H additions to aromatic nitriles may contribute to the formation of PANHs. To investigate similar processes, in which the "hetero" N atoms are doped into the related products of the titled reaction in the PAH/PANH-forming region of Titan's stratosphere, we determined the reaction pathways of PANH formation from tolunitriles/benzonitrile by C\({}_{2}\)H additions, whose details can be found in Figure A2 in the Appendix. The C\({}_{2}\)H addition forms the adducts initially, and the adducts may undergo elimination of a hydrogen atom to form the C\({}_{2}\)H-substituted benzonitrile/tolunitriles or isomerization to form the various PANHs shown in Figure 7. The submerged barriers for PANH formation by C\({}_{2}\)H addition to benzonitrile/tolunitriles indicate that these reactions
Figure 6: Main products and their corresponding branching ratios of CN + toluene reaction under the TMC-1 environment. The dipole moments of the toluene and related products are also demonstrated, where the tags represent different sources of data. \({}^{\rm a}\)Calculated data at wB97M-V/aug-cc-pVTZ in this work; \({}^{\rm b}\)experimental data obtained by Rudolph et al. (Rudolph et al., 1967); \({}^{\rm c}\)experimental data from Wohlfart et al. (Wohlfart et al., 2008).
may be especially efficient in the cold environment of Titan's atmosphere. However, the byproducts formed by the addition/addition-elimination pathways in Figure A2 in the Appendix are also competitive. Further studies are needed to verify their relative yields.
## 5 Conclusions
The formation mechanism of aromatic nitriles under cold and collision-free conditions is critical for understanding the formation of tholins in Titan's atmosphere and for modeling the bottom-up formation of PAHs in the ISM. Previous studies of the CN + toluene reaction have shown its high efficiency and its relationship with benzonitrile formation in the ISM, but the exact yields and rate coefficients of the various channels needed in these astrochemical models remained ambiguous. Here, the RRKM/ME calculations reveal the competition among its addition-elimination, addition, and abstraction channels at various temperatures and pressures. In the extreme environment related to the star-forming region of TMC-1, our calculations suggest that the fraction of the benzonitrile product for CN reacting with toluene may be approximately 17%, indicating that the CN + toluene reaction can serve as one of the gas-phase formation mechanisms for benzonitrile in models of TMC-1. The results also reveal that the three tolunitrile isomers, formed with substantial yields in the CN + toluene reaction, are potential proxies for toluene owing to their larger dipole moments. In contrast to the negligible pressure dependence in the extremely cold regions of the ISM, competition between bimolecular and unimolecular products is extremely intense in the temperature range of Titan's stratosphere. Under the warmer and denser PANH-forming region of Titan's stratosphere, our results show that the fractions of tolunitriles, adducts, and benzonitrile are 19%-68%, 15%-64% and 17% at 150--200 K and 0.0001--0.001 atm, respectively. The closed-shell products, benzonitrile and tolunitriles, may contribute to the formation of PANHs by consecutive C\({}_{2}\)H additions. The yield information about aromatic nitriles for the CN + toluene reaction is calculated in this work. Such insights can be applied in future astrochemical models to better understand the evolution of tholins in Titan's atmosphere and the formation of PAHs in the ISM.
The authors appreciate Prof. Long Zhao of the University of Science and Technology of China for helpful discussions. This study was financially supported by the National Natural Science Foundation of China (22173089, 51876199). Some of the quantum chemical calculations were carried out on the supercomputing system in the Supercomputing Center of the University of Science and Technology of China. We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.
Figure 7: Main products of the CN + toluene reaction at Titan’s atmosphere. Potential PANHs produced by the subsequent C\({}_{2}\)H additions are also shown schematically. Detailed processes and related barriers of the PANH formation are shown in Figure A2 in the Appendix.
Table A1 lists rate coefficients of various channels with H\({}_{2}\) as the bath gas at 30-200 K, under pressures of \(10^{-7}\), 0.0001, and 0.001 atm. Entrance channels of the CN + toluene reaction in section 3.1 are shown in Figure A1. Reaction pathways of PANH formation from tolunitriles/benzonitrile by the C\({}_{2}\)H addition in section 4.2 are shown in Figure A2.
Figure A1: Entrance channels of the CN + toluene reaction at the level of CCSD(T)/CBS//M06-2X-D3/6-311+G(d,p). The energetically favorable pathways are represented in red. Energies in parentheses are the results calculated at the level of M06-2X/aug-cc-pVTZ from Messinger et al. (Messinger et al., 2020). |
2303.08470 | Who's in Charge? Roles and Responsibilities of Decision-Making
Components in Conversational Robots | Software architectures for conversational robots typically consist of
multiple modules, each designed for a particular processing task or
functionality. Some of these modules are developed for the purpose of making
decisions about the next action that the robot ought to perform in the current
context. Those actions may relate to physical movements, such as driving
forward or grasping an object, but may also correspond to communicative acts,
such as asking a question to the human user. In this position paper, we reflect
on the organization of those decision modules in human-robot interaction
platforms. We discuss the relative benefits and limitations of modular vs.
end-to-end architectures, and argue that, despite the increasing popularity of
end-to-end approaches, modular architectures remain preferable when developing
conversational robots designed to execute complex tasks in collaboration with
human users. We also show that most practical HRI architectures tend to be
either robot-centric or dialogue-centric, depending on where developers wish to
place the ``command center'' of their system. While those design choices may be
justified in some application domains, they also limit the robot's ability to
flexibly interleave physical movements and conversational behaviours. We
contend that architectures placing ``action managers'' and ``interaction
managers'' on an equal footing may provide the best path forward for future
human-robot interaction systems. | Pierre Lison, Casey Kennington | 2023-03-15T09:18:32Z | http://arxiv.org/abs/2303.08470v1 | # Who's in Charge? Roles and Responsibilities of Decision-Making Components in Conversational Robots
###### Abstract.
Software architectures for conversational robots typically consist of multiple modules, each designed for a particular processing task or functionality. Some of these modules are developed for the purpose of making decisions about the next action that the robot ought to perform in the current context. Those actions may relate to physical movements, such as driving forward or grasping an object, but may also correspond to communicative acts, such as asking a question to the human user. In this position paper, we reflect on the organization of those decision modules in human-robot interaction platforms. We discuss the relative benefits and limitations of modular vs. end-to-end architectures, and argue that, despite the increasing popularity of end-to-end approaches, modular architectures remain preferable when developing conversational robots designed to execute complex tasks in collaboration with human users. We also show that most practical HRI architectures tend to be either robot-centric or dialogue-centric, depending on where developers wish to place the "command center" of their system. While those design choices may be justified in some application domains, they also limit the robot's ability to flexibly interleave physical movements and conversational behaviours. We contend that architectures placing "action managers" and "interaction managers" on an equal footing may provide the best path forward for future human-robot interaction systems.
Software architectures, dialogue management, robot planning, human-robot interaction
## 1. Introduction
Conversational interaction allows humans to communicate with robots using a medium that is both intuitive and highly expressive (i.e., speech), but endowing robots with the ability to comprehend, represent, and produce spoken responses and integrate a speech system into the robots' own states and plans has been a subject of research for many years with limited success. The mere integration of an off-the-shelf automatic speech recognizer into the robotic platform is far from sufficient, as system developers need to address a host of additional challenges in communication, including how to handle turn-taking, dealing with noise, uncertainties and misunderstandings, and building common ground (Sydney et al., 2016). Those phenomena, which extend well beyond speech processing, are precisely what the spoken dialogue systems (sds) community has been addressing for many years. If humans are to naturally interact with robots, then robots need sds-particularly, multimodal sds, given the intrinsically multimodal nature of face-to-face interactions (Brock et al., 2017).
Both robotic platforms and multimodal sds are complex, multifaceted technical systems, but their integration in a single architecture raises a number of additional challenges that the community needs to face as robots that converse with humans using speech are expected to become more commonplace. In this position paper, we focus on one central question for autonomous systems: _decision making_. The problem of selecting the next action to take given the current state (as perceived by the robot sensors) has been widely studied in robotics (Sutton et al., 2017) and has led to a wide range of specialized decision systems, from motion and path planning (Grover et al., 2017; Sridharan et al., 2017) to object manipulation (Brock et al., 2017) and task planning (Kumar et al., 2017). Those decisions may either rely on reactive mechanisms (Kumar et al., 2017), planning algorithms (Sutton et al., 2017) or, increasingly, data-driven strategies optimised via imitation or reinforcement learning (Kumar et al., 2017; Sridharan et al., 2017).
sds also need to make decisions about what to say next, given the current state of the conversation (as perceived/understood by the agent). Those decisions are typically made by a dedicated module called the _dialogue manager_(Sutton et al., 2017; Kern et al., 2017), sometimes also called the _interaction manager_ in human-robot interaction (Brock et al., 2017). The dialogue manager is responsible for deciding what to say and when to say it, and is itself typically divided in two sub-tasks: dialogue
state tracking and action selection. While the goal of dialogue state tracking is to maintain a representation of the current dialogue state given the stream of speech inputs and other observations, action selection relies on this state representation to select the most appropriate system response.
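As a toy illustration of this two-stage split (not any particular system's implementation), consider the following sketch; the slot names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Minimal slot-value dialogue state (hypothetical slots)."""
    slots: dict = field(default_factory=dict)

def track_state(state: DialogueState, observation: dict) -> DialogueState:
    # Dialogue state tracking: fold the latest observation into the state.
    state.slots.update(observation)
    return state

def select_action(state: DialogueState) -> str:
    # Action selection: choose the next system act from the tracked state.
    if "destination" not in state.slots:
        return "ask(destination)"
    return f"confirm(destination={state.slots['destination']})"

state = DialogueState()
for observation in [{}, {"destination": "kitchen"}]:
    state = track_state(state, observation)
    print(select_action(state))   # ask(...), then confirm(...)
```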
A key question for action selection in both robotics and in multimodal sds is how to operate in large state-action spaces. This question becomes particularly acute when both types of systems are combined into a single platform. In this position paper, we review some of the design choices related to the "chain of command" underpinning software architectures for conversational robots. In particular, we discuss the relative merits and limitations of modular architectures in light of the growing popularity of end-to-end approaches. We also observe that many HRI systems tend to adopt either a robot-centric or dialogue-centric view of the decision chain, often influenced by the particular research background of their developers. We argue that such design biases, while justifiable in some application domains, may also limit the system ability to handle complex situations including both physical and conversational tasks. The view advocated in this paper is that task/motion managers (responsible for the physical actions executed by the robot) and interaction managers (responsible for the robot's conversational behaviour) should ideally be placed side-by-side as part of a larger, modular architecture.
## 2. Background
In this section, we review some seminal and recent research literature on decision making in sds and robotics. The reviews are by no means comprehensive, but should give the reader enough background to understand the architectural issues discussed in Section 3.
### Decision-Making in SDS
The _Information State_ approach to dialogue management (Sundundar et al., 2016; Sundar et al., 2016) has been highly influential in sds research and continues to inspire modern dialogue systems. The approach revolves around a symbolic representation of the current dialogue state, which includes all types of information relevant for managing the system's conversational behaviour, such as questions under discussion, commitments, beliefs, and intentions from dialogue participants. Two sets of rules operate on this dialogue state:
* Update rules modify/extend the dialogue state on the basis of observations, such as new user inputs.
* Decision rules then select the most appropriate dialogue move(s) on the basis of the dialogue state.
Traum and Larsson (Traum and Larsson, 2016) already note that there is no universal agreement on what should constitute a decision maker because different systems might carve up system modularity in different ways, some centralizing all decision-making in a single module while others rely on a more de-centralized distribution of labor (for example, to a natural language generation module that takes high-level information from the decision maker and generates an actual utterance).
Early implementations of the Information State framework used symbolic representations of the dialogue state (and task/domain knowledge) along with handcrafted rules for both state update and action selection. Those handcrafted rules are now often replaced by data-driven models, as detailed in the large literature on dialogue state tracking and dialogue policy optimization. Dialogue state tracking (Sundar et al., 2016; Sundar et al., 2016; Sundar et al., 2016) seeks to predict the current dialogue state (typically represented as slot-value pairs) based on the stream of user inputs. Based on the current state, the next action is then selected by a dialogue policy optimized through supervised or reinforcement learning (Sundar et al., 2016; Sundar et al., 2016; Sundar et al., 2016).
Also seminal is the early work of (Becker et al., 2017) on plan-based approaches to dialogue management, building on earlier work by (Becker et al., 2017). Planning, the authors argue, is necessary if an agent is to be _intentional_, defined as having choice with
commitment. Although plan-based frameworks are still used in a number of application domains where the task completion necessitates long-term reasoning (Kang et al., 2017), most current approaches rely on dialogue policies specifying a (learned) direct mapping between states and actions1. If the dialogue state is represented as probability distribution over possible slot-value pairs, this mapping can be formalized as a Partially Observable Markov Decision Process (POMDP), as shown by (Song et al., 2016; Wang et al., 2017). POMDPs have also been employed in robotics (Song et al., 2016), although they are - as in sds research - increasingly replaced by neural methods.
Footnote 1: This direct mapping from state to actions can be seen as representing precomputed plans for every possible state. This ”precomputation” is often achieved through reinforcement learning, i.e. by traversing large numbers of possible dialogue trajectories and using the resulting outcomes to estimate the expected cumulative reward \(Q(s,a)\) of an action \(a\) in a given state \(s\). Those \(Q\)-values are now typically expressed as a neural network. By contrast, online planning algorithms determine the desirability of each action at runtime, using e.g. forward planning (Kang et al., 2017).
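The state-to-action mapping described in the footnote can be illustrated with a toy lookup table of \(Q\)-values; the states, acts and numbers below are invented for illustration only:

```python
# Hypothetical Q-values Q(s, a): expected cumulative reward of dialogue act a
# in a (discretised) dialogue state s, as if estimated offline by RL.
Q = {
    ("slot_unknown", "ask"): 0.9,
    ("slot_unknown", "confirm"): 0.1,
    ("slot_filled", "ask"): 0.2,
    ("slot_filled", "confirm"): 0.8,
}

def policy(state: str) -> str:
    """Greedy dialogue policy: the direct state-to-action mapping, i.e. a
    precomputed 'plan' for every state."""
    candidates = {a: q for (s, a), q in Q.items() if s == state}
    return max(candidates, key=candidates.get)

print(policy("slot_unknown"))  # -> ask
print(policy("slot_filled"))   # -> confirm
```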
End-to-end approaches that directly produce system responses from the user inputs (or their transcription in the case of speech-enabled interfaces) using encoder-decoder neural models are also increasingly popular (Han et al., 2016; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Those approaches do not compute any explicit representation of the dialogue state, but encode the user inputs and the dialogue history into latent vector representations that are then employed to generate the system response token-by-token. Those end-to-end models are typically trained from dialogue corpora, either directly or on the basis of existing large neural language models.
### Decision-Making in Robotics
Decision-making and planning constitute a central problem in robotics, encompassing various tasks from high-level task planning (Kang et al., 2016; Wang et al., 2017) to path planning (Kang et al., 2017; Wang et al., 2017) and to object manipulation/grasping and motion planning (Song et al., 2016; Wang et al., 2017; Wang et al., 2017). Given a particular robot morphology and degrees of freedom, the planning algorithm seeks to derive a path to an expected target configuration or location of the robot. This path is then employed to determine which actuators or motors to activate, and when. Robot planning can either rely on model-based approaches with formal planning languages such as PDDL and its variants (Song et al., 2016; Wang et al., 2017; Wang et al., 2017), or employ data-driven methods, often based on (deep) reinforcement learning (Kang et al., 2016; Wang et al., 2017).
(Kang et al., 2017) recently reviewed the literature on planning and control of mobile robots that can navigate a space autonomously using one or more of various sensors (e.g., laser or vision-based guidance). Interestingly, in recent years autonomous mobile robots have increasingly relied on decentralized decision-making architectures (particularly for navigation). Planning is still central, but smaller decisions can be made by other modules more specific to a particular aspect of the robot, a claw for example. This is similar to work done in sds, where many systems only use the decision making module for high-level decisions, though the issue of division of labor is by no means settled.
Similar to, though distinct from, robotics is autonomous vehicle control, which also requires (fast, high-stakes) real-time decision making in novel situations. (Song et al., 2016)'s review of the autonomous vehicle control literature identified a need to fill in a gap between integrating continuous sensor information and making decisions based on more abstract logic-based reasoning. Interestingly, they mention beliefs, goals, and intentions which have been important in planning and decision making literature in several fields, including sds(Song et al., 2016), as explained above.
### Decision-Making in Human-Robot Interaction
As a general trend, the choice of decision-making framework for human-robot interaction platforms seems to be strongly influenced by the particular background and research interests of their developers. In other words, sds researchers will often view the interaction manager as constituting the central decision module of their platform (Kang et al., 2016; Wang et al., 2017; Wang et al., 2017). Under this view, task planning is either subordinated to this interaction manager or encompassed into it. Similarly, robotics
researchers will typically place task/motion planners at the center of their architecture and will often relegate dialogue management to a second-order component limited to handling speech inputs/outputs (Han et al., 2017; Chen et al., 2018).
Much of the literature that reports on the integration of (multimodal) sds in robotic platforms seems to be focused on either sds or robotics, with few examples of practical systems adding conversational abilities to robots executing complex physical tasks. Conversational robots are typically developed and evaluated in the context of social tasks that either do not require much movement, as in (Krishnan et al., 2019), or where the movements are subordinated to communicative functions, such as gestures (Han et al., 2017; Chen et al., 2018) or facial expressions (Beng et al., 2018).
## 3. Open Questions
In this section, we review two key questions related to the design of software architectures for conversational robots:
* whether to adopt modular vs. end-to-end architectures;
* how the decision modules should interact with one another.
### Modular vs. End-to-end SDS
End-to-end sds rely on a single model - typically a neural network - that takes an (often text-based) input and directly produce an output, such as a question-answering system that provides an answer to a question, or a social chatbot that produces responses given text input. End-to-end systems often focus on the ability to produce a plausible response to any input, based on an encoder-decoder model trained or fine-tuned from dialogue data. The "decisions" made by the model correspond in this case to the successive selection of tokens in the response. In other words, end-to-end conversational models do not have any explicit concept of task or plan - they are only optimized to produce plausible responses. As such, their use for task-oriented systems remains challenging, although see (Han et al., 2017; Chen et al., 2018) for a few attempts.
In contrast, modular systems are made up of modules that have well-defined roles in the system, and which can be made to communicate with each other. For example, a prototypical sds is often made up of a set of modules including automatic speech recognition (asr) that transcribes speech to a text representation of the human utterances, natural language understanding (nlu) that takes the text and yields a semantic abstraction, dialogue management (including dialogue state tracking) that makes the high-level decision about the next action to take given the semantic abstraction, natural language generation (nlg) that takes the dialogue manager's decision and determines which words to use and in what order, and text-to-speech (tts) which actually utters the response.
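A minimal sketch of this prototypical pipeline is shown below; each function is a stub standing in for a full component (asr, nlu, dialogue management, nlg, tts), and the string-matching logic is purely illustrative:

```python
def asr(audio: bytes) -> str:
    return "go to the kitchen"                      # speech -> text (stub)

def nlu(text: str) -> dict:
    return {"intent": "move", "goal": "kitchen"} if "kitchen" in text else {}

def dialogue_manager(semantics: dict) -> dict:
    # High-level decision on the next system act, given the semantics.
    if semantics.get("intent") == "move":
        return {"act": "confirm", "goal": semantics["goal"]}
    return {"act": "clarify"}

def nlg(decision: dict) -> str:
    if decision["act"] == "confirm":
        return f"Heading to the {decision['goal']}."
    return "Sorry, could you repeat that?"

def tts(text: str) -> None:
    print(f"[speaking] {text}")                     # text -> speech (stub)

tts(nlg(dialogue_manager(nlu(asr(b"<audio bytes>")))))
```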
Both types of approaches have their advantages and limitations. End-to-end systems are conceptually simpler (even though their neural architectures may be large and composed of many layers), and do not suffer from cascading failures as in modular systems. As they are fully differentiable, they may also be optimized directly from the observed inputs and outputs, without needing to rely on annotations or semantic abstractions.
However, end-to-end approaches also have a number of shortcomings. Most importantly, from a designer perspective, their conversational behaviour is much more difficult to control, adjust, and interpret. System designers can never be sure of what the system will end up producing as a response, and the tendency of generative language models to e.g. hallucinate facts is well documented (Krishnan et al., 2019). As they are primarily text-based, their integration with non-linguistic knowledge sources may also be challenging, although there exist a number of approaches integrating neural models with knowledge bases (Krishnan et al., 2019; Chen et al., 2018) or images/videos (Shen et al., 2018; Wang et al., 2019). Finally, even though few-shot learning approaches may be employed to mitigate problems of data scarcity (Krishnan et al., 2019; Chen et al., 2018), their portability to scenarios with no or limited data remains difficult.
In our view, end-to-end models may be useful for speech-enabled robots whose primary purpose is to serve as social companions. However, modular approaches remain preferable for conversational robots set to execute physical tasks in the real world, either alone or in collaboration with humans. This position is motivated by several considerations. Most importantly, task-oriented robots are themselves developed around modular architectures such as ROS (Rosen et al., 2017), and a modular sds is much easier to integrate in those platforms than a monolithic, end-to-end conversational model. This is particularly true when the interaction management needs to be tightly interfaced with other decision-making modules for task, path or motion planning, which need to rely on dedicated representations and algorithms and cannot be reduced to a token-by-token generation task. In addition, modular approaches also provide better control and interpretability, which is a decisive factor when developing complex robotic platforms operating in the real world.
### Interactions between Decision-Making Modules
Which modules should be responsible for making decisions about the (linguistic and non-linguistic) actions that can be performed by the robot? As noted in Section 2.3, most current approaches tend to adopt either a sds-centric view or a robot-centric view of the decision chain. In the sds-centric view, the main center of command is the interaction manager, which is typically retooled to include decisions for robot actions. In contrast, a robot-centric view places the task/motion planning as the center stage, only including the sds as a secondary subsystem limited to receiving speech inputs and/or communicating information back to the user.
Both design choices may certainly be appropriate in certain application domains. For instance, sds-centric architectures may work well for social robots designed to provide stimulating social interactions with their human interlocutors. In those cases, managing the conversational behaviour of the robot is indeed the main objective to which all modules are subordinated. Similarly, robot-centric architectures are well-suited for robots that are primarily developed for the execution of a particular physical task, and only use sds to convey to information to its human users.
However, we contend that such architectures are insufficient for robotic systems that aim to achieve complex, collaborative tasks including mixed-initiative dialogues with human users. Indeed, such scenarios will often interleave task execution and spoken interactions in complex, often unpredictable ways. This interplay of behaviours is difficult to address in a single decision-making module, and is better handled through an architecture where the interaction manager operates in parallel to the robot planners and action controllers. Conceptual visualizations for some of these different decision making architectures can be seen in Figure 1.
Figure 1. Architectural choices for decision-making components in conversational human–robot interaction platforms. The left side shows a sds-centric view in which the interaction manager is the top decision-making module, and the right side a robot-centric view where the robot planners and controllers take precedence. At the center, a schematic architecture in which both decision-making modules are set on an equal footing and operate on the basis of a shared memory and common resources.
The interaction manager and the other robot-related components need, however, to interact closely, as their actions may impact one another. For instance, the robot may not be able to make gestures while in motion. In addition, the interaction manager may need to obtain information about the physical state of the robot and/or status of the task to complete in order to answer questions uttered by the user. As shown in Figure 1, one way to allow for an efficient exchange of information is to rely on a shared memory that can be read/written by those decision-making modules, and can also be used to resolve potential conflicts between the actions selected by each component.
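The sketch below illustrates one possible realization of such a shared memory as a lock-protected blackboard with a trivial conflict-resolution rule; the keys and the suppression rule are hypothetical, not a prescription:

```python
import threading

class SharedMemory:
    """Blackboard read/written by the interaction manager and the robot
    planners, with a trivial conflict-resolution rule (sketch only)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.state = {"robot_in_motion": False, "pending_acts": []}

    def write(self, key, value):
        with self._lock:
            self.state[key] = value

    def request_act(self, act):
        with self._lock:
            # Conflict resolution: suppress gestures while the base moves.
            if act == "gesture" and self.state["robot_in_motion"]:
                return False
            self.state["pending_acts"].append(act)
            return True

memory = SharedMemory()
memory.write("robot_in_motion", True)   # written by the motion controller
print(memory.request_act("gesture"))    # interaction manager: denied -> False
print(memory.request_act("speak"))      # speech is still allowed -> True
```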
Recent work on the EVA platform shows an impressive system that is equal parts sds- and "robot-" (i.e., virtual agent-) centric, in that the decisions made about system speech acts and virtual agent articulation are both treated as first-class citizens and important to the unfolding interaction.2 The underlying model is plan-based and, from our understanding, despite being fairly modular is quite centralized in how decisions are made. Modeling such a system is of course challenging, as state and action spaces can potentially be prohibitively large. Other work has used a centralized module for dialogue and robot action decision making, though the set of actions was fairly limited (Sadr et al., 2017; Wang et al., 2018).
Footnote 2: [https://openstream.ai/](https://openstream.ai/)
## 4. Discussion & Conclusion
Decision making in sds and robotics is a very important part of a workable system in both fields. In this paper, we looked at how the fields differ, and offered some possible paths forward on how to model decision making when sds and robots are brought together. In order for the community to share resources such as models and platforms, it is prudent to find a principled solution to what often seems to be ad-hoc modeling decisions. We have by no means comprehensively solved the problem of determining how decision-making components should interact in complex human-robot interaction systems. Moreover, we did not discuss how different state representations (i.e., how inputs and actions are represented in a system) can affect modeling decisions, nor did we touch on the importance of time horizons in decision making (i.e., some decisions are near-term, while others are longer-term like in chess where there is always a next move, but there is also an overall strategy that includes planning moves beyond just the next one). There may be a way to systematically test different methods of decision making to determine important trade offs using a task that is equal parts dialogue and robot control, such as (Wang et al., 2018), a virtual robot recipe task, which has been annotated with dialogue acts (Han et al., 2019), which we leave for future work.
|
2301.06224 | Finding meaningful and workable applied mathematics problems in science | In this short review, I will summarize my research experience in three fields
in applied mathematics: mathematical biology, applied probability, and applied
discrete mathematics. Specifically, I will show how each project was initiated,
and what wrong approaches were applied. Such details are important in learning
how to do research, but they cannot be read out from research papers. I wish
that students and junior researchers in applied mathematics could learn a
lesson from this summary. | Yue Wang | 2023-01-16T00:51:20Z | http://arxiv.org/abs/2301.06224v1 | # Finding meaningful and workable applied mathematics problems in science
###### Abstract
In this short review, I will summarize my research experience in three fields in applied mathematics: mathematical biology, applied probability, and applied discrete mathematics. Specifically, I will show how each project was initiated, and what wrong approaches were applied. Such details are important in learning how to do research, but they cannot be read out from research papers. I wish that students and junior researchers in applied mathematics could learn a lesson from this summary.
## 1 Introduction
I have been working on applied mathematics since 2012. Although my research might not be very attractive, my research experience should be helpful to students and junior researchers in applied mathematics, in both positive (what to do) and negative (what to avoid) sense. The main focus of this review is to present the story behind each paper: how did I find this problem, and what right/wrong approaches did I apply? Such lessons cannot be learned by just reading papers. I would be happy to read this type of review if I were much younger. This is the motivation of writing this summary. In this review, I will summarize my research in mathematical biology (Section 2), applied probability (Section 3), and applied discrete mathematics (Section 4).
For the convenience of readers, I have decided to move the final discussions here. The following are some lessons you might learn from this summary, and you can directly jump to the corresponding section:

* After proving some results, try to strengthen them by considering a more general or more realistic setting. A small improvement might be worth doing if this improvement is necessary in the real world (Subsection 2.1).
* As a researcher in applied mathematics, sometimes it is inevitable to work in an unfamiliar field. In this case, it is important to find a collaborator in the corresponding field. Therefore, it is always good to have more friends with different backgrounds (Subsection 2.2).
* After solving a problem, think in the opposite direction, so that your results can be used to formulate another problem (Subsection 2.3).
* Before tackling a hard problem, search for related papers and talk to different experts to avoid reinventing the wheel (Subsection 3.1).
* When writing a paper, think about your potential readers (also reviewers), especially if the paper is related to multiple fields (Subsection 3.1).
* When reading papers that propose fancy methods, check them in weird situations and try to construct counterexamples (Subsections 3.2, 3.3).
* After developing new methods, always try your best to apply them to some real data, or at least simulated data; otherwise, this will always be questioned by reviewers (Subsection 2.2). Besides, to evaluate or compare algorithms, the best way is to implement and test them, since theoretical analyses might overlook some factors.
* After learning something outside of your fields, try to apply your mathematical thinking mode to analyze it (Subsection 4.1).
* Improve your coding skill. You need to implement the methods you develop, which is sometimes nontrivial, especially if you need to design fast algorithms (Subsection 4.2). Otherwise, you need a collaborator for this.

I hope that you can find meaningful and workable problems, and avoid the mistakes I made.
## 2 Mathematical biology
### Population dynamics
In some types of cancer, there are multiple phenotypes that can convert into each other. It has been observed that starting from any one phenotype, after some time, the final proportions of different phenotypes always converge to the same constants. This phenomenon is called phenotypic equilibrium. I read a paper that uses a Markov chain model to explain this phenomenon [7]. This model has some flaws in mathematics and biology. I built a linear ODE model and proved the phenotypic equilibrium phenomenon as the convergence to the unique attracting fixed point [40]. This work also inspired other works that study multiple phenotypes in cancer [15, 3]. Later, I built a branching process model, which makes more sense in biology. I tried to prove a law of large numbers in this branching process model to explain the phenotypic equilibrium phenomenon. I found some results in a probability paper [8], but those results require a certain condition that all phenotypes form a single communicating class, which might not hold in reality. Based on the results in this paper, I used some complicated arguments to remove this condition and proved the law of large numbers I wanted [10].
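A minimal simulation sketch of this phenomenon under a hypothetical growth/transition matrix (the entries below are invented): whatever the initial phenotype, the normalized proportions converge to the same vector, the dominant eigenvector of the matrix.

```python
import numpy as np

# Hypothetical matrix A for three phenotypes: off-diagonal entries are
# switching rates, diagonal entries net growth. Values invented.
A = np.array([[0.50, 0.10, 0.05],
              [0.20, 0.40, 0.10],
              [0.05, 0.15, 0.30]])

def proportions(x0, t=50.0, steps=5000):
    """Integrate x' = A x with forward Euler and return the phenotype
    proportions; they converge to the dominant eigenvector of A."""
    x, dt = np.array(x0, dtype=float), t / steps
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x / x.sum()

# Different pure initial phenotypes reach the same equilibrium proportions.
for x0 in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(x0, "->", np.round(proportions(x0), 3))
```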
I collaborated with a wet lab that cultivated leukemia cells. There were many cell populations, starting from 10/4/1 cells, and the cell area of each population was measured every day. Due to some technical issues, the cell area data were not quite accurate for small populations. Therefore, the analysis had to be done only for relatively large populations. I processed the data and drew different figures. Finally, I found the figure that best illustrates the most important feature: I translated all growth curves so that they start at the time point of reaching 2500 cells. In this figure, all populations from 10 initial cells grew fast, while some populations from 1 initial cell grew much slower. I built many toy models and found the most probable one, which has multiple phenotypes with different growth rates. Although from the same cell line, there might be multiple
phenotypes [16]. The difference in growth rates can be simply explained: starting from 10 cells, it is very likely to have at least one fast cell, which will dominate, and the overall growth rate is high; starting from 1 cell, it might be of the slow type, and the overall growth rate is low [19, 34]. For researchers in mathematics, communication and collaboration with experimental biologists might not be smooth at the beginning, since the thinking modes are different. I also participated in other projects on population dynamics [1, 23, 6, 36].
### Gene regulation
Genes are transcribed to mRNAs, which are then translated to proteins. Some proteins can affect the expression of other genes (mutual regulation) or of their own corresponding genes (autoregulation). Since it is extremely difficult to directly determine gene regulatory relations through biochemical methods, there have been many methods to infer gene regulation [41, 2]. I read some papers on gene regulation inference, and found that the settings are diverse: different methods use different models, which are based on different assumptions. Besides, different methods may require different data types. For this complicated situation, I gradually came up with a classification system. The data types can be classified along four dimensions: measuring gene expression or phenotype; measuring at the single-cell level or bulk level; measuring at one time point or multiple time points; measuring with or without interventions. For single-cell data at multiple time points, we need to distinguish whether one can measure the joint probability distribution, or just marginal probability distributions. In total, there are 20 data types. For each data type, I summarized known methods, developed novel methods, or proved that the data type could not be used to infer gene regulation. I implemented those new methods, and found that one method is too sensitive to small perturbations, thus not applicable to real data. My original plan was just to develop a novel inference method, but the project finally ended with a very long paper [30]. To finish such a long paper, I collaborated with a friend in experimental biology, who wrote the biological part. My ambition was to make this paper useful to researchers with different backgrounds. Therefore, I managed to modularize the different sections, and added a figure indicating the dependencies between sections. Then mathematicians and biologists can skip different sections to reach what they are interested in.
When studying the inference of mutual regulation, I also found an idea to infer the existence of autoregulation. In a Markov chain model of gene expression, if there is no autoregulation, then the variance of the expression level should be larger than its mean [24]. Thus if the mean is larger than the variance, there might be autoregulation. Since this model has infinitely many states, I failed to prove what I wanted. Therefore, I sought help from a friend in pure mathematics. We collaborated and finished all the proofs. As an applied probability paper, the results were not deep enough; as a mathematical biology paper, the inference methods were not implemented. A common and natural question from reviewers for a methodology paper is its application. Thus I had to implement my methods and apply them to some experimental data.
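The following sketch illustrates the resulting heuristic on simulated counts; the distributions are chosen only so that one has variance above the mean and the other below, and this is not the full statistical treatment of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def autoregulation_suspected(samples) -> bool:
    """Heuristic from the text: without autoregulation the variance of the
    expression level exceeds the mean, so mean > variance hints at
    autoregulation. A sketch, not the paper's full statistical test."""
    return samples.mean() > samples.var(ddof=1)

no_autoreg = rng.negative_binomial(5, 0.3, size=2000)  # variance > mean
autoreg_like = rng.binomial(40, 0.5, size=2000)        # variance < mean

print(autoregulation_suspected(no_autoreg))    # False
print(autoregulation_suspected(autoreg_like))  # True
```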
### Developmental biology
Consider tissue transplantation experiments, where a piece of donor tissue is transplanted to a host tissue. During the development of the host, we observe the fate of the transplanted tissue. The results can be normal/abnormal, host-like/donor-like. I got some summarized experimental results, and the goal was to develop some methods to infer the unknown experimental results. Since the data amount was too small, I tried several fancy models and failed. Finally, inspired by the energy function in the Ising model, I adopted a straightforward approach: estimate the similarities between experiments and design a simple penalty function. For each guess of the unknown results, calculate the total penalty: a pair of similar experiments with different results (whether known or guessed) receives a higher penalty. The guess with the minimal penalty is the most probable guess [31]. After inferring unknown results, I considered this problem from the opposite direction: if we want to use this method to infer experimental results, and we can choose which experiments to perform, what is the optimal design that minimizes the cost? This problem turns out to be a coloring problem on an \(n\)-dimensional lattice. With the help of a friend in pure mathematics, I solved this experimental design problem.
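A minimal sketch of this penalty-minimization idea with invented experiments and similarity scores (brute force over all guesses; the actual paper handles a much larger design space):

```python
from itertools import product

# Known outcomes (hypothetical) and pairwise similarities in [0, 1].
known = {"e1": "normal", "e2": "abnormal"}
unknown = ["e3", "e4"]
sim = {("e1", "e3"): 0.9, ("e2", "e3"): 0.1,
       ("e1", "e4"): 0.2, ("e2", "e4"): 0.8, ("e3", "e4"): 0.3}

def penalty(assign: dict) -> float:
    # Similar experiments with different results are penalised more heavily.
    return sum(s for (a, b), s in sim.items() if assign[a] != assign[b])

best = min(
    ({**known, **dict(zip(unknown, guess))}
     for guess in product(["normal", "abnormal"], repeat=len(unknown))),
    key=penalty,
)
print({e: best[e] for e in unknown})  # the minimal-penalty guess for e3, e4
```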
I once took over an unfinished project on positional information in developmental biology. In a developing embryo, cells at different positions seem to know where they are and develop into different (yet organized) tissues. To explain this, biologists coined a term, positional information [35]. This notion is used by different biologists with different meanings. I analyzed it carefully and proposed three criteria for defining what positional information is. I also proved that no criterion is redundant, since removing any one criterion allows some non-positional information to be included. In this logical way, positional information is clearly defined. Finally, I finished a mathematically styled paper without any mathematics [25]. I also contributed to another project in developmental biology [26].
## 3 Applied probability
### Thermodynamics in stochastic processes
My Ph.D. group has a tradition of working on the thermodynamics of stochastic processes [13, 38, 17, 18, 37, 4, 5]. I worked on a problem of comparing the thermodynamic properties of a Markov chain and its covering space. This problem is related to topology and group theory, which I did not know very well. After trying many wrong directions, I found the right theorem that I should prove. That theorem looks correct, but I got stuck for a long time, and tried to seek advice from different mathematicians. Finally I realized that I had been terrified by this problem, and that a very loose inequality was enough.
Later, I planned to generalize this result to more general covering spaces. I got stuck, asked many mathematicians, and took a related course. Then I accidentally learned that a previous paper [11] had covered everything I wanted to do, including some results that I had already proved. Therefore, my plan of generalization failed disastrously. The authors of that paper gave a very different name to the object I studied, so I did not find it when searching for references. Besides, this small field is mostly studied in Europe, so the mathematicians I asked did not know this paper. Fortunately, their methods are different from mine, so my paper was finally published [28]. The publication of this paper was also difficult: it touches statistical physics, probability, and algebraic topology, and a reviewer might be unfamiliar with one of these subjects and get lost. With my techniques in probability, I also proved theorems on finite Markov chains [27].
### Causal inference
Once my advisor (who does not work on causal inference) attended an interesting talk on quantifying causal effects, and later sent me some related papers. Traditional causal quantities have problems when two possible causes are very similar. There are newer methods that exploit the slight difference to enhance performance [9, 39]. By their definitions, when two possible causes are identical, these quantities are not defined. I tried to use continuation, namely adding small perturbations to the joint distribution to make the quantities defined, and then decreasing the perturbations to zero. This opened a can of worms: near a distribution where those quantities are not defined, they fluctuate violently. Therefore, when two possible causes are identical (or, more generally, when the Markov boundary is not unique), those quantities are essentially ill-defined. I tried to develop new causal quantities that avoid this problem but failed. Then I realized that the problem itself might be ill-posed. I proposed three criteria for a causal quantity and proved that they are incompatible when the distribution has multiple Markov boundaries [29]. Since distributions with multiple Markov boundaries are problematic, I also designed fast algorithms to detect them from finite data. Before implementing these algorithms, I did some theoretical analyses and argued that one algorithm was the best. After implementing them, I found that another algorithm was the best, since I had underestimated some factors in my theoretical analyses. Since neither my advisor nor I had training in causal inference, I had to spend a long time finding a collaborator. In such highly mathematical fields, guidance from an expert is essential, both for controlling the research direction and for writing in the correct style.
### Reinforcement learning
Once a friend in operations research sent me some papers on online learning, where a player repeatedly chooses an action and receives a stochastic reward (and thereby partially learns the system parameters). The goal of the player is to apply a wise policy that maximizes the cumulative reward. I tried to understand a fancy paper, but realized that the measure used to evaluate the performance of a policy is problematic. The measure only considers the performance in the worst case, which leads to some strange behaviors. Specifically, concrete historical data cannot help in locating the optimal policy. This field is quite friendly to someone with a background in probability, and I spent just two months working out this problem [32]. Nevertheless, such negative results are quite hard to publish. Later, I also studied a situation where the historical data are polluted [33]. The problem with this paper is that the model is too simple (so that the optimal policy can be calculated explicitly), while more complicated models can only be studied numerically, which does not provide enough insight.
## 4 Applied discrete mathematics
### Inheritance law
I have a paper on inheritance law [20], although I have never taken a course in law or even social science. This project had an unexpected origin. I once read a story about heritage succession on social media: Alice's father passed away before her paternal grandfather. Thus a portion of her father's heritage went to her paternal grandfather, and then (after the death of her paternal grandfather) to her paternal uncles and aunts. If her father had passed away after her paternal grandparents, her uncles and aunts would not have inherited from her father. The reason is that in China, the Civil Code stipulates that parents, children, and spouse split the heritage equally.
From this story, I learned that in China's inheritance law, different orders of death might lead to different inheritance results. If we regard death and inheritance as an operator on the wealth of different persons, then this operator is non-commutative. This leads to a natural question: what if multiple relatives died in the same disaster, and the order of death is unknown?
I asked a friend in legal studies, and learned that this situation has been considered in China's inheritance law. The order of death is determined by relative relations: if one dead person has no living inheritable relatives, this person is presumed to die first; if both dead persons have living inheritable relatives, and they are of the same generation, they are presumed to die simultaneously and do not inherit from each other; otherwise, the one of the senior generation is presumed to die first. For example, sisters are presumed to die simultaneously, and a son is presumed to die after his father. A schematic encoding of this rule is sketched below.
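Schematically, and glossing over edge cases (for instance, two persons who both lack living inheritable relatives), the stipulation reads as the following rule; the encoding and field names are my own, with a smaller generation number standing for a more senior generation:

```python
# Schematic encoding (my paraphrase, not legal advice) of the stipulation
# above for two persons who died in the same event.  Smaller 'generation'
# is assumed to mean more senior.
def presumed_order(a, b):
    """a, b: dicts with 'has_living_heirs' (bool) and 'generation' (int)."""
    if not a["has_living_heirs"]:
        return "a first"
    if not b["has_living_heirs"]:
        return "b first"
    if a["generation"] == b["generation"]:
        return "simultaneous (no mutual inheritance)"
    return "a first" if a["generation"] < b["generation"] else "b first"

# Father (generation 0) and son (generation 1): the senior dies first.
print(presumed_order({"has_living_heirs": True, "generation": 0},
                     {"has_living_heirs": True, "generation": 1}))
```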
This stipulation of death order does not look very rigorous. First, the term "generation" might not be well-defined. For example, Nobel laureate Gabriel Garcia Marquez married his uncle's sister-in-law. In this case, how should one compare the generations of Gabriel, his wife, and his uncle?
Even without the trouble with "generation", the stipulation of the death order still does not look right. Using some insights from discrete mathematics, I constructed a counterexample in which, for multiple dead persons, the order of death is partially known, and the stipulated order for the unknown ones contradicts the known orders.
Although that counterexample is not natural and was not appreciated by that friend in legal studies, I started to look for a way of assigning the order of death that does not lead to contradictions. After some trials, I found no good solution and started to suspect whether such a solution exists. If we cannot assign the order of death, there is another approach: the need to assign the order of death comes from the fact that the inheritance law is non-commutative. I tried to construct a commutative inheritance law and also failed.
Since neither approach worked, I guessed that the order-of-death problem cannot be solved. I proposed some reasonable criteria for an inheritance law. Then I proved that under such criteria, no inheritance law is commutative, and any assignment of the order of death can lead to contradictions. The proofs need only some basic tricks of discrete mathematics. In sum, any inheritance law is incomplete. Proposing reasonable criteria and proving that they are incompatible is a standard approach to deriving impossibility results, similar to Arrow's impossibility theorem in voting [14]. Certainly, in practice, the order is reversed: I first considered how to prove such impossibility results, and then tried to propose necessary criteria and remove redundant ones.
A commutative inheritance law is incompatible with the three criteria I proposed (gender equality, inheritance rights of children, and conditional inheritance rights of parents). Since assigning the order of death is impossible, I switched to constructing a commutative inheritance law that violates only one criterion. I proved that there are only four commutative inheritance laws that violate only one criterion. Therefore, that criterion is the price of a well-defined inheritance law.
After proving this theorem, I started to write a paper. To fill in the introduction, I checked the inheritance laws of different countries. The French Civil Code stipulates that if one cannot determine the order of death of two persons, they do not inherit from each other. This means that the order of death is not really assigned (the persons are incomparable). After learning this, I realized an embarrassing fact: I had only proved that assigning the order of death leads to contradictions, but this French approach introduces no new order and leads to no contradiction. Thus it is a perfect solution to the order-of-death problem. I then modified my proof to include this situation, and the final result became: the French approach is the only valid solution. The results about the four commutative inheritance laws that violate only one criterion also became useless in practice; since they are of some theoretical interest, I moved them to the appendix.
This paper is in an awkward situation: researchers in legal studies do not understand the mathematical proofs, and applied mathematicians are not interested in law. Therefore, it was extremely difficult to find a proper journal and proper reviewers (although I finally succeeded). It was also hard to seek guidance or find a collaborator.
### Algorithm design
Although not well trained in computer science, I used my techniques in discrete mathematics to design algorithms and prove their correctness. One project considers finding transposons: given some annotated gene sequences from different individuals, such as \(\{1,2,3,4,5,6\}\), \(\{1,2,5,3,4,6\}\), \(\{1,2,3,4,6,5\}\), how should one find the genes that change their locations? In this example, gene 5 obviously changes its location. However, we can also explain the difference by assuming that genes \(3,4,6\) change their locations. Certainly, we could minimize the number of operations needed to transform all sequences into the same one. I came up with a more concise solution: determine the longest common subsequence, and its complement consists of all the genes that translocate (see the sketch below). With help from a friend in computer science, I implemented a fast algorithm, which was used to compare the genes of bacteria [12]. Although determining the longest common subsequence is a classic problem in computer science, I also considered the problem with multiple longest common subsequences, which became a new paper [22].
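Here is a minimal two-sequence illustration of this idea; the paper's algorithm handles many sequences at once, and this sketch only shows the core longest-common-subsequence step on the example above:

```python
# Minimal two-sequence sketch: genes outside a longest common subsequence
# are the candidates that changed location.
def lcs(a, b):
    """Classic O(len(a)*len(b)) dynamic program returning one LCS."""
    m, k = len(a), len(b)
    dp = [[0] * (k + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(k):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, k          # backtrack to recover one LCS
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

ref, other = [1, 2, 3, 4, 5, 6], [1, 2, 5, 3, 4, 6]
common = lcs(ref, other)
print("candidate transposons:", sorted(set(ref) - set(common)))  # -> [5]
```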
Another algorithm design work needs to compare two rooted trees with possibly repeated labels, where there is no order for different children vertices. I read many related papers but found no method that can be directly applied to this situation. I first designed a "best match" metric, which calculates the minimal distance between all possible transformed trees. However, I thought this metric could not be calculated in polynomial time, and I switch to another metric, which is fast to calculate. Later, I found that the best match metric could be calculated in quadratic time. Therefore, I contained both metrics in my paper, although the best match metric is significantly better [21]. These metrics were used to compare the early development of plants.
2302.09447
Logarithmic spirals in 2d perfect fluids
We study logarithmic spiraling solutions to the 2d incompressible Euler equations which solve a nonlinear transport system on $\mathbb{S}$. We show that this system is locally well-posed in $L^p, p\geq 1$ as well as for atomic measures, that is logarithmic spiral vortex sheets. In particular, we realize the dynamics of logarithmic vortex sheets as the well-defined limit of logarithmic solutions which could be smooth in the angle. Furthermore, our formulation not only allows for a simple proof of existence and bifurcation for non-symmetric multi branched logarithmic spiral vortex sheets but also provides a framework for studying asymptotic stability of self-similar dynamics. We give a complete characterization of the long time behavior of logarithmic spirals. We prove global well-posedness for bounded logarithmic spirals as well as data that admit at most logarithmic singularities. This is due to the observation that the local circulation of the vorticity around the origin is a strictly monotone quantity of time. We are then able to show a dichotomy in the long time behavior, solutions either blow up (either in finite or infinite time) or completely homogenize. In particular, bounded logarithmic spirals should converge to constant steady states. For logarithmic spiral sheets, the dichotomy is shown to be even more drastic where only finite time blow up or complete homogenization of the fluid can and does occur.
In-Jee Jeong, Ayman R. Said
2023-02-19T00:29:57Z
http://arxiv.org/abs/2302.09447v2
# Logarithmic spirals in 2d perfect fluids
###### Abstract
We study logarithmic spiraling solutions to the 2d incompressible Euler equations which solve a nonlinear transport system on \(\mathbb{S}\). We show that this system is locally well-posed in \(L^{p},p\geq 1\) as well as for atomic measures, that is logarithmic spiral vortex sheets. In particular, we realize the dynamics of logarithmic vortex sheets as the well-defined limit of logarithmic solutions which could be smooth in the angle. Furthermore, our formulation not only allows for a simple proof of existence and bifurcation for non-symmetric multi branched logarithmic spiral vortex sheets but also provides a framework for studying asymptotic stability of self-similar dynamics.
We give a complete characterization of the long time behavior of logarithmic spirals. We prove global well-posedness for bounded logarithmic spirals as well as data that admit at most logarithmic singularities. This is due to the observation that the local circulation of the vorticity around the origin is a strictly monotone quantity of time. We are then able to show a dichotomy in the long time behavior, solutions either blow up (either in finite or infinite time) or completely homogenize. In particular, bounded logarithmic spirals should converge to constant steady states. For logarithmic spiral sheets, the dichotomy is shown to be even more drastic where only finite time blow up or complete homogenization of the fluid can and does occur.
## 1 Introduction
### Logarithmic spirals
The vorticity equation for incompressible and inviscid fluids in \(\mathbb{R}^{2}\) is given by
\[\begin{cases}\partial_{t}\omega+u\cdot\nabla\omega=0,\\ \quad\quad u=\nabla^{\perp}\Delta^{-1}\omega,\end{cases} \tag{1.1}\]
where \(\omega(t,\cdot):\mathbb{R}^{2}\to\mathbb{R}\) and \(u(t,\cdot):\mathbb{R}^{2}\to\mathbb{R}^{2}\) denote the vorticity and velocity of the fluid, respectively. In this paper, we are concerned with solutions of (1.1) which are supported on _logarithmic spirals_; in other words, vorticities \(\omega\) which are _invariant_ under the following group of transformations of \(\mathbb{R}^{2}\) parametrized by \(\lambda>0\)
\[(r,\theta)\mapsto(\lambda r,\theta+\beta\ln\lambda),\]
for some nonzero real constant \(\beta\). Here, \((r,\theta)\) denotes the polar coordinates in \(\mathbb{R}^{2}\). Under the above invariance, we can take a periodic function \(h(t,\cdot)\) of one variable such that
\[\omega(t,r,\theta)=h(t,\theta-\beta\ln r) \tag{1.2}\]
holds for all \(t,r,\theta\). Then, the two-dimensional PDE (1.1) reduces to the one-dimensional transport equation in terms of \(h\):
\[\partial_{t}h+2H\partial_{\theta}h=0 \tag{1.3}\]
coupled with the elliptic problem
\[4H-4\beta H^{\prime}+(1+\beta^{2})H^{\prime\prime}=h \tag{1.4}\]
defined on \(\mathbb{S}=\mathbb{R}/(2\pi\mathbb{Z})\). To derive (1.3)-(1.4), we take the following ansatz for the stream function
\[\Psi(t,r,\theta)=r^{2}H(t,\theta-\beta\ln r), \tag{1.5}\]
and note that the corresponding velocity field \(u=u^{r}e^{r}+u^{\theta}e^{\theta}\) is given by
\[\begin{cases}u^{r}(t,r,\theta)=-r^{-1}\partial_{\theta}\Psi=-rH^{ \prime}(t,\theta-\beta\ln r),\\ u^{\theta}(t,r,\theta)=\partial_{r}\Psi=2rH(t,\theta-\beta\ln r)-\beta rH^{ \prime}(t,\theta-\beta\ln r),\end{cases} \tag{1.6}\]
which gives that \(u\cdot\nabla\omega=2H\partial_{\theta}h\). Furthermore, (1.4) follows from \(\omega=\nabla\times u=\frac{1}{r}(\partial_{r}(ru^{\theta})-\partial_{\theta} u^{r})\) and (1.6). As we shall explain below, the special case \(\beta=0\) corresponds to the system for \(0\)-homogeneous vorticity studied in [7, 10]. In this paper, we shall study dynamical properties of the system (1.3)-(1.4) for \(\beta\neq 0\).
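For the reader's convenience, the cancellation behind \(u\cdot\nabla\omega=2H\partial_{\theta}h\) can be checked directly. Writing \(s=\theta-\beta\ln r\), so that \(\partial_{r}\omega=-\frac{\beta}{r}h^{\prime}(s)\) and \(\partial_{\theta}\omega=h^{\prime}(s)\), we compute from (1.6)

\[u\cdot\nabla\omega=u^{r}\partial_{r}\omega+\frac{u^{\theta}}{r}\partial_{\theta}\omega=\left(-rH^{\prime}(s)\right)\left(-\frac{\beta}{r}h^{\prime}(s)\right)+\left(2H(s)-\beta H^{\prime}(s)\right)h^{\prime}(s)=2H(s)h^{\prime}(s),\]

so the \(\beta\)-dependent terms cancel exactly.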
### Main results
Let us present the main results of this paper, which come in two categories: well-posedness issues and long-time dynamics.
**Well-posedness of logarithmic vortex.** To begin with, we have local and global existence and uniqueness of the system (1.3)-(1.4) for \(h\in L^{p}\) and \(L^{\infty}\), respectively.
**Theorem 1.1** (Local well-posedness).: _We have the following well-posedness results for the initial value problem of (1.3)-(1.4): if \(h_{0}\in L^{p}(\mathbb{S})\) for some \(1\leq p\leq\infty\), there exist \(T\geq c\|h_{0}\|_{L^{1}}^{-1}\), where \(c>0\) is a universal constant, and a unique local solution \(h\) belonging to \(C([0,T);L^{p}(\mathbb{S}))\) for \(p<\infty\) and to \(C_{*}([0,T);L^{\infty}(\mathbb{S}))\) for \(p=\infty\). For any \(p\), the \(L^{p}\) solution blows up at \(T^{*}\) if and only if_
\[\int_{0}^{T^{*}}\|h(t,\cdot)\|_{L^{1}}dt=\infty.\]
_Remark 1.2_.: Higher regularity (e.g. \(C^{k,\alpha},W^{k,p}\) for \(k\geq 0\)) of \(h\) propagates in time as long as \(\|h(t,\cdot)\|_{L^{1}}\) remains bounded.
In the case of bounded, or even logarithmically singular data, we have global regularity:
**Theorem 1.3** (Global well-posedness).: _The local solution in Theorem 1.1 is global in time for initial data \(h_{0}\in\cap_{p<\infty}L^{p}\) satisfying_
\[\sup_{p\geq 1}\frac{\|h_{0}\|_{L^{p}}}{p}<+\infty. \tag{1.7}\]
_In particular, \(L^{\infty}\) solutions are global in time._
_Remark 1.4_.: The above can be extended to data satisfying \(\|h_{0}\|_{L^{p}}\lesssim p\ln(10+p)\), \(\lesssim p\ln(10+p)\ln(10+\ln(10+p))\), and so on.
Next, we consider the space \(\mathcal{D}(\mathbb{S})\) of atomic measures on \(\mathbb{S}\), i.e. \(g\in\mathcal{D}\) has a representation \(g=\sum_{j\geq 0}I_{j}\delta_{\theta_{j}}\) with \(\sum_{j\geq 0}|I_{j}|<\infty\). Assuming that \(\theta_{j}\) are distinct, we set \(\|g\|_{\mathcal{D}}:=\sum_{j\geq 0}|I_{j}|\). As we shall discuss in more detail later, the following result gives a well-posedness for \(h\) which are atomic measures, which corresponds to _logarithmic vortex sheets_ in \(\mathbb{R}^{2}\).
**Theorem 1.5** (Logarithmic vortex sheets).: _Let \(h_{0}\in\mathcal{D}(\mathbb{S})\) satisfy \(h_{0}=\sum_{j\geq 0}I_{j,0}\delta_{\theta_{j,0}}\). Take some compactly supported family of mollifiers \(\varphi^{\varepsilon}(\theta)=\frac{1}{\varepsilon}\varphi(\frac{\theta}{ \varepsilon})\). Then, there exists some \(T>0\) such that for the sequence of mollified initial data \(h_{0}^{\varepsilon}:=\varphi^{\varepsilon}*h_{0}\), the corresponding unique global solutions \(\left\{h^{\varepsilon}(t,\cdot)\right\}_{\varepsilon>0}\) converge in the sense of measures to_
\[h(t,\cdot)=\sum_{j\geq 0}I_{j}(t)\delta_{\theta_{j}(t)}\]
_for all \(t\in[0,T)\). Here, \(\left\{I_{j},\theta_{j}\right\}_{j\geq 0}\) is the unique local solution to the ODE system_
\[\begin{cases}\dot{I}_{j}(t)=2H^{\prime}(t,\theta_{j}(t))I_{j}(t),\\ \dot{\theta}_{j}(t)=2H(t,\theta_{j}(t)),\end{cases} \tag{1.8}\]
_with initial data \(I_{j}(0)=I_{j,0}\) and \(\theta_{j}(0)=\theta_{j,0}\), where \(H\) is the unique Lipschitz solution to (1.4). Specifically, we have that_
\[H(t,\theta_{j}(t))=\sum_{\ell\geq 0}I_{\ell}(t)K(\theta_{j}(t)- \theta_{\ell}(t)) \tag{1.9}\]
_and_
\[H^{\prime}(t,\theta_{j}(t))=\sum_{\ell\geq 0}I_{\ell}(t)K^{ \prime}(\theta_{j}(t)-\theta_{\ell}(t)),\qquad K^{\prime}(0):=\lim_{\epsilon \to 0}\frac{K^{\prime}(\epsilon)+K^{\prime}(-\epsilon)}{2} \tag{1.10}\]
_with \(K\) being the fundamental solution to (1.4) given explicitly in (A.3). In this sense, the time-dependent Dirac measure \(h(t,\cdot)\) is the unique solution to (1.3)-(1.4) with initial data \(h_{0}\) in \([0,T)\)._
The above convergence statement does not follow directly from some norm estimate (the main problem being \(K\notin C^{1}(\mathbb{S})\); this is why \(K^{\prime}(0)\) needs to be defined as the limit in (1.10)) but instead one needs to use a crucial cancellation coming from the fact that the non-continuous part of \(K^{\prime}\) is _odd_. In a sense, this proof is similar (although simpler) to the proof that the sequence of de-singularized vortex patch solutions to the 2D Euler equation converges to the solution of the point vortex system in the singular limit ([17]). In the statement, we could have regularized each Dirac delta by either a patch or some other \(L^{\infty}\) functions as well.
Figure 1: Evolution of the vortex patch supported between two logarithmic spirals (gray region).
_Remark 1.6_.: One may extend the above statement to get local existence in the broader class \(\mathcal{M}:=\mathcal{D}+L^{1}\).
The following proposition verifies that the solutions to the one-dimensional system (1.3)-(1.4) obtained in such a class give rise to actual weak solutions to the two-dimensional Euler equations.
**Proposition 1.7**.: _Let \(h\in C_{*}([0,T);\mathcal{M}(\mathbb{S}))\) be a weak solution of (1.3) with initial data \(h_{0}\). Then, \((\omega,u)\) defined via (1.2)-(1.6) provides an integral weak solution to 2D Euler equations (1.1) with initial data \(\omega_{0}\)._
**Long time dynamics and singularity formation.** Given the above local well-posedness results, it is natural to study the long-time dynamics with initial data either in \(L^{p}\) or in \(\mathcal{D}\). It turns out that there is a _monotone_ quantity for solutions to (1.2)-(1.6), which is nothing but the local circulation:
\[\Gamma(R):=\iint_{|x|\leq R}\omega(t,x)dx=\frac{R^{2}}{2}\int h(t,\theta)d\theta. \tag{1.11}\]
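The second equality in (1.11) follows by integrating in \(\theta\) first and using the \(2\pi\)-periodicity of \(h\):

\[\iint_{|x|\leq R}\omega(t,x)dx=\int_{0}^{R}\left(\int_{0}^{2\pi}h(t,\theta-\beta\ln r)d\theta\right)rdr=\left(\int_{\mathbb{S}}h(t,\theta)d\theta\right)\int_{0}^{R}rdr=\frac{R^{2}}{2}\int_{\mathbb{S}}h(t,\theta)d\theta.\]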
Unless \(\omega\) is a constant, we have that \(\Gamma(R)\) is _strictly decreasing_ (resp. increasing) for \(\beta>0\) (resp. \(\beta<0\)), see Lemma 2.3. This is in stark contrast with the case of \(0\)-homogeneous vorticity studied in [7, 9], and one can see from the proof that the specific nature of logarithmic spirals is reflected in the evolution of \(\Gamma\). As an immediate corollary, we obtain that the only steady states of (1.2)-(1.6) in \(\mathcal{M}\) are constants. This provides the basis for obtaining long time dynamics of solutions.
In the case of bounded data, we can show long-time convergence to a constant steady state:
**Theorem 1.8** (Convergence for bounded data).: _For \(h_{0}\in L^{\infty}(\mathbb{S})\), there exist constants \(\mathbf{I}_{\pm}=\mathbf{I}_{\pm}(h_{0})\) satisfying \(|\mathbf{I}_{\pm}|\leq\|h_{0}\|_{L^{\infty}}\) such that the global-in-time solution \(h(t,\cdot)\) corresponding to \(h_{0}\) satisfies_
\[h(t,\cdot)\longrightarrow\mathbf{I}_{\pm},\qquad t\to\pm\infty\]
_in \(H^{-a}(\mathbb{S})\) for any \(a>0\)._
**Example 1.9**.: In the simple case when \(h_{0}\) is the characteristic function of an interval, i.e. \(h_{0}=\mathbf{1}_{J}\) for some \(J\subset\mathbb{S}\), we have that \((\mathbf{I}_{+},\mathbf{I}_{-})=(0,1)\) and \((\mathbf{I}_{+},\mathbf{I}_{-})=(1,0)\), depending on whether \(\beta>0\) or \(\beta<0\), respectively. See Figure 1 for an illustration: for \(\beta<0\), if the initial vorticity in \(\mathbb{R}^{2}\) is the patch supported in the gray region (Figure 1, center), then as \(t\to\infty\), the support of vorticity occupies the entire \(\mathbb{R}^{2}\) (Figure 1, right), while as \(t\to-\infty\), the vorticity locally decays to \(0\) (Figure 1, left).
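To visualize this relaxation, one can discretize (1.3)-(1.4) pseudo-spectrally, inverting the elliptic operator in (1.4) Fourier mode by Fourier mode. The following is a minimal numerical sketch, not code from the paper: the value \(\beta=1\), the grid size, the time step, and the smoothed patch-like initial data are all illustrative assumptions.

```python
# Minimal pseudo-spectral sketch of (1.3)-(1.4); illustrative parameters only.
import numpy as np

BETA, N, dt = 1.0, 256, 5e-3
theta = 2 * np.pi * np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
sym = 4 - 4j * BETA * n - (1 + BETA**2) * n**2  # Fourier symbol of (1.4)

def rhs(h):
    """h_t = -2 H h_theta, with H obtained by inverting (1.4) mode by mode."""
    H = np.real(np.fft.ifft(np.fft.fft(h) / sym))
    h_theta = np.real(np.fft.ifft(1j * n * np.fft.fft(h)))
    return -2 * H * h_theta

h = 0.5 * (1 + np.tanh(4 * np.cos(theta)))      # smoothed patch-like data
for step in range(2001):                        # classical RK4 in time
    k1 = rhs(h); k2 = rhs(h + 0.5 * dt * k1)
    k3 = rhs(h + 0.5 * dt * k2); k4 = rhs(h + dt * k3)
    h += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    if step % 400 == 0:
        print(f"t={step*dt:4.1f}   (1/2pi) * int h = {h.mean():.4f}")
```

For \(\beta>0\) the printed mean of \(h\), which is proportional to the local circulation (1.11), decreases monotonically in time, as in Example 1.9.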
**Theorem 1.10** (Trichotomy for \(L^{p}\) data).: _Consider \(1\leq p<+\infty\), \(h_{0}\in L^{p}\) and \(h(t,\cdot)\in L^{p}\) the unique local solution to (1.3)-(1.4) on a maximal positive interval of existence \((0,T_{*})\) with \(T_{*}\in(0,+\infty]\). Then one of the following three scenarios must occur._
* _Either_ \(T_{*}=+\infty\) _and there exists_ \(\mathbf{I}_{+}\in\mathbb{R}\) _such that_ \(h(t,\cdot)\underset{t\to\infty}{\longrightarrow}\mathbf{I}_{+}\) _in_ \(H^{-a}\) _for_ \(a>\max\left(0,\frac{1}{p}-\frac{1}{2}\right)\)_,_
* _the solution blows up in finite time:_ \(T_{*}<+\infty\)_,_
* _or the solution blows up in infinite time:_ \(\liminf_{t\to+\infty}\|h(t,\cdot)\|_{L^{p}}=+\infty\)_._
In the case of Dirac initial data, we can provide a simple criterion which guarantees finite time singularity formation.
**Theorem 1.11** (Singularity for Dirac measure data).: _Assume that the Dirac delta initial data \(h_{0}=\sum_{j=0}^{N-1}I_{j}\delta_{\theta_{j}}\) satisfies_
\[\beta\sum_{j=0}^{N-1}I_{j}<0.\]
_Then, the corresponding Dirac solution blows up in finite time; \(\beta\sum_{j=0}^{N-1}I_{j}(t)\to-\infty\) as \(t\to T^{*}\) for some \(T^{*}<\infty\). Otherwise if \(\beta\sum_{j=0}^{N-1}I_{j}\geq 0\) then \(\beta\sum_{j=0}^{N-1}I_{j}(t)\to 0\) as \(t\to+\infty\)._
_Remark 1.12_.: Finite time singularity formation for logarithmic vortex sheets could seem paradoxical, in view of global well-posedness of bounded logarithmic vortex solutions and the convergence statement of Theorem 1.8. Singularity for the sheets can be interpreted as a form of strong instability for patches: initial data \(h_{0}\) given by the characteristic function on an interval of length \(0<\varepsilon\ll 1\) will grow to become length \(O(1)\) after an \(O(1)\) time which is independent of \(\varepsilon\).
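To see this dichotomy numerically, one can integrate the ODE system (1.8) directly, evaluating \(K\) and the symmetrized \(K^{\prime}\) from (1.10) through the Fourier series of the kernel equation (2.1) below. The following minimal sketch is not code from the paper: the two-sheet data, the choice \(\beta=1\), the truncation level, and the explicit Euler stepping are illustrative assumptions.

```python
# Minimal sketch of the point dynamics (1.8)-(1.10); illustrative parameters.
import numpy as np

BETA, N_MODES = 1.0, 400
modes = np.arange(-N_MODES, N_MODES + 1)
coef = 1.0 / (2 * np.pi * (4 - (1 + BETA**2) * modes**2 - 4j * BETA * modes))

def K(x):
    """Kernel of (2.1), via its (truncated) Fourier series."""
    return np.real(np.sum(coef * np.exp(1j * modes * x)))

def Kp(x):
    """K'; the symmetric partial sums realize the average value (1.10) at 0."""
    return np.real(np.sum(1j * modes * coef * np.exp(1j * modes * x)))

# Two sheets with beta * sum(I) < 0: Theorem 1.11 predicts finite time blow-up.
I = np.array([-1.0, -0.5]); th = np.array([0.0, np.pi]); dt = 1e-3
for step in range(2001):                        # explicit Euler, for simplicity
    H  = np.array([sum(I[l] * K(th[j] - th[l])  for l in range(2)) for j in range(2)])
    Hp = np.array([sum(I[l] * Kp(th[j] - th[l]) for l in range(2)) for j in range(2)])
    I, th = I + dt * 2 * Hp * I, th + dt * 2 * H
    if step % 500 == 0:
        print(f"t={step*dt:.1f}   beta*sum(I) = {BETA * I.sum():+.4f}")
```

With \(\beta\sum_{j}I_{j}<0\) initially, the printed quantity decreases along the flow, consistent with the blow-up criterion of Theorem 1.11.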
### Background material
To put the consideration of logarithmic spiral vorticities and our main results into context, let us discuss various relevant topics for the incompressible Euler equations.
**Long-time dynamics for Euler**. The global well-posedness of sufficiently smooth solutions to (1.1) is now a well-established fact. The long time behavior of such solutions is far from being completely understood. Indeed, (1.1) is a non-local, nonlinear transport equation modeling a perfect fluid. Both physical and numerical experiments suggest that most solutions relax in infinite time to simpler dynamics, i.e., they experience a major contraction in phase space. This can be summarised by the following informal conjecture (see [25] and [23] respectively, [7], and also the review articles [6, 15]) regarding the long time behavior of solutions to the 2d Euler equation:
**Conjecture 1.13**.: _1. As \(t\to\pm\infty,\) generic solutions experience loss of compactness._
_2. The (weak) limit set of generic solutions consists only of solutions lying on compact orbits._
The only rigorous (and important) proofs of those conjectures are in perturbative regimes around special steady states, in the groundbreaking work of Bedrossian and Masmoudi [2] and the later extensions by Ionescu and Jia [12, 13] and by Masmoudi and Zhao [18]. The only exception is the recent work [7], where the conjecture is proven in full generality, and in particular away from equilibrium, but only for the subset of scale invariant \(m\)-fold symmetric (\(m\geq 4\)) solutions.
The results in this paper can thus be put in the larger picture of the long time behaviour of perfect fluids as follows. We give a rigorous construction of a special class of weak solutions of the Euler equations that is invariant under the flow, the class of _logarithmic spiral_ solutions first introduced in [8], in Theorems 1.1, 1.3, 1.5 and Proposition 1.7. Moreover, we rigorously prove Conjecture 1.13 in this setting, in full generality and again away from equilibrium, in Theorems 1.8, 1.10 and 1.11.
**Vortex sheets on logarithmic spirals**. While not stated in the current generality, such spiraling weak solutions (\(h_{0}\in\mathcal{D}(\mathbb{S})\)) to the Euler equations, in the Birkhoff-Rott formulation [3, 21], were first introduced in physics in the special case when they are _self-similar_ (see the formulas and discussion in Appendix B). For \(j=1\) such solutions are called Prandtl spirals and were first introduced in [19]. For \(j\geq 2\) with rotational symmetry, they were considered by Alexander [1]. It is a highly non-trivial task to verify that such formulas give rise to actual weak solutions to the Euler equations (see for instance Saffman [22, Section 8.3], Kambe [14], and Pullin [20]). The mathematical proof of this was given by Elling-Gnann in the \(m\)-fold symmetric case with \(m\geq 3\) [11], using a special cancellation which is directly related to the well-posedness theory of 2D Euler under \(m\)-fold symmetry, which we explain below. Without any symmetry hypothesis, the proof was done very recently by Cieslak-Kokocki-Ozanski in [4]. The same authors proved the existence of (a variety of) non-symmetric self similar logarithmic vortex spirals in [5]. On the other hand, [11] contains many numerical computations which exhibit various bifurcation phenomena of non-symmetric spirals.
Regarding the study of logarithmic spiral vortex sheets, our formulation based on (1.2) not only gives a very efficient way of accessing their dynamics, but also realizes the vortex sheet evolution as the well-defined limit of smoother objects, namely vorticities whose level sets are logarithmic spirals. The latter solutions can be completely smooth except at the origin. Furthermore, we emphasize that the spirals of Prandtl and Alexander are simply very specific solutions of the ODE system (1.8) obtained in this work, and this general approach provides a framework for studying the _asymptotic stability_ of self-similar singularity
formation. To illustrate this, we recover some recent results from [4, 5, 11] on existence and bifurcation of self-similar logarithmic spiral vortex sheets using our formulation in Section 4.
**Symmetries and well-posedness of 2D Euler**. An unfortunate fact about non-trivial _logarithmic spiral_ solutions is that they cannot decay at spatial infinity, which means they fail to belong to the standard well-posedness class \(L^{1}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\) for the vorticity of 2d Euler given by Yudovich theory [26]. Indeed, the space of \(L^{1}\cap L^{\infty}\) vorticity is stable under the Euler flow and defines through the Biot-Savart law a log-Lipschitz velocity \(u\), which in turn defines, by the standard Osgood theory for ODEs, a unique flow map
\[\frac{d}{dt}\Phi(t,x)=u\left(t,\Phi(t,x)\right)\text{ and }\Phi(0,x)=x.\]
A key observation from [10] is that one can use the following discrete symmetry, which is preserved by the Euler flow, to drop the \(L^{1}\) constraint on the vorticity. For \(m\in\mathbb{N}\), a function \(\omega:\mathbb{R}^{2}\to\mathbb{R}\) is said to be \(m\)-fold symmetric if \(\omega(\mathcal{O}_{m}x)=\omega(x)\) for all \(x\in\mathbb{R}^{2}\), where \(\mathcal{O}_{m}\in SO(2)\) is the matrix of the counterclockwise rotation by angle \(\frac{2\pi}{m}\). Indeed, in [10] it is shown that for \(\omega\in L^{\infty}_{m}(\mathbb{R}^{2})\) with \(m\geq 3\), \(\omega\) defines through the Biot-Savart law a log-Lipschitz velocity \(u\). Thus bounded \(m\)-fold symmetric _logarithmic spirals_ form a stable subset of a uniqueness class of solutions for the \(2d\) Euler equations, for which the spiraling motion induces an arrow of time and a strong relaxation mechanism in infinite time towards a completely homogenized fluid, Theorem 1.8.
Under the \(m\)-fold symmetry assumption with \(m\geq 3\), the logarithmic spiraling dynamics can be realized as the dynamics at the origin of some compactly supported, finite energy solutions of the 2d Euler equation on \(\mathbb{R}^{2}\) by the cut-off procedure given in the proof of Corollary 3.14 of [10]. Indeed, if \(h_{0}\in W^{1,\infty}(\mathbb{S})\) is \(m\)-fold symmetric, then for any \(\omega_{0}^{2D}\in C^{0,1}_{m}\left(\mathbb{R}^{2}\right)\), there exists a unique global-in-time solution to the two dimensional Euler equation \(\omega\in\hat{C}^{0,1}\) such that
\[\omega^{2D}(t)=\omega(t)-h(t)\in C^{0,1}\left(\mathbb{R}^{2}\right)\text{ for all time},\]
and \(h\) is the solution of (1.3)-(1.4) with initial data \(h_{0}\). We note that \(\omega_{0}^{2D}\) can be chosen in such a fashion that \(\omega_{0}\) is compactly supported.
**Dynamics of 0-homogeneous vorticity**. The special case \(\beta=0\) corresponds to the system for 0-homogeneous vorticity studied in [7, 10]. A first major difference between the \(\beta=0\) and \(\beta\neq 0\) cases is that for \(\beta=0\) it is necessary to assume \(m\)-fold (\(m\geq 3\)) symmetry of the vorticity in order to ensure the well-posedness of (1.4). Within symmetry, the same techniques used here can be used to extend the local well-posedness results for bounded \(m\)-fold symmetric scale invariant solutions in [10] and obtain results analogous to Theorems 1.1, 1.3 and 1.5. In [7], with the help of a monotone quantity quantifying the number of particles exiting the origin, it was possible to show that \(BV\)\(m\)-fold (\(m\geq 4\)) symmetric data relax in infinite time to states with finitely many jumps. This entropy found in [7] is much weaker than the monotonicity of the local circulation (1.11) exhibited here; indeed, for \(\beta=0\), (1.11) is conserved in time. This weaker entropy thus leaves room for steady states that are not identically constant and cannot handle data that are not in the closure of \(BV\) in \(L^{\infty}\), let alone in \(L^{p}\) or \(\mathcal{D}(\mathbb{S})\).
Finally, consider any finite time interval on which both the \(m\)-fold symmetric (\(m\geq 3\)) solution \(h^{0}(t)\) of the 0-homogeneous equations ((1.3) with \(\beta=0\)) and the solution \(h^{\beta}(t)\) of the logarithmic spiraling equations start from the same initial data \(h^{0}(0)=h^{\beta}(0)=h_{0}\in L^{p}\), \(1\leq p<+\infty\); then \(h^{\beta}(t)\to h^{0}(t)\) in \(L^{p}\) as \(\beta\) goes to 0. For \(h_{0}\in L^{\infty}\) we get convergence in the weak topology, and for \(h_{0}\in\mathcal{D}(\mathbb{S})\) we get convergence in the sense of measures. Indeed, this follows from the observation that the kernel \(K^{m}_{\beta}\) associated to (1.4) converges in \(W^{1,\infty}\) to \(K^{m}\) as \(\beta\) goes to 0, which in turn follows from the Cauchy-Lipschitz theorem with parameters applied to \(K^{m}_{\beta}-K^{m}\) and the observation from Remark A.2 that \(K^{m}_{\beta}(0)\) and \(K^{m^{\prime}}_{\beta}(0)\) converge to \(K^{m}(0)\) and \(K^{m^{\prime}}(0)\), respectively.
### Organization of the paper
The rest of the paper is organized as follows. In Section 2, we obtain some simple properties of the kernel of the elliptic problem (1.4). Then, the main well-posedness results are proved in Section 3. Section 4 contains
results pertaining to the long time dynamics of solutions, as well as some case studies of Dirac deltas. In particular, we recover the existence and bifurcation of symmetric and non-symmetric self similar logarithmic spiral vortex sheets. The explicit form of the kernel is given in the Appendix.
**Acknowledgments.** IJ has been supported by the National Research Foundation of Korea grant No. 2022R1C1C1011051. AS acknowledges funding from the NSF grants DMS-2043024 and DMS-2124748. We are very grateful to Tarek Elgindi for various helpful discussions and in particular for suggesting the proof of global well-posedness under the assumption (1.7).
## 2 Preliminaries
### Properties of the kernel
In this section, we deal with the elliptic equation (1.4). For any \(h\in L^{p}\) with \(p\geq 1\), the unique solution \(H\in W^{2,p}\) is given by the convolution \(H=K*h\), where the kernel \(K\) is defined by the unique solution to the ODE
\[\begin{split} 4K-4\beta K^{\prime}+(1+\beta^{2})K^{\prime\prime}= \delta(\theta),\quad 0<\theta<2\pi,\\ K(0)=K(2\pi),\quad K^{\prime}(0)=K^{\prime}(2\pi).\end{split} \tag{2.1}\]
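Testing (2.1) against the Fourier modes \(e^{in\theta}\) immediately gives, for \(\beta\neq 0\),

\[K(\theta)=\frac{1}{2\pi}\sum_{n\in\mathbb{Z}}\frac{e^{in\theta}}{4-4i\beta n-(1+\beta^{2})n^{2}},\]

and the denominator never vanishes on \(\mathbb{Z}\): its imaginary part \(-4\beta n\) vanishes only at \(n=0\), where the real part equals \(4\). Since the coefficients at \(\pm n\) are complex conjugates, \(K\) is real-valued.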
In the Appendix, we derive the explicit form of the kernel based on Fourier series. We record a few simple properties of \(K\) below.
**Lemma 2.1**.: _We have the formula_
\[\int(K^{\prime}(\theta))^{2}d\theta+\int\frac{4}{1+\beta^{2}}K^{2}(\theta)d \theta=\frac{1}{1+\beta^{2}}K(0). \tag{2.2}\]
Proof.: We start with
\[\begin{split}\int K^{\prime}(\theta)K^{\prime}(\theta)d\theta= \lim_{\varepsilon\to 0}\int_{|\theta|>\varepsilon}K^{\prime}(\theta)K^{ \prime}(\theta)d\theta\\ =-\lim_{\varepsilon\to 0}\int_{|\theta|>\varepsilon}K^{ \prime\prime}(\theta)K(\theta)d\theta+\lim_{\varepsilon\to 0}(K^{\prime}( \varepsilon)K(\varepsilon)-K^{\prime}(-\varepsilon)K(-\varepsilon)).\end{split}\]
We note that
\[\lim_{\varepsilon\to 0}(K^{\prime}(\varepsilon)K(\varepsilon)-K^{\prime}(- \varepsilon)K(-\varepsilon))=\frac{1}{1+\beta^{2}}K(0).\]
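Here we used the jump condition for \(K^{\prime}\) across \(\theta=0\): integrating (2.1) over \((-\varepsilon,\varepsilon)\) and letting \(\varepsilon\to 0\) gives

\[(1+\beta^{2})\left(K^{\prime}(0^{+})-K^{\prime}(0^{-})\right)=1,\]

so that, by continuity of \(K\), \(K^{\prime}(\varepsilon)K(\varepsilon)-K^{\prime}(-\varepsilon)K(-\varepsilon)\to\left(K^{\prime}(0^{+})-K^{\prime}(0^{-})\right)K(0)=\frac{1}{1+\beta^{2}}K(0)\).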
Next,
\[\begin{split}\lim_{\varepsilon\to 0}\int_{|\theta|> \varepsilon}K^{\prime\prime}(\theta)K(\theta)d\theta=\lim_{\varepsilon\to 0} \int_{|\theta|>\varepsilon}\frac{1}{1+\beta^{2}}\left(-4K(\theta)+4\beta K^{ \prime}(\theta)\right)K(\theta)d\theta\\ =\lim_{\varepsilon\to 0}\int_{|\theta|>\varepsilon}-\frac{4}{1+ \beta^{2}}K^{2}(\theta)+\frac{2\beta}{1+\beta^{2}}\partial_{\theta}(K^{2}( \theta))d\theta=\int-\frac{4}{1+\beta^{2}}K^{2}(\theta)d\theta.\end{split}\]
This gives (2.2).
**Lemma 2.2**.: _We have_
\[K^{\prime}(0)=-4\beta\int(K^{\prime}(\theta))^{2}d\theta. \tag{2.3}\]
_For any \(0\neq\alpha\in\mathbb{S}\),_
\[(K^{\prime}(\alpha)+K^{\prime}(-\alpha))=-4\beta\int K^{\prime}(\theta)\left( K^{\prime}(\theta+\alpha)+K^{\prime}(\theta-\alpha)\right)d\theta. \tag{2.4}\]
Proof.: We multiply both sides of (2.1) by \(K^{\prime}\) in the region \(|\theta|>\varepsilon\) to obtain
\[\int_{|\theta|>\varepsilon}KK^{\prime}d\theta-4\beta\int_{|\theta|>\varepsilon} (K^{\prime})^{2}d\theta+(1+\beta^{2})\int_{|\theta|>\varepsilon}K^{\prime\prime }K^{\prime}d\theta=0.\]
Using that
\[\int_{|\theta|>\varepsilon}KK^{\prime}d\theta=\frac{1}{2}\left(K^{2}( \varepsilon)-K^{2}(-\varepsilon)\right)\to 0\]
and
\[\int_{|\theta|>\varepsilon}K^{\prime\prime}K^{\prime}d\theta=\frac{1}{2} \left((K^{\prime})^{2}(\varepsilon)-(K^{\prime})^{2}(-\varepsilon)\right)= \frac{1}{1+\beta^{2}}\frac{K^{\prime}(\varepsilon)+K^{\prime}(-\varepsilon)} {2}\rightarrow\frac{1}{1+\beta^{2}}K^{\prime}(0)\]
as \(\varepsilon\to 0\), we conclude (2.3). The proof of (2.4) is similar. We multiply both sides of (2.1) by \(K^{\prime}(\theta+\alpha)+K^{\prime}(\theta-\alpha)\) and integrate in the region \(A:=\mathbb{S}\backslash(\{|\theta-\alpha|<\varepsilon\}\cup\{|\theta+\alpha|<\varepsilon\})\). Then
\[\int_{A}K(\theta)(K^{\prime}(\theta+\alpha)+K^{\prime}(\theta- \alpha))d\theta=-\int_{A}K^{\prime}(\theta)(K(\theta+\alpha)+K(\theta-\alpha) )d\theta+O(\varepsilon),\]
which shows that
\[2\int_{A}K(\theta)(K^{\prime}(\theta+\alpha)+K^{\prime}(\theta- \alpha))d\theta=O(\varepsilon).\]
Similarly, as \(\varepsilon\to 0\), it can be shown using integration by parts that
\[(1+\beta^{2})\int_{A}K^{\prime\prime}(\theta)(K^{\prime}(\theta+ \alpha)+K^{\prime}(\theta-\alpha))d\theta\to K^{\prime}(\alpha)+K^{ \prime}(-\alpha).\]
This finishes the proof.
### Monotonicity
**Lemma 2.3**.: _Any \(L^{p}\) solution to (1.3) satisfies_
\[\frac{d}{dt}\int hd\theta=-8\beta\int(H^{\prime})^{2}d\theta. \tag{2.5}\]
_For \(h=\sum_{i=1}^{N}I_{i}(t)\delta(\theta-\theta_{i}(t))\),_
\[\frac{d}{dt}\sum_{i=1}^{N}I_{i}=-8\beta\int(H^{\prime})^{2}d\theta \tag{2.6}\]
_where_
\[H^{\prime}(t,\theta)=\sum_{i=1}^{N}I_{i}(t)K^{\prime}(\theta- \theta_{i}(t)).\]
Proof.: Assuming that \(h\) is smooth,
\[\frac{d}{dt}\int hd\theta =-2\int H\partial_{\theta}hd\theta=2\int H_{\theta}(4H-4\beta H_{ \theta}+(1+\beta^{2})H_{\theta\theta})d\theta\] \[=-8\beta\int H_{\theta}^{2}d\theta+\int\partial_{\theta}\left(4H^ {2}+(1+\beta^{2})(H_{\theta})^{2}\right)d\theta=-8\beta\int H_{\theta}^{2}d\theta.\]
The case of \(h\in L^{p}\) follows from an approximation argument. In the case of Dirac deltas, we have
\[\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{N}I_{i}=\sum_{i,j}\int I_{i}(t)I_{j}(t)K^{ \prime}(\theta_{i}(t)-\theta_{j}(t)).\]
On the other hand,
\[-4\beta\int H_{\theta}^{2}d\theta=-4\beta\sum_{i,j}\int I_{i}(t)I_{j}(t)K^{ \prime}(\theta-\theta_{j}(t))K^{\prime}(\theta-\theta_{i}(t))d\theta.\]
Then (2.6) follows from (2.3) and (2.4).
## 3 Well-posedness issues
### Proof of well-posedness results
Proof of Theorem 1.1.: We divide the proof into three parts.
**1. Local existence in \(L^{p}\)**. We obtain a priori estimates in \(L^{p}\) for any \(p\geq 1\). For this, assume that we are given a smooth solution \(h,H\) to (1.3)-(1.4). Then,
\[\frac{d}{dt}\int|h|^{p}d\theta=-p\int|h|^{p-1}(2Hh^{\prime}){\rm sgn}\,(h)d \theta=\int 2H^{\prime}|h|^{p}d\theta\leq 2\|H^{\prime}\|_{L^{\infty}}\int|h| ^{p}d\theta. \tag{3.1}\]
Using that \(\|H^{\prime}\|_{L^{\infty}}\leq C\|h\|_{L^{p}}\) holds for any \(p\geq 1\),
\[\frac{d}{dt}\|h\|_{L^{p}}\leq\frac{C}{p}\|h\|_{L^{p}}^{2}.\]
In the case \(p=\infty\), we obtain
\[\frac{d}{dt}\|h\|_{L^{\infty}}=0.\]
This gives that \(\|h\|_{L^{\infty}([0,T^{*}];L^{p})}\leq 2\|h_{0}\|_{L^{p}}\) for \(T^{*}>0\) depending only on \(p\) and \(\|h_{0}\|_{L^{p}}\). Based on this a priori estimate, proving the existence of an \(L^{p}\) solution can be done by the method of mollification. Given any initial data \(h_{0}\in L^{p}\), consider the sequence of mollified data \(h_{0}^{\varepsilon}\) converging to \(h_{0}\) in \(L^{p}\). For each \(h_{0}^{\varepsilon}\), one can construct a corresponding local smooth solution \((h^{\varepsilon},H^{\varepsilon})\) to (1.3)-(1.4), for example by an iteration scheme. The sequence of solutions \(h^{\varepsilon}\) remains smooth in the time interval \([0,T^{*}]\) and satisfies the uniform bound \(\|h^{\varepsilon}\|_{L^{\infty}([0,T^{*}];L^{p})}\leq 2\|h_{0}\|_{L^{p}}\). Appealing to the Aubin-Lions lemma, this gives a weak-\(L^{p}\) limit \(h\in L^{\infty}([0,T^{*}];L^{p})\) as \(\varepsilon\to 0\), by passing to a subsequence if necessary. The corresponding limit \(H^{\varepsilon}\to H\) is strong in \(W^{1,p}\), and this shows that \((h,H)\) gives a weak solution to (1.3)-(1.4). (This argument is parallel to, and only easier than, the well-known proof of existence of \(L^{p}\) vorticity weak solutions to the 2D incompressible Euler equations; see [16].)
**2. Uniqueness in \(L^{1}\)**. We prove that there is at most one solution in the class \(L^{\infty}([0,T];L^{1})\). The case \(p>1\) is only easier. Given an initial datum \(h_{0}\in L^{1}\), we assume that there are two associated solutions \(h\) and \(\tilde{h}\) belonging to \(L^{\infty}([0,T];L^{1})\) for some \(T>0\). We denote \(H={\bf K}h\) and \(\tilde{H}={\bf K}\tilde{h}\). By taking \({\bf K}\) to the equation for \(h\), we may derive the evolution equations satisfied by \(H\) and \(H^{\prime}\):
\[\partial_{t}H+2H\partial_{\theta}H=-{\bf K}(8\beta-3(1+\beta^{2}) \partial_{\theta})\left[(H^{\prime})^{2}\right] \tag{3.2}\] \[\partial_{t}H^{\prime}+2H\partial_{\theta}H^{\prime}=(H^{\prime} )^{2}-4{\bf K}(3-\beta\partial_{\theta})\left[(H^{\prime})^{2}\right]. \tag{3.3}\]
Denoting \(D=(H-\tilde{H})\), we see that it satisfies
\[\partial_{t}D+2H\partial_{\theta}D+2D\partial_{\theta}\tilde{H}=- \mathbf{K}(8\beta-3(1+\beta^{2})\partial_{\theta})\left[(H^{\prime}+\tilde{H}^{ \prime})D^{\prime}\right]. \tag{3.4}\]
Here, \(H^{\prime},\tilde{H}^{\prime}\in L^{\infty}([0,T];W^{1,1})\subset L^{\infty}([0, T];C^{0})\). The operator \(-\mathbf{K}(8\beta-3(1+\beta^{2})\partial_{\theta})\) is convolution type with a bounded kernel. We consider the estimate of \(\mathbf{K}\partial_{\theta}(gD^{\prime})\), where \(g\) is a \(W^{1,1}\) function:
\[\mathbf{K}\partial_{\theta}(gD^{\prime})(\theta)=\int K^{\prime} (\theta-\theta^{\prime})g(\theta^{\prime})D^{\prime}(\theta^{\prime})d\theta^ {\prime}.\]
Note that for any \(\theta\), \(K^{\prime}(\theta-\theta^{\prime})g(\theta^{\prime})\) is differentiable in the sense of distributions with respect to \(\theta^{\prime}\), with
\[\partial_{\theta^{\prime}}\left(K^{\prime}(\theta-\theta^{ \prime})g(\theta^{\prime})\right)=K^{\prime}(\theta-\theta^{\prime})g^{\prime }(\theta^{\prime})-\frac{\delta(\theta-\theta^{\prime})}{1+\beta^{2}}g(\theta ^{\prime})+\frac{4\beta}{1+\beta^{2}}K^{\prime}(\theta-\theta^{\prime})g( \theta^{\prime})-\frac{4}{1+\beta^{2}}K(\theta-\theta^{\prime})g(\theta^{ \prime}).\]
Since \(g\) and \(D\) are continuous functions, this allows us to rewrite
\[\mathbf{K}\partial_{\theta}(gD^{\prime})(\theta)=\int A(\theta- \theta^{\prime})g^{\prime}(\theta^{\prime})D(\theta^{\prime})d\theta^{\prime}+ \frac{1}{1+\beta^{2}}g(\theta)D(\theta),\]
where \(A\) is a bounded function. In particular, we obtain the estimate
\[\|\mathbf{K}\partial_{\theta}(gD^{\prime})\|_{L^{2}}\leq C\|g\|_ {L^{\infty}}\|D\|_{L^{2}}.\]
Similarly,
\[\|\mathbf{K}(gD^{\prime})\|_{L^{2}}\leq C\|g\|_{L^{\infty}}\|D\|_ {L^{2}}.\]
We then estimate, by multiplying (3.4) against \(D\) and integrating,
\[\frac{d}{dt}\|D\|_{L^{2}}^{2} \leq C(\|H^{\prime}\|_{L^{\infty}}+\|\tilde{H}\|_{L^{\infty}}) \|D\|_{L^{2}}^{2}+\|\mathbf{K}(8\beta-3(1+\beta^{2})\partial_{\theta})((H^{ \prime}+\tilde{H}^{\prime})D^{\prime})\|_{L^{2}}\|D\|_{L^{2}}\] \[\leq C(\|h\|_{L^{1}}+\|\tilde{h}\|_{L^{1}})\|D\|_{L^{2}}^{2}.\]
Integrating in time, we obtain
\[\|D\|_{L^{2}}^{2}\leq\|D_{0}\|_{L^{2}}^{2}\exp\left(C(\|h\|_{L^{ \infty}_{t}L^{1}}+\|\tilde{h}\|_{L^{\infty}_{t}L^{1}})\right)\]
which gives uniqueness.
**3. Blow-up criterion and global existence in \(L^{\infty}\)**. From the \(L^{p}\) a priori estimate, we have that
\[\|h(t,\cdot)\|_{L^{p}}\leq\|h_{0}\|_{L^{p}}\exp\left(\frac{2}{p} \int_{0}^{t}\|H^{\prime}(\tau,\cdot)\|_{L^{\infty}}d\tau\right).\]
Therefore, the local unique \(L^{p}\) solution blows up in \(L^{p}\) at \(T^{*}\) if and only if
\[\int_{0}^{T^{*}}\|H^{\prime}(\tau,\cdot)\|_{L^{\infty}}d\tau=\infty. \tag{3.5}\]
Then for each time, \(\|H^{\prime}(\tau,\cdot)\|_{L^{\infty}}\) is equivalent to \(\|h(\tau,\cdot)\|_{L^{1}}\), from which the claimed blow-up criterion follows. In the case \(p=\infty\), we have that as long as the solution exists, \(\|h(t,\cdot)\|_{L^{\infty}}=\|h_{0}\|_{L^{\infty}}\). In turn, this gives \(\|H^{\prime}(t,\cdot)\|_{L^{\infty}}\leq C\|h_{0}\|_{L^{\infty}}\). This implies that the \(L^{\infty}\) solutions are global in time, in view of (3.5).
### Global well-posedness
Proof of Theorem 1.3.: The goal is to show that the constraint \(\sup_{p\geq 1}\frac{\left\|h(t,\cdot)\right\|_{L^{p}}}{p}<+\infty\) propagates for all \(t\). Starting back from (3.1), we get for \(1\leq p<+\infty\)
\[\frac{d}{dt}\left\|h\right\|_{L^{p}}\leq\frac{2}{p}\left\|\partial_{\theta}H \right\|_{L^{\infty}}\left\|h\right\|_{L^{p}}.\]
By the Gronwall lemma we get
\[\left\|h(t,\cdot)\right\|_{L^{p}}\leq e^{\frac{2\int_{0}^{t}\left\|\partial_{ \theta}H\right\|_{L,\infty}}{p}}\left\|h_{0}\right\|_{L^{p}},\]
thus for \(p\) such that \(\frac{2\int_{0}^{t}\left\|\partial_{\theta}H\right\|_{L^{\infty}}}{p}\leq\ln(2)\) we have
\[\left\|h(t,\cdot)\right\|_{L^{p}}\leq 2\left\|h_{0}\right\|_{L^{p}}\leq 2C_{0}p.\]
Now, so long as the solution exists, choose
\[p=\max\{1,4\int_{0}^{t}\left\|\partial_{\theta}H\right\|_{L^{\infty}}\}.\]
Note that
\[\left\|\partial_{\theta}H\right\|_{L^{\infty}}\leq C\left\|h\right\|_{L^{1}}, \qquad\left\|h\right\|_{L^{1}}\leq C\left\|h\right\|_{L^{p}},\]
for any \(p\geq 1\) and for \(C>0\) universal. It follows then that
\[\left\|h(t,\cdot)\right\|_{L^{1}}\leq C\max\{1,\int_{0}^{t}\left\|h\right\|_{ L^{1}}\}.\]
It now follows by the Gronwall Lemma that
\[\left\|h(t,\cdot)\right\|_{L^{1}}\leq C\exp(Ct).\]
which gives the desired result on the bound on the constraint and global existence.
### Convergence to logarithmic spiral sheets
Proof of Theorem 1.5.: Let \(h_{0}=\sum_{j\geq 0}I_{j,0}\delta_{\theta_{j,0}}\) satisfy the assumptions of Theorem 1.5 and let \(h\) be the corresponding local-in-time solution given by (1.8). We define \(h^{\varepsilon}\) as the sequence of smooth, global-in-time solutions with mollified initial data \(h_{0}^{\varepsilon}:=\varphi^{\varepsilon}*h_{0}\).
To begin with, note that as long as the solution to (1.8) does not blow up, we have \(\theta_{j}(t)\neq\theta_{k}(t)\) for \(j\neq k\). We fix such a time interval \([0,T]\) and then for each \(j\) we can take some \(\eta_{j}>0\) such that \(h(t,\cdot)\mathbf{1}_{A_{j}(t)}=I_{j}(t)\delta_{\theta_{j}(t)}\), where \(A_{j}(t):=(\theta_{j}(t)-\eta_{j},\theta_{j}(t)+\eta_{j})\) is the open interval.
Since the sequence of data \(h_{0}^{\varepsilon}\) is uniformly \(L^{1}\), from the \(L^{1}\) estimate given in the previous section, we have that on \([0,T]\) (by taking \(T>0\) smaller if necessary) \(h^{\varepsilon}(t,\cdot)\) is uniformly \(L^{1}\) in \(t\) and \(\varepsilon\), and \(H^{\varepsilon}(t,\cdot)\) is uniformly Lipschitz continuous in \(t\) and \(\varepsilon\). (Here, all the relevant norms are bounded in terms of \(\left\|h_{0}\right\|_{\mathcal{D}}\).)
Note that \(\operatorname{supp}\left(h_{0}^{\varepsilon}\right)\subset B(\operatorname{ supp}\left(h_{0}\right),C\varepsilon)\) where we define for simplicity \(B(K,\eta)\) as the \(\eta\)-neighborhood of some set \(K\). For each fixed \(j\), by taking \(\varepsilon>0\) sufficiently small, we can ensure that \(\operatorname{supp}\left(h_{0}^{\varepsilon}\right)\) has a unique connected component intersecting \(A_{j}(t=0)\). By continuity, there is a small time interval on which \(\operatorname{supp}\left(h^{\varepsilon}(t,\cdot)\right)\) still has this property for all sufficiently small \(\varepsilon>0\). Then, the quantities \(\{\theta_{j}^{\varepsilon}(t),I_{j}^{\varepsilon}(t)\}\) are well-defined by the following equations:
\[\theta_{j}^{\varepsilon}(t)\int_{A_{j}(t)}h^{\varepsilon}(t,\theta)d\theta= \int_{A_{j}(t)}\theta h^{\varepsilon}(t,\theta)d\theta,\]
\[I_{j}^{\varepsilon}(t)=\int_{A_{j}(t)}h^{\varepsilon}(t,\theta)d\theta.\]
Note that uniform Lipschitz continuity of \(H^{\varepsilon}\) implies that the length of \(\mathrm{supp}\,(h^{\varepsilon})\cap A_{j}\) is at most \(C\varepsilon\), with \(C\) depending only on the Lipschitz norm. This in particular guarantees that
\[\int_{A_{j}(t)}|\theta-\theta^{\varepsilon}_{j}(t)|h^{\varepsilon}(t,\theta)d \theta\leq C\varepsilon I^{\varepsilon}_{j}(t),\]
simply because \(\theta^{\varepsilon}_{j}(t)\in\mathrm{supp}\,(h^{\varepsilon}(t,\cdot))\cap A _{j}\). Now, differentiating in time the above relations (and using that \(h^{\varepsilon}\) is smooth and vanishes on \(\partial A_{j}\)),
\[\frac{d}{dt}I^{\varepsilon}_{j}(t)=\int_{A_{j}(t)}2(H^{\varepsilon})^{\prime }(t,\theta)h^{\varepsilon}(t,\theta)d\theta, \tag{3.6}\]
\[\theta^{\varepsilon}_{j}(t)\frac{d}{dt}I^{\varepsilon}_{j}(t)+I^{\varepsilon }_{j}(t)\frac{d}{dt}\theta^{\varepsilon}_{j}(t)=\int_{A_{j}(t)}2H^{ \varepsilon}(t,\theta)h^{\varepsilon}(t,\theta)d\theta+\int_{A_{j}(t)}2 \theta(H^{\varepsilon})^{\prime}(t,\theta)h^{\varepsilon}(t,\theta)d\theta. \tag{3.7}\]
We use (3.6) to rewrite (3.7) as follows:
\[\frac{d}{dt}\theta^{\varepsilon}_{j}(t) =2H^{\varepsilon}(t,\theta^{\varepsilon}_{j}(t))+\frac{2}{I^{ \varepsilon}_{j}(t)}\int_{A_{j}(t)}(H^{\varepsilon}(t,\theta)-H^{\varepsilon}( t,\theta^{\varepsilon}_{j}(t)))h^{\varepsilon}(t,\theta)d\theta\] \[\qquad+\frac{2}{I^{\varepsilon}_{j}(t)}\int_{A_{j}(t)}(\theta- \theta^{\varepsilon}_{j}(t))(H^{\varepsilon})^{\prime}(t,\theta)h^{ \varepsilon}(t,\theta)d\theta.\]
Estimating
\[\left|\int_{A_{j}(t)}(H^{\varepsilon}(t,\theta)-H^{\varepsilon}(t,\theta^{ \varepsilon}_{j}(t)))h^{\varepsilon}(t,\theta)d\theta\right|\leq\|(H^{ \varepsilon})^{\prime}\|_{L^{\infty}}\int_{A_{j}(t)}|\theta-\theta^{ \varepsilon}_{j}(t)|h^{\varepsilon}(t,\theta)d\theta\leq C\varepsilon I^{ \varepsilon}_{j}(t)\]
and similarly the other term, we obtain that
\[\left|\frac{d}{dt}\theta^{\varepsilon}_{j}(t)-2H^{\varepsilon}(t,\theta^{ \varepsilon}_{j}(t))\right|\leq C\varepsilon. \tag{3.8}\]
Let us now derive a similar estimate for (3.6). To this end, we first decompose \(H^{\varepsilon}=H^{\varepsilon}_{j}+H^{\varepsilon}_{\neq j}\); \(H^{\varepsilon}_{j}\) is simply defined as the solution to (1.4) with right hand side \(h^{\varepsilon}_{j}:=h^{\varepsilon}\mathbf{1}_{A_{j}}\). We write
\[\int_{A_{j}(t)}2(H^{\varepsilon})^{\prime}(t,\theta)h^{\varepsilon}(t,\theta )d\theta=\int_{A_{j}(t)}2(H^{\varepsilon}_{j})^{\prime}(t,\theta)h^{ \varepsilon}(t,\theta)d\theta+\int_{A_{j}(t)}2(H^{\varepsilon}_{\neq j})^{ \prime}(t,\theta)h^{\varepsilon}(t,\theta)d\theta.\]
Then note that \(H^{\varepsilon}_{\neq j}\) is smooth on \(A_{j}\) from the support property, and this gives by writing \((H^{\varepsilon}_{\neq j})^{\prime}(t,\theta)=(H^{\varepsilon}_{\neq j})^{ \prime}(t,\theta^{\varepsilon}_{j})+O(|\theta-\theta^{\varepsilon}_{j}|)\)
\[\int_{A_{j}(t)}2(H^{\varepsilon}_{\neq j})^{\prime}(t,\theta)h^{\varepsilon} (t,\theta)d\theta=2(H^{\varepsilon}_{\neq j})^{\prime}(t,\theta^{\varepsilon} _{j}(t))I^{\varepsilon}_{j}(t)+O(\varepsilon).\]
Next, we have that
\[\int_{A_{j}(t)}2(H^{\varepsilon}_{j})^{\prime}(t,\theta)h^{ \varepsilon}(t,\theta)d\theta=\int_{\mathbb{S}}2(\partial_{\theta}K*h^{ \varepsilon}_{j})(\theta)h^{\varepsilon}_{j}(\theta)d\theta \tag{3.9}\]
where \(K\) is the kernel for the elliptic problem (1.4) (see Appendix for its explicit form). We consider \(K\) as defined on \([-\pi,\pi]\), and then we can decompose \(K=K^{e}+K^{o}\) where \(K^{e}\) and \(K^{o}\) are even and odd parts of \(K\) around \(\theta=0\), respectively. Using the ODE satisfied by \(K\), we can derive the relation
\[(1+\beta^{2})(K^{o})^{\prime\prime}-4\beta(K^{e})^{\prime}+4K^{o}=0,\]
and since \((K^{e})^{\prime}\in L^{\infty}(\mathbb{S})\), we have that \(K^{o}\in W^{2,\infty}(\mathbb{S})\) (we only need it to be strictly better than Lipschitz). Alternatively, \(K^{o}\in W^{2,\infty}(\mathbb{S})\) can be checked using the explicit formula (A.3) given in the Appendix. Returning to (3.9) and observing that \((K^{o})^{\prime}\) and \((K^{e})^{\prime}\) are even and odd respectively,
\[\int_{\mathbb{S}}2(\partial_{\theta}K*h^{\varepsilon}_{j})(\theta)h^{\varepsilon}_{j}(\theta)d\theta =\iint_{\mathbb{S}\times\mathbb{S}}2(K^{o}+K^{e})^{\prime}(\theta-\theta^{\prime})h^{\varepsilon}_{j}(\theta)h^{\varepsilon}_{j}(\theta^{\prime})d\theta d\theta^{\prime}\] \[=\iint_{\mathbb{S}\times\mathbb{S}}2(K^{o})^{\prime}(\theta-\theta^{\prime})h^{\varepsilon}_{j}(\theta)h^{\varepsilon}_{j}(\theta^{\prime})d\theta d\theta^{\prime}\] \[=\int_{\mathbb{S}}2(\partial_{\theta}K^{o}*h^{\varepsilon}_{j})(\theta)h^{\varepsilon}_{j}(\theta)d\theta\]
which can be estimated as
\[=\int_{\mathbb{S}}2(\partial_{\theta}K^{o}*h^{\varepsilon}_{j})(\theta^{\varepsilon}_{j}(t))h^{\varepsilon}_{j}(\theta)d\theta+\int_{\mathbb{S}}2((\partial_{\theta}K^{o}*h^{\varepsilon}_{j})(\theta)-(\partial_{\theta}K^{o}*h^{\varepsilon}_{j})(\theta^{\varepsilon}_{j}(t)))h^{\varepsilon}_{j}(\theta)d\theta\] \[=2(\partial_{\theta}K^{o}*h^{\varepsilon}_{j})(\theta^{\varepsilon}_{j}(t))I^{\varepsilon}_{j}(t)+O(\varepsilon),\]

where we have used that \(h^{\varepsilon}_{j}\) is \(L^{1}\) uniformly in \(\varepsilon\) and \(\partial_{\theta}K^{o}\) is Lipschitz. This gives that

\[\left|\frac{d}{dt}I^{\varepsilon}_{j}(t)-2(\tilde{H}^{\varepsilon}_{j})^{\prime}(t,\theta^{\varepsilon}_{j}(t))I^{\varepsilon}_{j}(t)\right|\leq C\varepsilon, \tag{3.10}\]
where by definition,
\[(\tilde{H}^{\varepsilon}_{j})^{\prime}:=(H^{\varepsilon}_{\neq j})^{\prime}+\partial_{\theta}K^{o}*h^{\varepsilon}_{j},\]

which is Lipschitz continuous in \(A_{j}\). Note that under the assumption (which we can bootstrap upon) that \(h^{\varepsilon}\to h\) in the sense of distributions, we have \((\tilde{H}^{\varepsilon}_{j})^{\prime}(t,\theta^{\varepsilon}_{j}(t))\to H^{\prime}(t,\theta_{j}(t))\), where the latter is defined in (1.10).

Using this observation together with (3.8)-(3.10), and noting that \(I^{\varepsilon}_{j}(t=0)=I_{j,0}\), \(\theta^{\varepsilon}_{j}(t=0)=\theta_{j,0}\) for all \(\varepsilon\) sufficiently small (possibly depending on \(j\)), gives that for any fixed \(N\geq 1\) and some \(T>0\) small, we have convergence \(\{I^{\varepsilon}_{j}(t),\theta^{\varepsilon}_{j}(t)\}\to\{I_{j}(t),\theta_{j}(t)\}\) for \(t\in[0,T]\) and \(j\leq N\). This gives the desired convergence in the sense of distributions.
### Proof of Proposition 1.7
The goal is to show that a solution \(h\in C_{*}\left([0,T);\mathcal{M}\left(\mathbb{S}\right)\right)\) of (1.3)-(1.4) defines a weak solution to the 2d Euler equations through (1.2)-(1.6), in both vorticity and velocity form.
#### 3.4.1 In vorticity form
To do so we write the 2d Euler equations (1.1) in polar coordinates
\[\partial_{t}\omega+u^{r}\partial_{r}\omega+\frac{1}{r}u^{\theta} \partial_{\theta}\omega=0.\]
Thus a weak solution of the 2d Euler equation on an interval \([0,T)\subset\mathbb{R}\) solves for \(\phi\in C_{c}^{\infty}\left(I\times\mathbb{R}^{2}\right)\)
\[\int\limits_{(0,T)\times\mathbb{R}^{2}}\omega\partial_{t}\phi+ \omega\underbrace{\left(\partial_{r}u^{r}+\frac{1}{r}\partial_{\theta}u^{ \theta}\right)}_{=0}\phi+\omega u^{r}\partial_{r}\phi+\frac{1}{r}u^{\theta} \omega\partial_{\theta}\phi=-\int\limits_{\mathbb{R}^{2}}\omega_{0}\phi_{0}.\]
We now work with
\[\omega(t,r,\theta)=h(t,\theta-\beta\ln r)\in C_{*}\left([0,T],\mathcal{M}\right),\ u(t,r,\theta)=\begin{pmatrix}-r\partial_{\theta}H(t,\theta-\beta\ln(r))\\ 2rH(t,\theta-\beta\ln(r))-r\beta\partial_{\theta}H(t,\theta-\beta\ln(r))\end{pmatrix},\]
and \(H\) solves (1.4). The key observation is that for such \(\omega\) we have \(u\in C_{*}\left([0,T],C^{0}(\mathbb{R}^{2})\right)\); thus \(\omega u\) is well defined, the weak formulation of the Euler equations makes sense for such data, and by construction \((\omega,u)\) is a weak solution.
#### 3.4.2 In velocity form
We write the 2d Euler equations on the velocity in polar coordinates
\[\begin{cases}\partial_{t}u^{r}+u^{r}\partial_{r}u^{r}+\frac{1}{r}u^{\theta} \partial_{\theta}u^{r}=-\partial_{r}p\\ \partial_{t}u^{\theta}+u^{r}\partial_{r}u^{\theta}+\frac{1}{r}u^{\theta} \partial_{\theta}u^{\theta}=-\frac{1}{r}\partial_{\theta}p\end{cases}.\]
Thus a weak solution of the 2d Euler equation on an interval \([0,T)\subset\mathbb{R}\) solves for \((\phi^{r},\phi^{\theta})\in C^{\infty}_{c}\left(I\times\mathbb{R}^{2}\right)\)
\[\begin{cases}\int\limits_{(0,T)\times\mathbb{R}^{2}}u^{r}\partial_{t}\phi^{r}+ (u^{r})^{2}\partial_{r}\phi^{r}+\frac{1}{r}u^{\theta}u^{r}\partial_{\theta} \phi^{r}=-\int\limits_{\mathbb{R}^{2}}u_{0}^{r}\phi_{0}^{r}-\int\limits_{(0,T )\times\mathbb{R}^{2}}p\partial_{r}\phi^{r}\\ \int\limits_{(0,T)\times\mathbb{R}^{2}}u^{\theta}\partial_{t}\phi^{\theta}+u^{ r}u^{\theta}\partial_{r}\phi^{\theta}+\frac{1}{r}(u^{\theta})^{2}\partial_{ \theta}\phi^{\theta}=-\int\limits_{\mathbb{R}^{2}}u_{0}^{\theta}\phi_{0}^{ \theta}-\int\limits_{(0,T)\times\mathbb{R}^{2}}p\frac{1}{r}\partial_{\theta} \phi^{\theta}\end{cases},\]
where we again used the incompressibility condition \(\partial_{r}u^{r}+\frac{1}{r}\partial_{\theta}u^{\theta}=0\). The ansatz for the pressure is \(p=r^{2}P(t,\theta-\beta\ln r)\), where \(P\) solves
\[(1+\beta^{2})P^{\prime\prime}-4\beta P^{\prime}+4P=-\nabla\cdot(u\cdot\nabla u )=\left(\partial_{\theta}H\right)^{2}+\beta^{2}\left(\partial_{\theta}^{2}H \right)^{2}-4\beta\partial_{\theta}^{2}H\partial_{\theta}H.\]
This again gives the desired result.
## 4 Long time dynamics and singularity formation
### Convergence for bounded solutions
Proof of Theorem 1.8.: Recall that we are assuming \(\beta>0\). For \(h_{0}\in L^{\infty}\), we have that for all \(t>0\), the solution \(h(t,\cdot)\) satisfies
\[-\|h_{0}\|_{L^{\infty}}\leq h(t,\theta)\leq\|h_{0}\|_{L^{\infty}}.\]
In particular,
\[I(t):=\int h(t,\theta)d\theta\geq-2\pi\|h_{0}\|_{L^{\infty}}\]
and since the left hand side is strictly decreasing in time (unless \(h_{0}\) is a constant),
\[I(t)\longrightarrow\mathbf{I}_{+}\]
for some constant \(\mathbf{I}_{+}<I(0)\) as \(t\to\infty\). Then, integrating (2.5) in time,
\[8\beta\int_{0}^{\infty}\int(H^{\prime})^{2}d\theta dt=I(0)-\mathbf{I}_{+},\]
which gives \(H\in L^{2}([0,\infty);\dot{H}^{1}(\mathbb{S}))=:L^{2}_{t}\dot{H}^{1}_{\theta}\). Next, from the equation (3.2) for \(H\)
\[\|\partial_{t}H\|_{L^{2}} \leq 2\|HH^{\prime}\|_{L^{2}}+\|\mathbf{K}(8\beta-3(1+\beta^{2}) \partial_{\theta})\left[(H^{\prime})^{2}\right]\|_{L^{2}}\] \[\leq C\|H\|_{W^{1,\infty}}\|H^{\prime}\|_{L^{2}}\leq C\|h\|_{L^{ \infty}}\|H^{\prime}\|_{L^{2}}\leq C\|h_{0}\|_{L^{\infty}}\|H^{\prime}\|_{L^{ 2}}.\]
This gives \(\partial_{t}H\in L^{2}_{t}H^{1}_{\theta}\) as well. We have that \(\int Hd\theta=\frac{1}{4}\int hd\theta\to\frac{1}{4}\mathbf{I}_{+}\). Applying the Aubin-Lions lemma to the sequence of functions \(\{H(\cdot+t_{n},\cdot)\}\) defined on \([0,\infty)\times\mathbb{S}\) (here, \(t_{n}\geq 0\) is an arbitrary increasing sequence), we obtain a convergent subsequence in \(L^{2}_{t}H^{1}_{\theta}\). The limit must be equal to the constant \(\frac{1}{4}\mathbf{I}_{+}\), and therefore is independent of the choice of a subsequence; \(H(t,\cdot)\to\frac{1}{4}\mathbf{I}_{+}\) in \(L^{2}(\mathbb{S})\). Since \(h\in L^{\infty}(\mathbb{S})\) uniformly in time, the convergence holds in \(H^{-a}(\mathbb{S})\) for any \(a>0\) in terms of \(h\).
### Trichotomy for \(L^{p}\) data
Proof of Theorem 1.10.: The trichotomy of behavior follows from the analysis of \(I(t)\) when \(h_{0}\in L^{p}\). We work again with \(\beta>0\) and \(h_{0}\) not identically constant; then \(I(t)\) is a strictly decreasing function of time and one of the following three scenarios must occur. There exists \(T_{*}\in(0,+\infty]\) such that either
* \(T_{*}=+\infty\) and there exists \(\mathbf{I}_{+}\in\mathbb{R}\) such that \(I(t)\underset{t\to+\infty}{\rightarrow}\mathbf{I}_{+}\),
* \(T_{*}<+\infty\),
* \(T_{*}=+\infty\) and \(I(t)\underset{t\to+\infty}{\rightarrow}-\infty\).
The only point requiring more analysis is the first one. Indeed, as in the case \(h_{0}\in L^{\infty}\), we get that \(H\in L^{2}_{t}\dot{H}^{1}_{\theta}\). Now \(I(t)\) being bounded, combined with \(H\in L^{2}_{t}\dot{H}^{1}_{\theta}\), implies that \(h\in L^{\infty}_{t}L^{1}_{\theta}\). Again from (3.2)
\[\left\|\partial_{t}H\right\|_{L^{2}}\leq C\left\|H\right\|_{W^{1,\infty}} \left\|\partial_{\theta}H\right\|_{L^{2}}\leq C\left\|h\right\|_{L^{\infty}_{ (0,+\infty)}L^{1}_{\theta}}\left\|\partial_{\theta}H\right\|_{L^{2}},\]
and the proof follows as in the previous paragraph.
### Singularity formation for Diracs
Proof of Theorem 1.11.: We assume \(\beta>0\). To begin with, we note that for each fixed \(N\), if \(h=\sum_{i=1}^{N}I_{i}\delta(\theta-\theta_{i})\) then \(|\sum_{i=1}^{N}I_{i}|\) is controlled by \(\|H^{\prime}\|_{L^{2}}\).1 This is clear in the case \(N=1\). To see this for \(N=2\), note that \(\|H^{\prime}\|_{L^{2}}^{2}\) is a quadratic form in \(I_{1}\) and \(I_{2}\) with coefficients depending only on \(\theta_{1}-\theta_{2}\). Then, it suffices to observe that this quadratic form is strictly positive when \(I_{1}+I_{2}\neq 0\) and converges to a strictly positive quadratic form of \((I_{1}+I_{2})\) as \(\theta_{1}-\theta_{2}\to 0\). The general case follows from a continuity argument and induction in \(N\). That is,
Footnote 1: On the other hand, the analogous bound does not hold for the norm \(\sum_{i=1}^{N}|I_{i}|\).
\[\left\|H^{\prime}\right\|_{L^{2}}\geq c_{N}|\sum_{i=1}^{N}I_{i}|\]
for some \(c_{N}>0\) depending on \(N\). Applying (2.6), we see that
\[\frac{d}{dt}(\sum_{i=1}^{N}I_{i})\leq-8\beta c_{N}^{2}(\sum_{i=1}^{N}I_{i})^{ 2}. \tag{4.1}\]
In particular, if \(\sum_{i=1}^{N}I_{i}(t_{0})<0\) for some \(t_{0}\), then \(\sum_{i=1}^{N}I_{i}(t)\to-\infty\) as \(t\) approaches some \(T>t_{0}\). Assume now instead that there is no singularity formation. In particular, \(\sum_{i=1}^{N}I_{i}(t)\geq 0\) is required for all \(t\geq 0\). We can exclude the case \(\sum_{i=1}^{N}I_{i}(t)=0\) since the quantity \(\sum_{i=1}^{N}I_{i}(t)\) is _strictly_ decreasing (see (2.6)), unless \(h\) is trivial. Then, we have that \(\sum_{i=1}^{N}I_{i}(t)>0\) for all \(t\geq 0\) and (4.1) tells us that \(\sum_{i=1}^{N}I_{i}(t)\to 0\) as \(t\to\infty\).
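For the reader's convenience, the finite-time blow up claimed here follows from a one-line integration of the Riccati-type inequality (4.1); a worked version of this step (with the shorthand \(S(t)=\sum_{i=1}^{N}I_{i}(t)\), which is ours) reads:

```latex
% Set S(t) = \sum_{i=1}^N I_i(t) and k = 8\beta c_N^2 > 0.
% On any interval where S < 0, inequality (4.1) gives S' \le -k S^2, hence
\frac{d}{dt}\left(-\frac{1}{S(t)}\right)=\frac{S'(t)}{S(t)^{2}}\le-k,
\qquad\text{so}\qquad
\frac{1}{S(t)}\ge\frac{1}{S(t_{0})}+k\,(t-t_{0}).
% The right-hand side is negative at t = t_0 and vanishes at
% t_0 + 1/(k|S(t_0)|); since S remains negative, |1/S(t)| tends to 0 and
% therefore S(t) -> -infinity no later than T = t_0 + 1/(8*beta*c_N^2*|S(t_0)|).
```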
### Case study of \(m\) symmetric Dirac deltas
In this section, we revisit the case of \(m\)-fold symmetric Dirac deltas where \(m\geq 1\) is an integer. Namely, we consider the dynamics of the solution of the form
\[h(t,\cdot)=I_{0}(t)\sum_{j=0}^{m-1}\delta_{\theta_{j}(t)} \tag{4.2}\]
where \(\theta_{j}(t)=\theta_{0}(t)+2\pi j/m\) for \(j=1,\cdots,m-1\). The \(m\)-fold symmetry is preserved in time, and the solution is characterized by \((I_{0},\theta_{0})\). It is then natural to introduce the \(m\)-fold symmetric kernel \(K^{m}\) by
\[K^{m}(\theta)=\sum_{j=0}^{m-1}K(\theta+2\pi j/m).\]
We give an explicit form of this symmetrized kernel in the Appendix. Furthermore, we can simply take the spatial domain to be \(\mathbb{S}^{m}\) which is \((0,2\pi/m)\) with endpoints identified with each other. The system of equations for \((I_{0},\theta_{0})\) reads
\[\frac{d}{dt}\theta_{0}(t)=2H(t,\theta_{0}(t))=2K^{m}(0)I_{0}(t) \tag{4.3}\]
\[\frac{d}{dt}I_{0}(t)=2(K^{m})^{\prime}(0)(I_{0}(t))^{2}. \tag{4.4}\]
We see that the equation for \(I_{0}\) does not involve the other variable \(\theta_{0}\), and the solution is simply
\[I_{0}(t)=\frac{I_{0}(0)}{1-2(K^{m})^{\prime}(0)I_{0}(0)t}.\]
Depending on the sign of \(2(K^{m})^{\prime}(0)I_{0}(0)\), we have either finite-time blow up or decay of rate \(1/t\) as \(t\) becomes large. The constant \((K^{m})^{\prime}(0)\) can be explicitly determined as a function of \(m,\beta\) and is given in Remark A.2. Assume that \((K^{m})^{\prime}(0)I_{0}(0)>0\), so that the Dirac solution blows up at some \(T^{*}>0\). It is an interesting exercise to see what happens to the sequence of patch regularizations of (4.2):
\[h_{0}^{\varepsilon}:=\frac{I_{0}(0)}{2\varepsilon}\sum_{j=0}^{m-1}\mathbf{1}_ {[\theta_{j}(0)-\varepsilon,\theta_{j}(0)+\varepsilon]}. \tag{4.5}\]
While there is a global solution associated with \(h_{0}^{\varepsilon}\), one can show that as \(t\to T^{*}\), the support of \(h^{\varepsilon}(t)\) occupies almost all of the spatial domain, so that in particular \(\|h^{\varepsilon}(T^{*})\|_{L^{1}}=O(\varepsilon^{-1})\), which blows up as \(\varepsilon\to 0^{+}\).
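As a quick numerical illustration of the dichotomy above (a minimal sketch; the function name and the sample values of \(\beta,m,I_{0}(0)\) are ours), one can evaluate \((K^{m})^{\prime}(0)\) from the formula recorded in Remark A.2 and read off the blow-up time \(T^{*}=\left(2(K^{m})^{\prime}(0)I_{0}(0)\right)^{-1}\) predicted by the explicit solution:

```python
import numpy as np

def km_prime_at_zero(beta: float, m: int) -> float:
    """(K^m_beta)'(0) via the cotangent formula of Remark A.2."""
    z = 2 * np.pi * (1 + 1j * beta) / (m * (1 + beta**2))
    return ((beta - 1j) * np.cos(z) / np.sin(z)).real / (2 * (1 + beta**2))

beta, m, I0 = 1.0, 3, 1.0
kp = km_prime_at_zero(beta, m)
if kp * I0 > 0:
    print(f"finite-time blow up at T* = {1 / (2 * kp * I0):.4f}")
else:
    print("no blow up: I_0(t) decays like 1/t")
```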
### Case study of two non-symmetric Diracs
In this section, we study the evolution of two Dirac deltas in the case \(m=1\). For simplicity, we shall assume that they evolve in a self-similar fashion: their distance in \(\mathbb{S}\) is fixed while the amplitudes are proportional to \(1/t\). To this end, we recall the system (1.8) for two Diracs:
\[I_{1}^{\prime} =2K^{\prime}(0)I_{1}^{2}+2K^{\prime}(\theta_{1}-\theta_{2})I_{1}I _{2},\] \[I_{2}^{\prime} =2K^{\prime}(0)I_{2}^{2}+2K^{\prime}(\theta_{2}-\theta_{1})I_{1}I _{2},\] \[\theta_{1}^{\prime} =2I_{1}K(0)+2I_{2}K(\theta_{1}-\theta_{2}),\] \[\theta_{2}^{\prime} =2I_{2}K(0)+2I_{1}K(\theta_{2}-\theta_{1}),\]
and assume a solution of the form
\[I_{1}(t)=A_{1}t^{-1},\quad I_{2}(t)=A_{2}t^{-1},\quad\theta_{1}(t)-\theta_{2} (t)=d\]
where \(A_{1},A_{2},d\) are constants. We may assume further that \(0<d\leq\pi\). Under these assumptions, the ODE reduces to the following system of algebraic equations
\[-1 =2A_{1}K^{\prime}(0)+2A_{2}K^{\prime}(d),\] \[-1 =2A_{2}K^{\prime}(0)+2A_{1}K^{\prime}(-d), \tag{4.6}\] \[0 =(A_{1}-A_{2})K(0)+A_{2}K(d)-A_{1}K(-d).\]
Assuming that \((K^{\prime}(0))^{2}-K^{\prime}(d)K^{\prime}(-d)\neq 0\), \(A_{1}\) and \(A_{2}\) are uniquely determined from the first two equations in terms of \(d\), and we are left with the single equation
\[K(0)(K^{\prime}(-d)-K^{\prime}(d))+K(d)(K^{\prime}(0)-K^{\prime}(-d))+K(-d)(K^{ \prime}(d)-K^{\prime}(0))=0. \tag{4.7}\]
Even when \((K^{\prime}(0))^{2}-K^{\prime}(d)K^{\prime}(-d)=0\), a solution of (4.7) gives (infinitely many) solutions to (4.6). We clearly see that \(d=\pi\) solves (4.7), which simply corresponds to the symmetric self-similar blow up of two Dirac deltas. We now consider the function
\[F(\beta,d)=K(0)(K^{\prime}(-d)-K^{\prime}(d))+K(d)(K^{\prime}(0)-K^{\prime}(-d ))+K(-d)(K^{\prime}(d)-K^{\prime}(0)),\]
then computing
\[K_{\beta}(\theta)=-\frac{\sin(2\theta)}{8\pi\beta}+O(\beta)\text{ and }K^{ \prime}_{\beta}(\theta)=-\frac{\cos(2\theta)}{4\pi\beta}+O(\beta)\]
we get
\[F(\beta,d)\underset{\beta\to 0}{\sim}\frac{1}{16\pi^{2}\beta^{2}}\sin(2d)(1- \cos(2d)).\]
Thus, considering the function \(\tilde{F}=16\pi^{2}\beta^{2}F\), we observe that \(\tilde{F}\) is \(C^{1}\) in a neighborhood of \((0,\frac{\pi}{2})\), with \(\tilde{F}(0,\frac{\pi}{2})=0\) and \(\partial_{d}\tilde{F}(0,\frac{\pi}{2})=-4\); an application of the implicit function theorem then immediately yields the existence and uniqueness of a continuum of (non-symmetric) solutions of the form \((\beta,d(\beta))\) in a neighborhood of \((0,\frac{\pi}{2})\). This is exactly the content of [5, Theorem 1] in the case \(M=2\).
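The zero \(d(\beta)\) can also be located numerically from the explicit kernel (A.3) (a minimal sketch; the helper names are ours, and we use the symmetrized value \(K^{\prime}(0)\) of Remark A.2 for the self-interaction terms, since \(K^{\prime}\) has a jump at \(\theta=0\)):

```python
import numpy as np
from scipy.optimize import brentq

def _core(beta, theta):
    # exp(c*(theta - pi)) / sin(2*pi*(1 + i*beta)/(1 + beta^2)),
    # with c = 2*(beta - i)/(1 + beta^2), as in (A.3)
    c = 2 * (beta - 1j) / (1 + beta**2)
    s = np.sin(2 * np.pi * (1 + 1j * beta) / (1 + beta**2))
    return c, np.exp(c * (theta - np.pi)) / s

def K(beta, theta):   # kernel (A.3), argument reduced mod 2*pi
    _, w = _core(beta, np.mod(theta, 2 * np.pi))
    return 0.25 * w.real

def Kp(beta, theta):  # K' away from theta = 0
    c, w = _core(beta, np.mod(theta, 2 * np.pi))
    return 0.25 * (c * w).real

def Kp0(beta):        # symmetrized K'(0), Remark A.2 with m = 1
    z = 2 * np.pi * (1 + 1j * beta) / (1 + beta**2)
    return ((beta - 1j) * np.cos(z) / np.sin(z)).real / (2 * (1 + beta**2))

def F(beta, d):       # left-hand side of (4.7)
    return (K(beta, 0) * (Kp(beta, -d) - Kp(beta, d))
            + K(beta, d) * (Kp0(beta) - Kp(beta, -d))
            + K(beta, -d) * (Kp(beta, d) - Kp0(beta)))

for beta in (0.05, 0.1, 0.2):
    d = brentq(lambda x: F(beta, x), 0.05, np.pi - 0.05)
    print(f"beta = {beta:4.2f}  ->  d(beta) = {d:.4f}   (pi/2 = {np.pi / 2:.4f})")
```

The bracket \([0.05,\pi-0.05]\) is chosen using the small-\(\beta\) asymptotics above, which guarantee a sign change of \(F\) there for \(\beta\) small.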
_Remark 4.1_.: The case of 2 Diracs is not special: an analogous reasoning for \(M\) Diracs yields a system of \(1+(M-1)\) equations analogous to (4.7). One then observes that the problem reduces to finding the zeros of another function \(F\) of \(1+(M-1)\) variables (\(\beta\) and the \(M-1\) differences of angles) with values in \(\mathbb{R}^{M-1}\), to which the implicit function theorem can be applied near the point \(\beta=0\) and \((\frac{\pi}{M},\cdots,\frac{(M-1)}{M}\pi)\), provided the differential in the \(M-1\) angle variables is not singular. This is the content of Theorem 1 of [5], where the cases \(M\in\{2,3,5,7,9\}\) are covered.
We return to the case \(M=2\) and study the behavior of \(F\) in the limit \(\beta\to\infty\). In this case, it is not difficult to verify that
\[K_{\beta}(\theta)=\frac{1}{8\pi}+\frac{(2\pi-\theta)\theta}{4\pi\beta^{2}}+O( \beta^{-3}),\qquad K^{\prime}_{\beta}(\theta)=\frac{\pi-\theta}{2\pi\beta^{2 }}+O(\beta^{-3}).\]
This gives that, as \(\beta\to\infty\),
\[F(\beta,d)\sim\frac{1}{4\pi^{2}\beta^{4}}(2\pi-d)(\pi-d)d+O(\beta^{-6}).\]
This limit can be shown to be uniform in \(C^{1}\) on \(0\leq d\leq\pi\). From this we deduce that for \(0<d<\pi\), there are no zeros of \(F(\beta,d)\) as long as \(\beta\) is taken to be sufficiently large. Combining this computation with the previous one, we can arrive at the following bifurcation result in the case of two Dirac deltas.
**Proposition 4.2**.: _There exist some \(\beta_{0},\beta_{1}>0\) such that the following holds:_
* _for all_ \(0<\beta<\beta_{0}\)_, there is only one zero of_ \(F(\beta,d)=0\) _in_ \(0<d<\pi\)_. This unique zero converges to_ \(\pi/2\) _as_ \(\beta\to 0\)_, and_
* _for all_ \(\beta>\beta_{1}\)_, there are no zeroes of_ \(F(\beta,d)=0\) _in_ \(0<d<\pi\)_._
_In particular, there exists at least one bifurcation point of non-symmetric solutions from the symmetric one._
### Some open questions
The following observations would be interesting to investigate further, as they could shed light on new phenomena emerging in the long time dynamics of solutions of the 2d Euler equations.
1. Based on numerical simulations it seems that \(\beta_{0}=\beta_{1}=\beta_{b}\), so that the two-Dirac system exhibits a unique bifurcation.
2. For \(\beta>\beta_{b}\), the symmetric configuration seems to be stable and is the unique global attractor of the blow up dynamics exhibited in Theorem 1.11.
3. For \(\beta<\beta_{b}\), bifurcation occurs: the new configuration becomes the stable and unique global attractor of the blow up, while the symmetric configuration becomes unstable.
4. It would be interesting to investigate this bifurcation phenomenon further for three or more Diracs.
5. One can investigate the role of \(m\)-fold symmetry in the bifurcation of non-symmetric solutions.
## Appendix A Derivation of the kernel
In this section, we give an explicit form of the kernel.
**Lemma A.1**.: _Let \(h,H\) solve the elliptic equation (1.4) for some \(\beta\neq 0\) and suppose moreover they are \(m\)-fold symmetric in \(\theta\) for \(m\geq 1\). Then for \(\theta\in[0,\frac{2\pi}{m})\) we have_
\[H(\theta)=\int_{0}^{2\pi}h(\theta^{\prime})K_{\beta}^{m}(\theta-\theta^{\prime})d\theta^{\prime},\]
_with_
\[K_{\beta}^{m}(\theta)=\frac{1}{4}\mathrm{Re}\left[\frac{e^{\frac{2(\beta-i)} {m(1+\beta^{2})}(m\theta-\pi)}}{\sin\left(\frac{2\pi(1+i\beta)}{m(1+\beta^{2} )}\right)}\right].\]
_Remark A.2_.: We record the values
\[K_{\beta}^{m}(0)=\frac{1}{4}\mathrm{Re}\left[\cot\left(\frac{2\pi(1+i\beta)}{ m(1+\beta^{2})}\right)\right],\]
and
\[K_{\beta}^{m\,\prime}(0)=\frac{\mathrm{Re}\left[(\beta-i)\cot\left(\frac{2 \pi(1+i\beta)}{m(1+\beta^{2})}\right)\right]}{2(1+\beta^{2})}=\frac{4\beta K_ {\beta}^{m}(0)+\mathrm{Im}\left[\cot\left(\frac{2\pi(1+i\beta)}{m(1+\beta^{2} )}\right)\right]}{2(1+\beta^{2})}.\]
Proof.: Using the Fourier series
\[h(\theta)=\sum_{n\in\mathbb{Z}}\hat{h}_{n}e^{in\theta},\qquad H(\theta)=\sum_ {n\in\mathbb{Z}}\hat{H}_{n}e^{in\theta},\]
we obtain the relation
\[\hat{H}_{n}=\frac{1}{4-4\beta in-(1+\beta^{2})n^{2}}\hat{h}_{n}=\left[\frac{ 1}{\frac{2i}{i-\beta}-n}+\frac{1}{\frac{2i}{i+\beta}+n}\right]\frac{\hat{h}_ {n}}{4}.\] (A.1)
Thus
\[K_{\beta}(\theta) =\frac{1}{8\pi}\sum_{n\in\mathbb{Z}}\left[\frac{1}{\frac{2i}{i-\beta }-n}+\frac{1}{\frac{2i}{i+\beta}+n}\right]e^{in\theta}=\frac{1}{8\pi}\sum_{n \in\mathbb{Z}}\left[\frac{1}{-\frac{2i(i+\beta)}{1+\beta^{2}}-n}+\frac{1}{\frac {2i(-i+\beta)}{1+\beta^{2}}+n}\right]e^{in\theta}\] \[=\frac{1}{8\pi}\sum_{n\in\mathbb{Z}}\left[\frac{1}{\frac{2(1+i \beta)}{1+\beta^{2}}+n}-\frac{1}{\frac{2(-1+i\beta)}{1+\beta^{2}}+n}\right]e^{ in\theta}.\] (A.2)
We use the following standard Fourier series computation found for example in [24].
**Lemma A.3**.: _For \(\alpha\in\mathbb{C}\setminus\mathbb{N}\) and \(\theta\in\mathbb{R}\)_
\[\sum_{n\in\mathbb{N}}\frac{e^{in\theta}}{n+\alpha}=\frac{\pi}{\sin(\pi\alpha)} e^{i(\pi-\theta)\alpha}.\]
Applying the previous lemma to (A.2) we get for \(\beta\neq 0\)
\[K_{\beta}(\theta)=\frac{1}{8}\left[\frac{e^{i\frac{2(1+i\beta)}{1+\beta^{2}}(\pi-\theta)}}{\sin\left(\frac{2\pi(1+i\beta)}{1+\beta^{2}}\right)}-\frac{e^{i\frac{2(-1+i\beta)}{1+\beta^{2}}(\pi-\theta)}}{\sin\left(\frac{2\pi(-1+i\beta)}{1+\beta^{2}}\right)}\right]=\frac{1}{4}\mathrm{Re}\left[\frac{e^{\frac{2(\beta-i)}{1+\beta^{2}}(\theta-\pi)}}{\sin\left(\frac{2\pi(1+i\beta)}{1+\beta^{2}}\right)}\right].\] (A.3)
Noting that the \(m\)-fold symmetric kernel for \(m\geq 1\) is given by
\[K_{\beta}^{m}(\theta) =\frac{1}{8\pi}\sum_{n\in\mathbb{Z}}\left[\frac{1}{\frac{2i}{i- \beta}-mn}+\frac{1}{\frac{2i}{i+\beta}+mn}\right]e^{imn\theta}=\frac{1}{8\pi} \sum_{n\in\mathbb{Z}}\left[\frac{1}{\frac{2(1+i\beta)}{1+\beta^{2}}+mn}- \frac{1}{\frac{2(-1+i\beta)}{1+\beta^{2}}+mn}\right]e^{imn\theta}\] \[=\frac{1}{8\pi m}\sum_{n\in\mathbb{Z}}\left[\frac{1}{\frac{2(1+i \beta)}{m(1+\beta^{2})}+n}-\frac{1}{\frac{2(-1+i\beta)}{m(1+\beta^{2})}+n} \right]e^{in\times m\theta},\]
Thus making the change of variables \(\theta\leftrightarrow m\theta\) and \((1+\beta^{2})\leftrightarrow m(1+\beta^{2})\) we get for \(\theta\in(0,\frac{2\pi}{m})\) the desired result.
**Example A.4**.: Let us take \(\beta=1\) and \(m=1\). Then, on the interval \((0,2\pi)\), we explicitly have
\[K_{1}^{1}(\theta)=\frac{1}{2}\frac{\sin(\theta)e^{\theta}}{1-e^{2\pi}},\quad( K_{1}^{1})^{\prime}(\theta)=\frac{1}{2}\frac{(\sin(\theta)+\cos(\theta))e^{ \theta}}{1-e^{2\pi}}.\]
The plot is given in Figure 2.
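The closed form of Example A.4 provides a convenient numerical check of (A.3) (a minimal sketch; variable names are ours):

```python
import numpy as np

theta = np.linspace(0.1, 2 * np.pi - 0.1, 7)

# kernel (A.3) at beta = 1: c = 2*(beta - i)/(1 + beta^2) = 1 - i
c = 1 - 1j
s = np.sin(np.pi * (1 + 1j))
K_general = 0.25 * (np.exp(c * (theta - np.pi)) / s).real

# closed form of Example A.4
K_explicit = 0.5 * np.sin(theta) * np.exp(theta) / (1 - np.exp(2 * np.pi))

print(np.max(np.abs(K_general - K_explicit)))  # agrees to machine precision
```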
## Appendix B Self-similar logarithmic vortex sheets
In this section, we show how our formulation of logarithmic vortex sheets corresponds to the traditional one which goes back to the work of Prandtl. We follow the notation of [4, 5]: in the case of one branch, they write
\[Z(t,\theta) =t^{\mu}\exp(i\theta+(\theta-\theta_{0})/\beta),\] \[\Gamma(t,\theta) =gt^{2\mu-1}\exp(2(\theta-\theta_{0})/\beta)\]
where \(Z\) and \(\Gamma\) correspond to the location and circulation of the spiral in \(\mathbb{R}^{2}\) parametrized by \(\theta\in\mathbb{R}\), respectively. Here \(\theta_{0},g,\mu\) are constants. The vorticity is then given by the sheet supported on the set \(\Sigma(t)=\{Z(t,\theta):\theta\in\mathbb{R}\}\), characterized by local circulation
\[R^{-2}\iint_{|x|\leq R}\omega(t,x)dx=gt^{-1}.\]
Taking \(t=1\) to be the initial time for simplicity, the above ansatz corresponds to taking our \(h\) function to be
\[h(t=1,\theta)=2g\delta(\theta-\theta_{0}).\]
(Here, the factor 2 comes from comparing the circulation formula with (1.11).) Using our formulation, for the Dirac delta solution corresponding to \(h(t=1,\cdot)\) to be self-similar, we need to have
\[h(t,\theta)=\frac{2g}{t}\delta(\theta-\Theta(t,\theta_{0})),\quad\Theta(t, \theta_{0})=\theta_{0}-\beta\mu\ln(t)\]
and then we obtain two consistency equations by comparing these with the ODE system (1.8) at \(t=1\), which are nothing but exactly the ones given in [4, Corollary 1.3]. One may similarly consider the case of \(m\) branch spiral sheets as well.
|
2301.10446 | Scattering in algebraic approach to quantum theory. Jordan algebras | Using geometric approach we formulate quantum theory in terms of Jordan
algebras. We analyze the notion of (quasi)particle (=elementary excitation of
translation-invariant stationary state) and the scattering of (quasi)particles
in this framework. | Albert Schwarz | 2023-01-25T07:45:59Z | http://arxiv.org/abs/2301.10446v1 | # Scattering in algebraic approach to quantum theory. Jordan algebras
###### Abstract
Using geometric approach we formulate quantum theory in terms of Jordan algebras. We analyze the notion of (quasi)particle (=elementary excitation of translation-invariant stationary state) and the scattering of (quasi)particles in this framework.
## 1 Introduction
In the algebraic approach physical observables correspond to self-adjoint elements of a \(*\)-algebra \(\mathcal{A}.\) The vector space \(\mathcal{B}\) of self-adjoint elements of \(\mathcal{A}\) is not closed with respect to the operation of multiplication, but it is closed with respect to the operation \(a\circ b=\frac{1}{2}(ab+ba).\) It was suggested long ago [1] that a more natural algebraic approach should be based on axiomatization of the operation \(a\circ b.\) This idea led to the notion of a Jordan algebra, defined as a commutative algebra over \(\mathbb{R}\) with multiplication \(a\circ b\) obeying

\[(x\circ y)\circ(x\circ x)=x\circ(y\circ(x\circ x)). \tag{1}\]

(Denoting by \(R_{a}\) the operator of multiplication by \(a\), we can express this identity by saying that the operators \(R_{a}\) and \(R_{a\circ a}\) commute.)
The space \(\mathcal{B}\) is closed also with respect to the linear operator \(Q_{a}\) transforming \(x\in\mathcal{B}\) into \(axa\) (here \(a\in\mathcal{B}\)). Another approach to Jordan algebras is based on axiomatization of this operator. The operator \(Q_{a}\) is quadratic with respect to \(a,\) hence it can be extended to operator \(Q_{\tilde{a},a}\) that is symmetric and bilinear with respect
to \(\tilde{a},a.\) Imposing some conditions on \(Q_{a}\) we obtain the notion of quadratic Jordan algebra. Starting with original definition of Jordan algebra we obtain a quadratic Jordan algebra taking \(Q_{a}=2R(a)^{2}-R(a\circ a).\) Conversely starting with quadratic Jordan algebra \({\cal B}\) we obtain a family of products obeying (1) by the formula
\[a\circ_{x}b=Q_{a,b}(x).\]
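For the matrix Jordan algebra (symmetric matrices with \(a\circ b=\frac{1}{2}(ab+ba)\)) the relation \(Q_{a}=2R(a)^{2}-R(a\circ a)\) is easy to check numerically (a minimal sketch; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
sym = lambda m: (m + m.T) / 2
a = sym(rng.standard_normal((4, 4)))
x = sym(rng.standard_normal((4, 4)))

circ = lambda u, v: (u @ v + v @ u) / 2                 # Jordan product
q_a = a @ x @ a                                         # Q_a(x) = a x a
via_r = 2 * circ(a, circ(a, x)) - circ(circ(a, a), x)   # (2 R_a^2 - R_{a o a})(x)

print(np.allclose(q_a, via_r))                          # True
```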
In what follows we work with a unital topological Jordan algebra \({\cal B}\) specified by a product \(a\circ b\). It seems, however, that quadratic Jordan algebras are more convenient to relate Jordan algebras to the geometric approach [4]-[7]. This relation is based on the remark that one can define a cone \({\cal B}_{+}\) in \({\cal B}\) as the smallest convex closed set that is invariant with respect to the operators \(Q_{a}\) and contains the unit element. One can consider also the dual cone \({\cal B}_{+}^{\vee}\) consisting of linear functionals on \({\cal B}\) that are non-negative on \({\cal B}_{+}\). (See Section 2 for more details.)
In the case when \({\cal B}\) consists of the self-adjoint elements of a \(C^{*}\)-algebra \({\cal A}\), elements of \({\cal B}_{+}^{\vee}\) can be identified with positive linear functionals on \({\cal A}\) (states). This means that applying the geometric approach to Jordan algebras we generalize the algebraic approach based on \(*\)-algebras. Notice that Jordan algebras can be regarded as the natural framework of the algebraic approach. This statement is prompted by the following theorem: the cones of states of two \(C^{*}\)-algebras are isomorphic iff the corresponding Jordan algebras are isomorphic (Alfsen-Shultz).
One says that \({\cal B}\) is a JB-algebra if it is equipped with Banach norm obeying
\[||x\circ y||\leq||x||\cdot||y||,||x^{2}||=||x||^{2},||x^{2}||\leq||x^{2}+y^{2 }||.\]
For such an algebra the cone \({\cal B}_{+}\) consists of squares. ( For any Jordan algebra an element \(a\circ a=Q_{a}(1)\) belongs to the cone; for JB-algebras all elements of the cone have this form.) It follows that for JB-algebras the cones are homogeneous: automorphism groups of cones act transitively on the interior of the cone.
It is natural to use homogeneous cones in geometric approach, hence it seems that JB-algebras can lead to interesting models.
The appearance of cones in the theory of Jordan algebras allows us to apply the general constructions of the geometric approach to these algebras. In Section 3 we consider (quasi)particles in the framework of Jordan algebras. In Sections 4 and 5 we consider scattering of (quasi)particles. In Section 6 we define generalized Green functions and show that the (inclusive) scattering matrix can be expressed in terms of these functions. Section 7 is devoted to some generalizations of Jordan algebras and of the above results.
We do not discuss here numerous papers where the theory of Jordan algebras is related to physics (see, in particular, [9]-[23]). A short review of the theory of Jordan
algebras and Jordan pairs as well as a review of various relations between Jordan algebras and physics is given in a companion paper [24].
## 2 Jordan algebras
Let us consider a topological Jordan algebra over \(\mathbb{R}\) denoted \(\mathcal{B}.\) (Recall that a Jordan algebra is defined as a commutative algebra with multiplication obeying the identity \((x\circ y)\circ(x\circ x)=x\circ(y\circ(x\circ x)).\) In what follows we consider unital Jordan algebras. If \(\mathcal{B}\) is a complete topological vector space and the multiplication is continuous we say that \(\mathcal{B}\) is a topological Jordan algebra.) The most important class of topological Jordan algebras consists of Jordan Banach algebras (\(JB\)-algebras). The Banach norm in a JB-algebra should obey
\[||x\circ y||\leq||x||\cdot||y||,||x^{2}||=||x||^{2},||x^{2}||\leq||x^{2}+y^{2} ||.\]
(See [3] for a review of the theory of operator Jordan algebras.)
In the finite-dimensional case the class of JB-algebras coincides with the class of Euclidean Jordan algebras classified in the famous paper by Jordan, von Neumann and Wigner [2]. The most natural simple finite-dimensional JB-algebras \(\mathfrak{h}_{n}(\mathbb{R}),\mathfrak{h}_{n}(\mathbb{C}),\mathfrak{h}_{n}(\mathbb{H})\) consist of Hermitian matrices with real, complex or quaternion entries. One more series of simple finite-dimensional JB-algebras is the series of spin factors (or Jordan algebras of Clifford type). These algebras are generated by elements \(1,e_{1},..,e_{n}\) with relations \(e_{i}\circ e_{i}=1,e_{i}\circ e_{j}=0\) for \(i\neq j\). The last simple finite-dimensional JB-algebra is the Albert algebra \(\mathfrak{h}_{3}(\mathbb{O})\), which can be realized as an algebra of \(3\times 3\) Hermitian matrices with octonion entries. This 27-dimensional algebra is exceptional (it cannot be embedded into a matrix Jordan algebra with the operation \(a\circ b=\frac{1}{2}(ab+ba)\)).
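A minimal sketch of the spin-factor product in the standard realization \(\mathbb{R}\oplus\mathbb{R}^{n}\) (function names are ours), which reproduces the defining relations \(e_{i}\circ e_{i}=1\) and \(e_{i}\circ e_{j}=0\) for \(i\neq j\):

```python
import numpy as np

def spin_product(x, y):
    """(alpha, a) o (beta, b) = (alpha*beta + <a, b>, alpha*b + beta*a)."""
    (al, a), (be, b) = x, y
    return (al * be + a @ b, al * b + be * a)

n = 3
e = [(0.0, np.eye(n)[i]) for i in range(n)]
print(spin_product(e[0], e[0]))  # (1.0, zeros) -- the unit element
print(spin_product(e[0], e[1]))  # (0.0, zeros)
```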
The structure semigroup \(Str(\mathcal{B})\) is defined as the semigroup generated by automorphisms of \(\mathcal{B}\) and the operators \(Q_{a}\), which can be expressed in terms of the Jordan triple product \(\{a,x,b\}\) by the formula \(Q_{a}(x)=\{a,x,a\}.\) (The Jordan triple product is defined by the formula \(\{a,x,b\}=(a\circ x)\circ b+(x\circ b)\circ a-(a\circ b)\circ x.\)) The operator \(Q_{a}\) is quadratic with respect to \(a\); we use the notation \(Q_{\tilde{a},a}\) for the corresponding bilinear operator: \(Q_{\tilde{a},a}(x)=\{\tilde{a},x,a\}.\)
An element \(B\in Str({\cal B})\) (a structural transformation) obeys
\[Q_{Ba}=BQ_{a}B^{t} \tag{2}\]
where \(B\to B^{t}\) denotes an involution in the structure semigroup transforming every operator \(Q_{a}\) into itself and every automorphism into the inverse automorphism. The structure group \(Strg({\cal B})\) is generated by automorphisms of \({\cal B}\) and invertible operators \(Q_{a}.\)

The inner structure semigroup \(iStr({\cal B})\) is generated by the operators \(Q_{a}\). The inner structure group \(iStrg({\cal B})\) consists of the invertible elements of the inner structure semigroup.
We define the positive cone in a Jordan algebra as the smallest closed convex subset of \({\cal B}\) containing the unit element and invariant with respect to the operators \(Q_{a}.\)1 One can say also that the positive cone is the smallest closed convex subset of \({\cal B}\) that contains the unit element and is invariant with respect to the inner structure semigroup. The positive cone is invariant with respect to automorphisms, hence it is invariant also with respect to the structure semigroup. It is obvious that the cone contains all squares; this follows from \(Q_{a}(1)=a\circ a.\) For JB-algebras all elements of the cone can be represented as squares and the structure group acts transitively on the interior of the cone.

Footnote 1: We define a cone in a topological vector space as a closed convex subset that is invariant with respect to dilations \(x\rightarrow\lambda x\) where \(\lambda>0.\) We do not impose any further restrictions, hence in our terminology a vector space is a cone.
The positive cone is denoted by \({\cal B}_{+}\) and the dual cone is denoted by \({\cal B}_{+}^{\vee}.\)
Let us suppose that the Jordan algebra \({\cal B}\) is obtained from an associative algebra \({\cal A}\) as the set of self-adjoint elements with respect to an involution \({}^{*};\) this set is equipped with the operation \(a\circ b=\frac{1}{2}(ab+ba).\) Then \(Q_{a}(x)=axa,\ Q_{\tilde{a},a}(x)=\frac{1}{2}(\tilde{a}xa+ax\tilde{a}).\) It follows that the structure semigroup of \({\cal B}\) contains all maps \(x\to A^{*}xA\) where \(A\) is a self-adjoint or unitary element of \({\cal A}\) (for a self-adjoint element we get a map \(Q_{A},\) for a unitary element we get an automorphism).
If \({\cal A}\) is a \(C^{*}\)-algebra then every element of the form \(A^{*}A\) can be represented as a square of self-adjoint element, hence the cone in \({\cal B}\) is a smallest closed convex set containing all elements of the form \(A^{*}A\) where \(A\in{\cal A}.\) The dual cone consists of all linear functionals on \({\cal B}\) that are non-negative on the elements of the form \(A^{*}A\) (in agreement with the standard definition of positive linear functional on associative algebra with involution).
We consider also complexifications of Jordan algebras, regarded as complex Jordan algebras with involution. (If we start with a \(JB\)-algebra then the complexification is called a \(JB^{*}\)-algebra.) The cones associated with these algebras can be defined as the cones of their real parts. The Jordan triple product \(\{a,x^{*},b\}\) is defined as an operation antilinear with respect to the middle argument and linear with respect to the other arguments. We introduce the notation \(Q_{a}(x)=\{a,x,a^{*}\}\) where \(x\) is real. It is easy to check that \(Q_{a}\) belongs to the structure semigroup (it is equal to \(Q_{\alpha}+Q_{\beta}\) where \(\alpha\) and \(\beta\) are the real and imaginary parts of \(a\)). It follows that in the case when \(x\) is real and \(b=a^{*}\) the triple product belongs to the positive cone. Notice that the map \(Q_{a}\) is Hermitian with respect to \(a.\)
In geometric approach to scattering theory [7] we are starting with a cone of states \({\cal C}\subset{\cal L}\) and a group \({\cal U}\) consisting of automorphisms of the cone. ( Here \({\cal L}\) denotes a complete topological vector space.)
If we take as a starting point a Jordan algebra \({\cal B}\) we can take as \({\cal C}\) either the cone \({\cal B}_{+}\) or the dual cone. The group \({\cal U}\) can be identified with the structure group \(Strg({\cal B}).\) Sometimes it is convenient to fix a semiring \({\cal W}\) consisting of endomorphisms of the cone; if we are starting with Jordan algebra \({\cal B}\) the semiring \({\cal W}\) can be defined as the smallest semiring containing the structure semigroup \(Str({\cal B}).\)
## 3 (Quasi)particles
To define (quasi)particles and their scattering we should specify time translations \(T_{\tau}\) and spatial translations \(T_{\bf x}\) as elements of the structure group \(Strg({\cal B})\). In other words we should fix a homomorphism of the commutative translation group \({\cal T}\) to \(Strg({\cal B})\). The translation group acts also on the cones. We are using the same notations \(T_{\tau},T_{\bf x}\) for time and spatial translations of the cones. As usual we denote \(T_{\tau}T_{\bf x}\alpha\) as \(\alpha(\tau,{\bf x}).\) As in [5], [7] we define (quasi)particles as elementary excitations of translation-invariant stationary state.
Applying the general definition of geometric approach we can say that an elementary excitation of stationary translation- invariant state \(\omega\) is a quadratic or Hermitian map \(\sigma\) of the "elementary space" \(\mathfrak{h}\) into the cone of states \({\cal C}\). This map should commute with spatial and time translations. In addition one should fix a map of \(\mathfrak{h}\) into the space \(End({\cal L})\) of endomorphisms of \({\cal L}\) such that \(\sigma(f)=L(f)\omega\)[7]. In the framework of Jordan algebras \(\omega\) is an element of the positive cone \({\cal B}_{+}\) or of the dual cone \({\cal B}_{+}^{\vee}.\) For definiteness we assume that \(\omega\) is an element of the dual cone; then \({\cal L}\) should be identified with \({\cal B}^{\vee}.\) (Recall that that the elementary space \(\mathfrak{h}\) is defined as a pre Hilbert space of smooth fast decreasing functions depending on \({\bf x}\in{\mathbb{R}}^{d}\) and discrete variable \(i\in{\cal I}.\) The spatial translations act as shifts by \({\bf a}\in{\mathbb{R}}^{d}\), the time translations commute with spatial translations. We can consider real-valued or complex-valued functions; we should consider quadratic maps in the first case and Hermitian maps in the second case.)
Let us start with linear map \(\rho:\mathfrak{h}\rightarrow{\cal B}\) commuting with translations. Let us fix translation-invariant stationary state \(\omega\in{\cal B}_{+}^{\vee}\) obeying additional condition \(T^{t}\omega=\omega\) for all \(T\in{\cal T}\). ( Here \(T\to T^{t}\) stands for the involution in structure group entering (2).) Then _the map \(L:\mathfrak{h}\to End({\cal B}^{\vee})\) transforming \(x\in\mathfrak{h}\) into \(Q_{\rho(x)}\) specifies an elementary excitation of \(\omega\) as the map \(\sigma:x\to L(x)\omega\)_. To verify this statement we notice that the map \(\sigma\) is quadratic or Hermitian because \(L\) is quadratic (if the
is a Jordan algebra over \(\mathbb{R}\)) or Hermitian (if \(\mathcal{B}\) is a complex Jordan algebra with involution), the formula (2) implies that \(\sigma\) commutes with translations. In what follows we consider mostly elementary excitations constructed this way. Notice, however, that we can start with arbitrary linear map \(\rho:\mathfrak{h}\rightarrow\mathcal{B}\) and impose a weaker condition that the map \(\sigma:x\to L(x)\omega\) commutes with translations. Then \(\sigma\) can be regarded as elementary excitation. (Here again \(L\) transforms \(x\in\mathfrak{h}\) into \(Q_{\rho(x)}.\))
Let us consider as an example a Jordan algebra defined as a set of of self-adjoint elements of Weyl algebra. We define Weyl algebra corresponding to real pre Hilbert space \(\mathfrak{h}\) as a unital \(*\)- algebra generated by elements \(a(f),a^{+}(g)\) depending linearly of \(f,g\in\mathfrak{h}\) and obeying canonical commutation relations
\[[a(f),a(g)]=[a^{+}(f),a^{+}(g)]=0,[a(f),a^{+}(g)]=\langle f,g\rangle.\]
We define a map \(\rho\) of \(\mathfrak{h}\) into this Jordan algebra as a map sending \(f\in\mathfrak{h}\) to \(a(f)+a^{+}(f).\)
Let us assume that \(\mathfrak{h}\) is an elementary space. Then the translations in \(\mathfrak{h}\) induce translations in Weyl algebra and in the corresponding Jordan algebra. The map \(\rho\) commutes with translations. Taking as \(\omega\) the state corresponding to the Fock vacuum in positive cone or a dual cone we obtain an example of elementary excitation of this state.
The same construction works for Clifford algebra specified by canonical anticommutation relations.
Notice that the Jordan algebra corresponding to Clifford algebra can be regarded as \(JB\)-algebra, but starting with Weyl algebra we obtain a topological Jordan algebra, more precisely a Frechet Jordan algebra. (Frechet vector space is a complete topological vector space where the topology is specified by a countable family of seminorms. In what follows we are talking about \(JB\)-algebras, but our results can be generalized to Frechet analogs of these algebras.)
Let us discuss the relation of the above constructions to the construction of elementary excitations in the approach based on consideration of associative algebra \(\mathcal{A}\) with involution (\(*\)-algebra). The set of self-adjoint elements of such an algebra can be regarded as Jordan algebra \(\mathcal{B}\) over \(\mathbb{R};\) the complexification \(\mathbb{C}\mathcal{B}\) of this Jordan algebra can be identified with \(\mathcal{A}\) considered as complex Jordan algebra with involution. The elements of dual cones of these Jordan algebras can be identified with not necessarily normalized states of \(\mathcal{A}.\) Let us assume that spatial and time translations act on \(\mathcal{A}\) as automorphisms; this action generates an action of translations on the cones of Jordan algebras. Let us fix a translation invariant stationary state \(\omega\) of algebra \(\mathcal{A};\) corresponding elements of dual cones of Jordan algebras are denoted by the same symbol. Excitations of \(\omega\) can be regarded as elements of pre Hilbert space
\({\cal H}\) obtained by means of GNS construction applied to \(\omega,\) an elementary excitation is an isometric embedding \(\Phi(f)\) of elementary space \(\mathfrak{h}\) into \({\cal H}.\) Following [7] we represent \(\Phi(f)\) in the form \(B(f)\theta\) where \(B\) is a linear map \(\mathfrak{h}\rightarrow{\cal A}\) and \(\theta\in{\cal H}\) is a vector corresponding to the state \(\omega\). If elements \(B(f)\) are self-adjoint we can apply the above construction of elementary excitation of \(\omega\) considered as an element of a dual cone of the Jordan algebra \({\cal B}\) of self-adjoint elements of \({\cal A}\) taking \(\rho=B\). Then the state \(Q_{\rho(f)}\omega\) corresponds to the vector \(B(f)\theta.\) If \(B^{*}(f)=0\) a similar statement can be proved for \(\mathbb{C}{\cal B}\) ( for Jordan algebra with involution obtained by complexification of \({\cal B}\)).
## 4 Scattering
To analyze scattering in the framework of Jordan algebras we are starting with linear map \(\rho:\mathfrak{h}\rightarrow{\cal B}\) (not necessarily commuting with translations). We fix translation-invariant stationary state \(\omega\in{\cal B}_{+}^{\vee}\). The map \(L:\mathfrak{h}\to End({\cal B}^{\vee})\) transforming \(x\in\mathfrak{h}\) into \(Q_{\rho(x)}\) specifies an elementary excitation of \(\omega\) if the map \(\sigma:x\to L(x)\omega\) commutes with translations.
We define (following the general theory of [7]) the operator
\[L(f,\tau)=T_{\tau}(L(T_{-\tau}f))=T_{\tau}L(T_{-\tau}f)T_{-\tau}.\]
where \(f\in\mathfrak{h}.\) This operator is quadratic or Hermitian with respect to \(f\), therefore we can consider also the operator \(L(\tilde{f},f,\tau)\) that is linear with respect to \(\tilde{f}\) and linear or antilinear with respect to \(f;\) it coincides with \(L(f,\tau)\) for \(\tilde{f}=f^{*}.\) ( Notice that in the case of real vector spaces \(f^{*}=f.\)) Using the notation \(Q_{\tilde{a},a}x=\{\tilde{a},x,a^{*}\}\) we can write
\[L(\tilde{g},g,\tau)=T_{\tau}Q_{T_{-\tau}\rho(\tilde{g}),T_{-\tau}\rho(g)}T_{- \tau}=L(\tilde{g},g)T_{-\tau}^{t}\]
where \(L(\tilde{g},g)=Q_{\rho(\tilde{g}),\rho(g)}\) is a bilinear (or sesquilinear) form corresponding to the quadratic (or Hermitian) form \(L(g)=Q_{\rho(g)}.\)
The state
\[\Lambda(f_{1},\cdots,f_{n}|-\infty)=\lim_{\tau_{1}\rightarrow-\infty,\cdots, \tau_{n}\rightarrow-\infty}L(f_{1},\tau_{1}),...L(f_{n},\tau_{n})\omega \tag{3}\]
describes the collision of (quasi)particles with wave functions \(f_{1},...,f_{n}.\) We say that (3) is a scattering state (or, more precisely, an \(in\)-state).
The result of the collision can be characterized by the number
\[\lim_{\tau^{\prime}\rightarrow+\infty,\tau\rightarrow-\infty}\langle\alpha|L( g_{1},\tau^{\prime})...L(g_{m},\tau^{\prime})L(f_{1},\tau)...L(f_{n},\tau)|\omega\rangle \tag{4}\]
where \(\alpha\) is a stationary translation-invariant point of the cone \({\cal B}_{+}\) or of the larger cone \({\cal B}_{+}^{\vee\vee}\) (we use bra-ket notations). Comparing with the formulas of [7] we see that this number can be interpreted as a generalization of inclusive scattering matrix. More generally we can consider a functional
\[\sigma(\tilde{g}_{1}^{\prime},g_{1}^{\prime},...,\tilde{g}_{n^{\prime}}^{ \prime},g_{n^{\prime}}^{\prime},\tilde{g}_{1},g_{1},...,\tilde{g}_{n},g_{n})=\]
\[\langle\alpha|\lim_{\tau_{i}^{\prime}\rightarrow+\infty,\tau_{j}\rightarrow- \infty}L(\tilde{g}_{1}^{\prime},g_{1}^{\prime},\tau_{1}^{\prime})...L(\tilde{ g}_{n^{\prime}}^{\prime},g_{n^{\prime}}^{\prime},\tau_{n^{\prime}}^{\prime})L( \tilde{g}_{1},g_{1},\tau_{1})...L(\tilde{g}_{n},g_{n},\tau_{n})|\omega\rangle \tag{5}\]
that is linear or antilinear with respect to all of its arguments. It also can be regarded as inclusive scattering matrix.
It was proven in [7] that the limit (3) exists for \(f_{1},\cdots,f_{n}\) in a dense open subset of \(\mathfrak{h}\times\cdots\times\mathfrak{h}\) if
\[||[T_{\alpha}(L(\phi)),L(\psi)]||\leq\int d{\bf x}d{\bf x}^{\prime}D^{ab}({\bf x }-{\bf x}^{\prime})|\phi_{a}({\bf x})|\cdot|\psi_{b}({\bf x}^{\prime})| \tag{6}\]
_where \(D^{ab}({\bf x})\) tends to zero faster than any power as \({\bf x}\rightarrow\infty\) and \(\alpha\) runs over a finite interval._
Less formally we can formulate this condition as the requirement that the commutator \([T_{\alpha}(L(\phi)),L(\psi)]\) is small if the essential supports of functions \(\phi\) and \(\psi\) in coordinate representation are far away.
Similar conditions can be formulated for the existence of limits (4), (5) (for the existence of inclusive scattering matrix). Later we will formulate more concrete conditions for the existence of the above limits.
Notice that the linear map \(\rho:\mathfrak{h}\rightarrow{\cal B}\) can be regarded as a multicomponent generalized function \(\rho({\bf x})\) in coordinate representation or \(\rho({\bf k})\) in momentum representation. (This means that we formally represent \(\rho(\phi)\) as \(\int d{\bf x}\phi({\bf x})\rho({\bf x})\) or as \(\int d{\bf k}\phi({\bf k})\rho({\bf k}).\) Discrete indices are omitted in these formulas and in what follows.) We assume that the generalized functions \(\rho({\bf x}),\rho({\bf k})\) correspond to continuous functions denoted by the same symbols.
The expression (5) is linear with respect to its arguments, therefore it can be regarded as a generalized function
\[\sigma(\tilde{\bf x}_{1}^{\prime},{\bf x}_{1}^{\prime},...,\tilde {\bf x}_{n^{\prime}}^{\prime},{\bf x}_{n^{\prime}}^{\prime},\tilde{\bf x}_{1}, {\bf x}_{1},...,\tilde{\bf x}_{n},{\bf x}_{n})=\] \[\lim_{\tau_{i}^{\prime}\rightarrow+\infty,\tau_{j}\rightarrow- \infty}\langle\alpha|L(\tilde{\bf x}_{1}^{\prime},{\bf x}_{1}^{\prime},\tau_{1 }^{\prime})...L(\tilde{\bf x}_{n^{\prime}}^{\prime},{\bf x}_{n^{\prime}}^{ \prime},\tau_{n^{\prime}}^{\prime})L(\tilde{\bf x}_{1},{\bf x}_{1},\tau_{1})...L (\tilde{\bf x}_{n},{\bf x}_{n},\tau_{n})|\omega\rangle \tag{7}\]
or, in momentum representation,
\[\sigma(\tilde{\bf k}_{1}^{\prime},{\bf k}_{1}^{\prime},...,\tilde{\bf k}_{n^{\prime}}^{\prime},{\bf k}_{n^{\prime}}^{\prime},\tilde{\bf k}_{1},{\bf k}_{1},...,\tilde{\bf k}_{n},{\bf k}_{n})=\lim_{\tau_{i}^{\prime}\rightarrow+\infty,\tau_{j}\rightarrow-\infty}\langle\alpha|L(\tilde{\bf k}_{1}^{\prime},{\bf k}_{1}^{\prime},\tau_{1}^{\prime})...L(\tilde{\bf k}_{n^{\prime}}^{\prime},{\bf k}_{n^{\prime}}^{\prime},\tau_{n^{\prime}}^{\prime})L(\tilde{\bf k}_{1},{\bf k}_{1},\tau_{1})...L(\tilde{\bf k}_{n},{\bf k}_{n},\tau_{n})|\omega\rangle. \tag{8}\]

## 5 Asymptotic commutativity

Let us now formulate concrete conditions on the map \(\rho\) that guarantee the existence of the above limits. Suppose that \({\cal B}\) consists of the self-adjoint elements of an associative \(*\)-algebra. We say that \(\rho\) is asymptotically commutative (asymptotically anticommutative) if the commutator (anticommutator) obeys
\[||[\rho(\phi),\rho(\psi)]_{\mp}||<\int d{\bf x}d{\bf x}^{\prime}D^{ab}({\bf x}-{ \bf x}^{\prime})|\phi_{a}({\bf x})|\cdot|\psi_{b}({\bf x}^{\prime})| \tag{10}\]
where \(D^{ab}({\bf x})\) tends to zero faster than any power as \({\bf x}\to\infty\).
The operators \(L_{\phi}=Q_{\rho(\phi)}\) transform \(b\in{\cal B}\) into \(\rho(\phi)b\rho(\phi).\) It is easy to check that these operators obey (6), hence the limits we are interested in exist and we can consider the scattering of particles. In the case of asymptotic commutativity (anticommutativity) we are dealing with bosons (fermions).
For a Jordan algebra \({\cal B}\) coming from an associative algebra \({\cal A}\) with involution it is easy to formulate sufficient conditions for commutativity of the operators \(Q_{a}\) and \(Q_{b}.\) It is obvious that in the case when \(a\) and \(b\) commute in \({\cal A}\) or \(a\) and \(b\) anticommute in \({\cal A}\) (equivalently \(a\circ b=0\) in \({\cal B}\)) we have \(Q_{a}Q_{b}=Q_{b}Q_{a}.\) Similar statements are correct in any \(JB\)-algebra \({\cal B}\): _if the operators \(R_{a}\) and \(R_{b}\) commute or \(a\circ b=0\), then the operators \(Q_{a},Q_{b}\) commute_ [27], [25], [26]. (Here \(a,b\in{\cal B}\), and \(R_{a}\) stands for the operator of multiplication by \(a\) in \({\cal B}\). If \(R_{a}\) and \(R_{b}\) commute one says that \(a\) and \(b\) operator commute.)
It is natural to conjecture that these statements can be generalized in the following way:
If \(a\circ b\) is small, then the operators \(Q_{a}\) and \(Q_{b}\) almost commute.
If the operators \(R_{a}\) and \(R_{b}\) almost commute, then the operators \(Q_{a}\) and \(Q_{b}\) almost commute.
The first of these conjectures is proven in [8]. More precisely,
_If the norm of the Jordan product \(a\circ b\) of two elements of JB-algebra \({\cal B}\) is \(\leq\epsilon\) then_
\[||[Q_{a},Q_{b}]||\leq k(||a||,||b||)\sqrt{\epsilon}. \tag{11}\]
_Here \(\epsilon\geq 0\) and \(k\) is a polynomial function of \(||a||\) and \(||b||\)._
This statement allows us to give conditions for the existence of limits (3), (4), (5).
We impose the condition
\[||\rho({\bf x}+{\bf a})\circ\rho({\bf a})||<\frac{C_{n}}{1+||{\bf x}||^{n}} \tag{12}\]
for every natural number \(n.\) Then
\[||\rho(\phi)\circ\rho(\psi)||<\int d{\bf x}d{\bf x}^{\prime}D^{ab}({\bf x}-{ \bf x}^{\prime})|\phi_{a}({\bf x})|\cdot|\psi_{b}({\bf x}^{\prime})| \tag{13}\]
where \(D^{ab}({\bf x})\) tends to zero faster than any power as \({\bf x}\rightarrow\infty.\) Applying (11) we obtain (6) that implies the existence of limits (3), (4), (5) for dense sets of families of functions in the arguments of these expressions.
It is not clear whether the second conjecture is true. However, the identity \(Q_{a}=2R_{a}^{2}-R_{a^{2}}\) immediately implies the following weaker statement: if \(a\) and \(a^{2}\) almost operator commute with \(b\) and \(b^{2}\) then \(Q_{a}\) and \(Q_{b}\) almost commute:
\[||[Q_{a},Q_{b}]||\leq 8||a||\cdot||b||\cdot||[R_{a},R_{b}]||+4||a||\cdot||[R_{a},R_{b^{2}}||+4||b||\cdot||[R_{b},R_{a^{2}}]||+||[R_{a^{2}},R_{b^{2}}]||.\]
Using the identity \(Q_{e^{a}}=e^{2R_{a}}\) one can conclude that operators \(Q_{e^{a}}\) and \(Q_{e^{b}}\) almost commute if \(a\) and \(b\) almost operator commute.
One can use these statements to give conditions for the existence of scattering states and inclusive scattering matrix.
## 6 Green functions
Let us fix translation-invariant elements \(\alpha\in\mathcal{B}_{+},\)\(\omega\in\mathcal{B}_{+}^{\vee},\) where \(\mathcal{B}_{+}\) is a positive cone in Jordan algebra \(\mathcal{B}\) and \(\mathcal{B}_{+}^{\vee}\) denotes the dual cone. The quadratic operators \(Q_{A}\) act in both cones, the bilinear operators \(Q_{\tilde{A},A}\) act in \(\mathcal{B}\) and in the dual space \(\mathcal{B}^{\vee}.\) (Here \(A,\tilde{A}\) are elements of \(\mathcal{B}.\))
Let us fix elements \(\tilde{A}_{1},A_{1},...,\tilde{A}_{n},A_{n}\in\mathcal{B}.\) We introduce the notation \(Q_{i}(\tilde{\mathbf{x}},\tilde{\tau},\mathbf{x},\tau)\) for \(Q_{\tilde{A}_{i}(\tilde{\mathbf{x}},\tilde{\tau}),A_{i}(\mathbf{x},\tau)}.\)
We define (generalized) Green functions (GGreen functions) by the formula
\[G_{n}(\tilde{\mathbf{x}}_{1},\tilde{\tau}_{1},\mathbf{x}_{1},\tau_{1},..., \tilde{\mathbf{x}}_{n},\tilde{\tau}_{n},\mathbf{x}_{n},\tau_{n})=\]
\[\langle\alpha|T(Q_{1}(\tilde{\mathbf{x}}_{1},\tilde{\tau}_{1},\mathbf{x}_{1}, \tau_{1})...Q_{n}(\tilde{\mathbf{x}}_{n},\tilde{\tau}_{n},\mathbf{x}_{n},\tau _{n})|\omega\rangle\]
where \(T\) stands for the chronological ordering with respect to \(\tau_{i}^{\prime}=\frac{1}{2}(\tilde{\tau}_{i}+\tau_{i}).\)
Omitting the chronological ordering in this formula we obtain a definition of correlation functions. Notice that in the case when the Jordan algebra \(\mathcal{B}\) is constructed as the set of self-adjoint elements of a \(*\)-algebra \(\mathcal{A}\) we have
\[Q(\tilde{\mathbf{x}},\tilde{\tau},\mathbf{x},\tau)a=\frac{1}{2}(A(\mathbf{x},\tau)a\tilde{A}(\tilde{\mathbf{x}},\tilde{\tau})+\tilde{A}(\tilde{\mathbf{x} },\tilde{\tau})aA(\mathbf{x},\tau)).\]
Using this remark we can express correlation functions for the Jordan algebra \(\mathcal{B}\) in terms of correlation functions for the \(*\)-algebra \(\mathcal{A}.\)
We defined GGreen functions in the \((\mathbf{x},\tau)\)-representation; as always, taking Fourier transforms we obtain GGreen functions in the \((\mathbf{k},\tau)\)- and \((\mathbf{k},\varepsilon)\)-representations.
One can show (under some conditions) that the inclusive scattering matrix can be calculated in terms of the asymptotic behavior of GGreen functions in the \((\mathbf{k},\tau)\)-representation, or in terms of poles and residues in the \((\mathbf{k},\varepsilon)\)-representation. The proof is similar to the proof of the analogous statement in [6]. It is based on formula (8).
## 7 Generalizations
Let us consider a \(\mathbb{Z}_{2}\)-graded algebra \(\mathcal{A}.\) We denote by \(\mathcal{A}_{\Lambda}\) the set of even elements of the tensor product \(\mathcal{A}\otimes\Lambda\) where \(\Lambda\) is a Grassmann algebra. This set can be considered as an algebra; if for every \(\Lambda\) the algebra \(\mathcal{A}_{\Lambda}\) is a Jordan algebra we say that \(\mathcal{A}\) is a Jordan superalgebra. (See, for example, [28] for a more standard definition of Jordan superalgebras and for the main facts of the theory of Jordan superalgebras.) Similarly, if the algebra \(\mathcal{A}_{\Lambda}\) is a Lie algebra one says that \(\mathcal{A}\) is a Lie superalgebra. One can say that a Jordan superalgebra \(\mathcal{A}\) specifies a functor defined on the category of Grassmann algebras and taking values in the category of Jordan algebras. Analogously, a Lie superalgebra specifies a functor with values in Lie algebras, and a supergroup can be regarded as a functor with values in groups (all functors are defined on Grassmann algebras).

Starting with a differential algebra (\(=\mathbb{Z}_{2}\)-graded algebra equipped with an odd derivation \(d\) obeying \(d^{2}=0\)) we can define a differential Jordan superalgebra. The simplest way to construct a differential Jordan superalgebra is to take the tensor product of a Jordan algebra \(\mathcal{B}\) by a differential supercommutative algebra \(\mathcal{E}\) (for example, one can take as \(\mathcal{E}\) a free Grassmann algebra with the differential that calculates the cohomology of a Lie algebra).
Using these definitions one can generalize the statements above to Jordan superalgebras and (in the framework of BRST-formalism) to differential Jordan superalgebras.
These generalizations can be used to analyze interesting examples.
In particular, one can construct Poincare invariant theories starting with the simple exceptional Jordan algebra (the Albert algebra \(\mathfrak{h}_{3}(\mathbb{O})\)). Namely, one should notice that the structure group of this algebra contains a subgroup isomorphic to \(SO(1,9)\) (this subgroup was used in [9], [10]). Assuming that translations act trivially we obtain an action of the ten-dimensional Poincare group on the Albert algebra. Taking a tensor product of the Albert algebra and a (differential) supercommutative algebra where the Poincare group acts by automorphisms we obtain a (differential) Jordan superalgebra with an action of the Poincare group (and in general a non-trivial action of translations).
|
2302.11208 | KS-DETR: Knowledge Sharing in Attention Learning for Detection
Transformer | Scaled dot-product attention applies a softmax function on the scaled
dot-product of queries and keys to calculate weights and then multiplies the
weights and values. In this work, we study how to improve the learning of
scaled dot-product attention to improve the accuracy of DETR. Our method is
based on the following observations: using ground truth foreground-background
mask (GT Fg-Bg Mask) as additional cues in the weights/values learning enables
learning much better weights/values; with better weights/values, better
values/weights can be learned. We propose a triple-attention module in which
the first attention is a plain scaled dot-product attention, the second/third
attention generates high-quality weights/values (with the assistance of GT
Fg-Bg Mask) and shares the values/weights with the first attention to improve
the quality of values/weights. The second and third attentions are removed
during inference. We call our method knowledge-sharing DETR (KS-DETR), which is
an extension of knowledge distillation (KD) in the way that the improved
weights and values of the teachers (the second and third attentions) are
directly shared, instead of mimicked, by the student (the first attention) to
enable more efficient knowledge transfer from the teachers to the student.
Experiments on various DETR-like methods show consistent improvements over the
baseline methods on the MS COCO benchmark. Code is available at
https://github.com/edocanonymous/KS-DETR. | Kaikai Zhao, Norimichi Ukita | 2023-02-22T08:48:08Z | http://arxiv.org/abs/2302.11208v2 | # KS-DETR: Knowledge Sharing in Attention Learning for Detection Transformer
###### Abstract
Scaled dot-product attention applies a softmax function on the scaled dot-product of queries and keys to calculate weights and then multiplies the weights and values. In this work, we study how to improve the learning of scaled dot-product attention to improve the accuracy of DETR. Our method is based on the following observations: using ground truth foreground-background mask (GT Fg-Bg Mask) as additional cues in the weights/values learning enables learning much better weights/values; with better weights/values, better values/weights can be learned. We propose a triple-attention module in which the first attention is a plain scaled dot-product attention, the second/third attention generates high-quality weights/values (with the assistance of GT Fg-Bg Mask) and shares the values/weights with the first attention to improve the quality of values/weights. The second and third attentions are removed during inference. We call our method knowledge-sharing DETR (KS-DETR), which is an extension of knowledge distillation (KD) in the way that the improved weights and values of the teachers (the second and third attentions) are directly shared, instead of mimicked, by the student (the first attention) to enable more efficient knowledge transfer from the teachers to the student. Experiments on various DETR-like methods show consistent improvements over the baseline methods on the MS COCO benchmark. Code is available at [https://github.com/edocanonymous/KS-DETR](https://github.com/edocanonymous/KS-DETR).
## 1 Introduction
Detection transformer (DETR [3]), built on a transformer encoder-decoder architecture, greatly simplified the object detection pipeline of traditional object detection methods. It views object detection as a set prediction problem by bipartite matching to enforce unique predictions and outputs a fixed number of object classes and locations given a fixed set of learnable object queries.
Scaled dot-product attention, applied in the self-attention module in the encoder/decoder layers and cross-attention module in the decoder layers, is an essential component for DETR. Given input features \(X\), it first conducts linear projections on \(X\) to obtain queries \(Q\), keys \(K\) and values \(V\). Then it applies a softmax function on the scaled dot-product of \(Q\) and \(K\) to calculate weights (or attention map \(A\)) and then multiplies \(A\) and \(V\) to obtain the output features \(Y\).
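In code, the operation reads as follows (a minimal single-head sketch in PyTorch; variable names are ours and multi-head details are omitted):

```python
import torch

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """X: (N, d_model); Wq, Wk, Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # softmax over the scaled dot-product of queries and keys -> attention map A
    A = torch.softmax(Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5, dim=-1)
    Y = A @ V  # weights (attention map) times values -> output features
    return Y, A
```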
DETR-like methods have improved the attention learning of DETR significantly by, for instance, using multi-scale features with deformable attention [25], decoupling the attention into content attention and spatial attention [15], and improving the design of the learnable object queries [23, 13, 11]. In this paper, we provide a new perspective on improving attention learning. Our work is closely related to attention distillation.
Attention distillation [17, 10, 18, 22], as an application of knowledge distillation (KD [2, 9]), has been used to improve attention learning by forcing a student model to mimic the attention maps of a teacher model. The attention distillation loss between the attention maps of the teacher \(A^{T}\) and the student \(A^{S}\) is typically defined by
\[L=\frac{1}{H}\sum_{h=1}^{H}\mathrm{KL}(A_{h}^{T}\|A_{h}^{S}), \tag{1}\]
where \(\mathrm{KL}\) represents the Kullback-Leibler divergence loss and \(H\) is the number of attention maps. The motivation for conducting attention distillation is that if the student can learn better attention maps by mimicking the teacher's attention map, it is also likely to learn better output features \(Y\).

Figure 1: The DETR architecture (top) and our KS-DETR architecture (bottom). We replace the scaled dot-product attention of DETR with our triple-attention in the last encoder layer. Note that the position encoding in the encoder and decoder, and the learnable object queries in the decoder, are skipped in the architectures for clarity.
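For concreteness, here is a minimal PyTorch sketch of the loss in Eq. 1; the function name and the assumption that attention maps arrive as row-normalized tensors are ours, not from the paper.

```python
import torch.nn.functional as F

def attention_distillation_loss(teacher_maps, student_maps, eps=1e-8):
    """KL(A^T || A^S) averaged over the H attention maps (Eq. 1).

    teacher_maps, student_maps: lists of H tensors of shape
    (num_queries, num_keys), each row already softmax-normalized.
    """
    loss = 0.0
    for a_t, a_s in zip(teacher_maps, student_maps):
        # F.kl_div(input, target) computes KL(target || input),
        # with `input` given in log space.
        loss = loss + F.kl_div((a_s + eps).log(), a_t, reduction="batchmean")
    return loss / len(teacher_maps)
```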
However, there are two issues with attention distillation. First, a large trained teacher model is needed to provide the teacher attention map, and training such a model is time-consuming. Second, attention distillation ignores the gap in representation ability between the teacher values and the student values. The improvement in the output features can be difficult to achieve if the student has low-quality values due to limited model capacity, even if the student successfully learns the essential part of the teacher's attention maps.
To address the first issue, we propose to use the ground truth foreground-background mask (GT Fg-Bg Mask) as an additional cue to learn good teacher attention and values inside the student model. The GT Fg-Bg Mask is obtained by assigning 1 to pixels inside a ground truth bounding box and 0 otherwise. In object detection tasks, we need to decide the class label and exact boundaries for a predicted object. The GT Fg-Bg Mask assists the attention learning for both the localization and classification tasks as it clearly identifies which pixels are foreground/background. With the GT Fg-Bg Mask, we can obtain the teacher attention by building separate branches that learn the student attention and the teacher attention, as shown in Fig. 2 b, c and d.
For the second issue, we propose a knowledge-sharing strategy for more efficient knowledge transfer in attention learning. Our proposal is based on the following observation: the quality of \(A\) improves as \(V\) improves (and vice versa), since each has to adapt to the other toward learning good output features through backpropagation. Based on this observation, we design a triple-attention module in which the first attention is a plain scaled dot-product attention, the second attention generates high-quality \(A\) (with the assistance of the GT Fg-Bg Mask) and shares \(V\) with the first attention to improve the quality of \(V\), and the third attention generates high-quality \(V\) and shares \(A\) with the first attention to improve the quality of \(A\).
The differences between attention distillation and our knowledge sharing are shown in Fig. 2. The knowledge (attention maps and values) learned in the teacher-attention in our method is directly shared, instead of mimicked as in attention distillation, by the student. As a result, the gap in the representation ability of \(V\) between the teacher and student is eliminated.
Our contributions are as follows:
* We show that high-quality attention maps and values can be learned with the assistance of the GT Fg-Bg Mask without training a large teacher model.
* We propose a triple-attention module to exploit the high-quality attention maps and values to improve the learning of scaled dot-product attention and obtained consistent improvements over various DETR-like methods.
## 2 Related Work
### DETR methods
Follow-ups of DETR [3] have improved the attention learning of DETR significantly. Deformable DETR [25] replaces the global dense attention with a deformable one and limits the number of keys a query can attend to by sparsely sampling the key points. SMCA [6] constrains the cross-attention to focus more on locations that are likely to contain objects by using a Gaussian-weighted spatial map for predicted object centers and scales. DAB-DETR [13] uses box coordinates as queries in the decoder and restricts the regions of interest for cross-attention learning. Conditional DETR [15] decouples the cross-attention in the decoder into content attention and spatial attention. Our work is complementary to the above methods, as our method adds two teacher attentions to assist the learning of the normal attention.
### KD and attention distillation
KD [2, 9] was first introduced to compress the knowledge of a large teacher model to a small student model for classification tasks. The student is forced to not only predict the normal hard labels but also mimic the predicted category probability (soft labels) of the teacher, as the soft labels contain rich information that could not be encoded in the hard labels, e.g., the similarity of the output categories. Later works have extended KD to mimic, for instance, features and attention maps [7].
Early works of attention distillation for transformer architectures are from natural language processing. TinyBERT [10] applies \(\ell_{2}\) attention transfer loss to capture the attention knowledge from the teacher (BERT [5]) to the student (TinyBERT). MobileBERT [18] minimizes the KL-divergence between the attention maps of the teacher and student. MiniLM [22] proposes to mimic not only the distribution of the attention maps (i.e., the scaled dot-product of queries-keys) but also the relation between values (i.e., the scaled dot-product of values-values). However, MiniLM requires the student and teacher to have the same number of attention heads for distribution mimicking. To solve this problem, MiniLMv2 [21] proposes to completely replace the distribution mimicking with relation mimicking for the queries and keys. AttnDistill [20] applies attention distillation in a self-supervised vision transformer for classification tasks. It conducts interpolation and aggregation when the student and teacher have different numbers of attention heads and sizes of attention maps, respectively. The most related work to us
is [17]. It conducts attention distillation for detection transformers. The attention maps of the last encoder layer from a large teacher network are distilled to the same location of a student network with fewer encoder and decoder layers. However, all the above methods require a large pretrained teacher model and ignore the representation gap of the values as we discussed in Sec. 1. We address these two issues in our method. Moreover, the attention maps of student and teacher in our method always have the same dimensions (cf. Sec. 3.2) and thus our method is more general than attention distillation.
### Attention or feature learning with GT
Ground truth (GT) is typically used as the target in supervised tasks to assist the training of a model by imposing a loss that drives the predictions of the model toward the target. In anchor-based two-stage detection methods such as Faster R-CNN [16], one trick to improve the accuracy is to directly add GT boxes as high-quality proposals to increase the diversity of the input proposals in the second stage for better feature learning. Such high-quality proposals are typically difficult to obtain if we rely only on the proposals made in the first stage. DN-DETR [11] and DINO-DETR [24] adopt similar ideas in DETR: they add GT boxes (with random noise in object boundaries and class labels) as object queries to improve the learning of the decoder features. Our method differs from the above two methods in that we directly use GT boxes as additional features, instead of box priors, to improve attention learning and value learning.
## 3 Our Method
We first review DETR [3] and then introduce how we build our method upon DETR.
### Detr
The DETR [3] architecture consists of a (CNN) backbone, an encoder-decoder transformer, and a feed-forward network (FFN) as shown at the top of Fig. 1. The backbone generates features \(f\in\mathbb{R}^{H\times W\times C}\) from input image \(I\in\mathbb{R}^{H_{0}\times W_{0}\times 3}\)
Figure 2: Difference between attention distillation and our knowledge (attention maps \(A\) and values \(V\)) sharing framework with triple-attention. Dual-attentions in (b) and (c) are two variants of our triple-attention, obtained by removing a different teacher-attention module each time from our triple-attention. Attention distillation requires training a large teacher model to provide the teacher attention map \(A^{t}\), while our method learns \(A^{t}\) inside the student model by using GT Fg-Bg Mask \(M\) as additional cues. Here \(X^{s}\) is the input feature of the scaled dot-product attention of the student attention. We first obtain the teacher feature \(X^{t}\) by fusing \(X^{s}\) with \(M\) (details are given in Sec. 3.2). Then we derive \(A^{t}\) and teacher values \(V^{t}\) from \(X^{t}\). Note that \(Y^{t1}\), \(Y^{t2}\) in (d) are the outputs of the scaled dot-production of our two teacher attentions.
Before these features are fed to the encoder, they are first passed through a \(1\times 1\) convolution to reduce the channel dimension from \(C\) to \(d\) and then flattened to tokens \(X\in\mathbb{R}^{HW\times d}\) along the spatial dimension. \(X\) is further processed by a sequence of identical encoder layers in the encoder to obtain encoder features \(f^{e}\in\mathbb{R}^{HW\times d}\) and a sequence of identical decoder layers in the decoder to obtain decoder features \(f^{d}\in\mathbb{R}^{N\times d}\) with respect to \(N\) object queries. The object queries are learnable input embeddings and \(N\) is a hyperparameter of DETR. Finally, the FFN predicts the box coordinates and class label for each object query.
Each encoder layer consists of a self-attention and position-wise FFN. Without loss of generality, the forward propagation of the self-attention and FFN are given by
\[\begin{split} Z^{{}^{\prime}}&=X+\mathrm{LN}( \mathrm{MHA}(Q,K,V))\\ Z&=Z^{{}^{\prime}}+\mathrm{LN}(\mathrm{FFN}(Z^{{}^{ \prime}})),\end{split} \tag{2}\]
where \(Q\), \(K\) and \(V\) are learned from \(X\) and are the query, key and value, respectively, \(\mathrm{LN}\) indicates layer normalization [1] and \(\mathrm{MHA}\) indicates multi-head attention [19]. \(\mathrm{MHA}\) splits the input tokens \(X\) into \(h\) groups \(X_{1},...,X_{h}\) (e.g., \(h=8\)) along the channel dimension, conducts scaled dot-product attention on each group separately and then applies a linear projection to the outputs of \(h\) heads to generate the final output. The linear projection is given by
\[\mathrm{MHA}(Q,K,V)=Concat(head_{1},...,head_{h})W^{O}, \tag{3}\]
where \(head_{i}=Attention(X_{i}W_{i}^{Q},X_{i}W_{i}^{K},X_{i}W_{i}^{V})\) represents the output of a single head and is estimated by
\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V. \tag{4}\]
Here \(softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)\) is often referred to as the attention map \(A\). As the transformer architecture is permutation-invariant, positional encoding (\(PE\)) is added to \(X\) to obtain \(Q\), \(K\) and \(V\) in each attention layer, i.e.,
\[[Q;K;V]=[(X+PE)W^{Q};(X+PE)W^{K};XW^{V}] \tag{5}\]
where \(W^{Q}\in\mathbb{R}^{d_{model}\times d_{q}},W^{K}\in\mathbb{R}^{d_{model}\times d _{k}}\), and \(W^{V}\in\mathbb{R}^{d_{model}\times d_{v}}\) are the parameters of the scaled dot-product attention, and \(d_{model}=d/h\).
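A minimal PyTorch sketch of Eqs. 4-5 may make the notation concrete; the shapes and function names are our assumptions.

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Eq. 4: A = softmax(QK^T / sqrt(d_k)); output Y = AV."""
    d_k = Q.size(-1)
    A = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return A @ V, A

def qkv_projections(X, PE, W_q, W_k, W_v):
    """Eq. 5: position encoding is added for queries and keys, not values."""
    return (X + PE) @ W_q, (X + PE) @ W_k, X @ W_v
```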
### Ks-Detr
Our motivation is to improve the output features \(Y\) of the scaled dot-product attention by improving both the attention maps \(A\) and values \(V\) through knowledge-sharing. Our hypothesis is that if \(A\) (or \(V\)) is replaced with a high-quality counterpart (i.e., \(A^{t}\) or \(V^{t}\)), then \(V\) (or \(A\)) should also improve, as it has to adapt itself to fit \(A^{t}\) (or \(V^{t}\)) through backpropagation. As a result, the improved \(V\) or \(A\) can be directly exploited by the normal scaled dot-product attention.

Figure 3: Details of the triple-attention used in our method (right) and the normal scaled dot-product attention (left). The \(M\) in the red block represents the GT Fg-Bg Mask.
We verify the effectiveness of our knowledge-sharing idea in the encoder due to its simplicity: the encoder layers do not need to learn the encoder-decoder attention (or cross-attention) as the decoder layers do. Our method is designed to be general for DETR-like methods that use an encoder-decoder architecture. The framework of KS-DETR is shown at the bottom of Fig. 1. We replace the scaled dot-product attention in the last encoder layer with our triple-attention, with the assistance of the GT Fg-Bg Mask.
**GT Fg-Bg Mask generation.** We first separate the foreground and background in the image space by
\[M_{I}(i,j)=\begin{cases}1&\text{if }(i,j)\in\text{GT Boxes}\\ 0&\text{otherwise}\end{cases} \tag{6}\]
where \(i\) and \(j\) represent the image coordinates on the horizontal and vertical directions, respectively, and GT Boxes represent the ground truth bounding boxes. We then bilinearly interpolate \(M_{I}\) to the size of the output feature maps of the backbone. Finally, we flatten the interpolated mask along the spatial dimension and obtain the binary mask \(M\in\mathbb{R}^{HW\times 1}\).
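A minimal PyTorch sketch of this mask construction (Eq. 6 plus the bilinear resize and flatten); the function name and box format are our assumptions.

```python
import torch
import torch.nn.functional as F

def gt_fg_bg_mask(gt_boxes, img_h, img_w, feat_h, feat_w):
    """Eq. 6: 1 inside any GT box, 0 elsewhere, then bilinearly resized to
    the backbone feature map size and flattened to M in R^{HW x 1}.

    gt_boxes: iterable of (x1, y1, x2, y2) in image coordinates.
    """
    m = torch.zeros(img_h, img_w)
    for x1, y1, x2, y2 in gt_boxes:
        m[int(y1):int(y2), int(x1):int(x2)] = 1.0
    m = F.interpolate(m[None, None], size=(feat_h, feat_w),
                      mode="bilinear", align_corners=False)
    return m.reshape(-1, 1)
```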
**Triple-attention.** Our triple-attention module consists of a student attention and two teacher attentions. It is derived from three groups of (\(Q,K,V\)) with repeated elements as shown in Fig. 3. The three groups are: 1) (\(Q,K,V\)) for the first plain attention \(Attn1\), 2) (\(Q,K,V^{t}\)) for the second attention \(Attn2\) (\(Q\) and \(K\) are shared with \(Attn1\)), and 3) (\(Q^{t},K^{t},V\)) for the third attention \(Attn3\) (\(V\) is shared with \(Attn1\)). Here the superscript \(t\) indicates _teacher_, a term borrowed from knowledge distillation. Note that there are other combinations of (\(Q,K,V\)), for instance, (\(Q^{t},K^{t},V^{t}\)), (\(Q^{t},K,V\)) and (\(Q,K^{t},V\)). However, (\(Q^{t},K^{t},V^{t}\)) cannot share \(A\) or \(V\) with \(Attn1\), and (\(Q^{t},K^{t},V\)) for \(Attn3\) is favored over (\(Q^{t},K,V\)) and (\(Q,K^{t},V\)) because both \(Q^{t}\) and \(K^{t}\) contribute to the learning of high-quality \(A^{t}\).
As with the student model that is equal to the original DETR shown in Eq. 5, \(Q^{t},K^{t},V^{t}\) are obtained from the linear projections of the teacher feature \(X^{t}\) (explained below) by
\[[Q^{t};K^{t};V^{t}]=[(X^{t}+PE)W^{Q^{t}};(X^{t}+PE)W^{K^{t}};X^{t}W^{V^{t}}]. \tag{7}\]
**Teacher feature generation.** The teacher feature \(X^{t}\) is the output of the fusion of the input tokens \(X\) and the GT Fg-Bg Mask \(M\). We could generate \(X^{t}\) by simply concatenating \(X\) and \(M\) along the channel dimension
\[X^{t}=concat(X,M),X\in\mathbb{R}^{HW\times d},M\in\mathbb{R}^{HW\times 1}. \tag{8}\]
However, the concatenation changes the feature dimension and causes the resulting number of channels to be non-divisible by the number of heads when calculating multi-head attention.
Here we instead propose a sparse MLP (sMLP) that applies a position-wise linear projection followed by a ReLU activation to only the foreground tokens by
\[\begin{split} X^{t}&=sMLP(X,M)\\ &=X\odot(1-M)+Relu(XW^{X})\odot M,\end{split} \tag{9}\]
where \(W^{X}\in\mathbb{R}^{d\times d}\) represents the parameters introduced by the position-wise sMLP operation and \(\odot\) represents element-wise multiplication.
The increase of parameters for the sMLP is \(d(d+1)\), where \(d\) is the embedding dimension (e.g., 256). The ReLU activation is necessary to change the feature distributions of the foreground tokens so that the foreground tokens always have non-negative values to encode the foreground/background information.
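Eq. 9 translates almost directly into code; here is a minimal PyTorch sketch (the class name is ours).

```python
import torch
import torch.nn as nn

class SparseMLP(nn.Module):
    """Eq. 9: a position-wise linear projection + ReLU applied only to the
    foreground tokens, selected by the GT Fg-Bg Mask M."""

    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)  # adds the d*(d+1) parameters noted above

    def forward(self, X, M):
        # X: (HW, d) tokens, M: (HW, 1) binary mask
        return X * (1 - M) + torch.relu(self.proj(X)) * M
```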
**Outputs of triple-attention.** With the three groups of \((Q,K,V)\), we generate the outputs \(Z^{Attn_{1}}\), \(Z^{Attn_{2}}\) and \(Z^{Attn_{3}}\) for \(Attn1\), \(Attn2\) and \(Attn3\), respectively, by sharing the subsequent modules (e.g., \(\mathrm{MHA}\) and \(\mathrm{FFN}\)) in the same encoder layer with Eq. 2. The workflow for obtaining \(Z^{Attn_{1}}\), \(Z^{Attn_{2}}\) and \(Z^{Attn_{3}}\) is shown in Fig. 3. These outputs are further processed separately by the shared subsequent decoder layers to keep the training of each attention independent. The second and third attentions as well as the sMLP are removed during inference, so no additional parameters or computation overhead is introduced after training.
**Loss function.** Our loss function is a combination of the default loss of the original DETR applied to the predictions using the outputs \(Z^{Attn_{1}}\), \(Z^{Attn_{2}}\) and \(Z^{Attn_{3}}\) of the triple-attentions:
\[L=L_{det}^{Attn_{1}}+L_{det}^{Attn_{2}}+L_{det}^{Attn_{3}}. \tag{10}\]
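Putting the pieces together, the following is a schematic sketch of the triple-attention forward pass and the loss in Eq. 10; `proj_s`/`proj_t` stand for the student/teacher projections of Eqs. 5 and 7 and are our placeholder names.

```python
def triple_attention(X_s, M, PE, smlp, proj_s, proj_t):
    """Sketch of the sharing scheme in Fig. 3 (single head, no batching)."""
    X_t = smlp(X_s, M)                                 # teacher feature (Eq. 9)
    Q, K, V = proj_s(X_s, PE)                          # student projections (Eq. 5)
    Qt, Kt, Vt = proj_t(X_t, PE)                       # teacher projections (Eq. 7)
    Y1, _ = scaled_dot_product_attention(Q, K, V)      # Attn1: plain attention
    Y2, _ = scaled_dot_product_attention(Q, K, Vt)     # Attn2: shares Q, K; high-quality V^t
    Y3, _ = scaled_dot_product_attention(Qt, Kt, V)    # Attn3: high-quality A^t; shares V
    return Y1, Y2, Y3

# Each output passes through the shared MHA/FFN and decoder (Eq. 2), and the
# total loss sums the detection loss over the three branches (Eq. 10); Attn2,
# Attn3 and the sMLP are dropped at inference time.
```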
## 4 Experiments
### Main results
We use the COCO dataset [12] to evaluate our method and report the AP on the COCO 2017 validation set. We compare our method with the following baseline methods: Conditional-DETR [15], DAB-DETR [13], DN-DETR [11], Deformable-DETR [25] and DN-Deformable-DETR [11]. Among them, Deformable-DETR [25] and DN-Deformable-DETR [11] use multi-scale features, and the rest use single-scale features. We use ResNet50 (R50 [8]) and ResNet101
(R101 [8]) as the backbones. For DAB-DETR [13], we further test the transformer backbone Swin-T [14]. We follow the standard training procedures of DETR for the 12-epoch and 50-epoch training with batch size 16.
The results are shown in Table 1. We see that our method consistently improves all baseline methods for all tested backbones. There are two interesting patterns. First, the improvement for a powerful backbone tends to be larger than that for a weak backbone. Taking DAB-DETR [13] as an example, the improvements over the baseline are 1.9, 1.3 and 0.6 AP for Swin-T, R101 and R50, respectively. Second, training with short schedules (e.g., 12 epochs) generally exhibits larger improvements than training with long schedules (e.g., 50 epochs), suggesting that our method speeds up training. For instance, the improvements for Deformable-DETR-R101 [25] at 12, 24 and 50 epochs are 1.6, 1.4 and 0.9 AP, respectively. The fast convergence of our method can also be seen in Fig. 4 by comparing the baseline with \(Attn1\) (the plain attention in our triple-attention module).
\begin{table}
\begin{tabular}{l l l l l l l l} \hline Model & \multicolumn{1}{c}{\#Epochs} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{AP\({}_{50}\)} & \multicolumn{1}{c}{AP\({}_{75}\)} & \multicolumn{1}{c}{AP\({}_{S}\)} & \multicolumn{1}{c}{AP\({}_{M}\)} & \multicolumn{1}{c}{AP\({}_{L}\)} \\ \hline Conditional-DETR-R50 & 50 & 41.3 & 62.5 & 43.6 & 21.0 & 44.6 & 59.6 \\ KS-Conditional-DETR-R50 & 50 & 42.1 (+0.8) & 63.4 & 44.8 & 21.4 & 45.9 & 60.6 \\ Conditional-DETR-R101 [15] & 50 & 42.8 & 63.7 & 46.0 & 21.7 & 46.6 & 60.9 \\ KS-Conditional-DETR-R101 & 50 & 43.4 (+0.6) & 64.8 & 46.7 & 23.5 & 47.2 & 62.3 \\ \hline DAB-DETR-R50 & 50 & 43.0 & 63.5 & 45.8 & 22.4 & 46.6 & 61.3 \\ KS-DAB-DETR-R50 & 50 & 43.9 (+0.9) & 64.2 & 46.8 & 23.9 & 48.0 & 62.2 \\ DAB-DETR-R101 [4] & 50 & 44.0 & 62.9 & 47.6 & 23.8 & 48.4 & 61.8 \\ KS-DAB-DETR-R101 & 50 & 45.3 (+1.3) & 65.4 & 48.8 & 24.3 & 49.6 & 63.6 \\ DAB-DETR-Swin-T [4] & 50 & 45.2 & 66.8 & 47.8 & 24.2 & 49.0 & 64.8 \\ KS-DAB-DETR-Swin-T & 50 & 47.1 (+1.9) & 68.3 & 50.2 & 27.1 & 51.3 & 66.5 \\ \hline DN-DETR-R50 & 50 & 44.7 & 64.8 & 47.5 & 23.4 & 48.9 & 63.7 \\ KS-DN-DETR-R50 & 50 & 45.2 (+0.5) & 64.9 & 48.2 & 24.4 & 49.3 & 63.0 \\ DN-DETR-R101 & 50 & 45.6 & 65.9 & 49.0 & 24.4 & 49.9 & 64.0 \\ KS-DN-DETR-R101 & 50 & 46.5 (+0.9) & 66.2 & 49.8 & 25.7 & 50.8 & 65.2 \\ \hline Other Multi-scale DETR variants & & & & & & & \\ \hline Deformable-DETR-R50 & 12 & 35.3 & 51.8 & 38.2 & 19.2 & 39.1 & 47.2 \\ KS-Deformable-DETR-R50 & 12 & 36.4 (+1.1) & 53.5 & 39.5 & 20.1 & 39.5 & 48.2 \\ Deformable-DETR-R101 & 12 & 36.8 & 54.2 & 40.0 & 21.1 & 40.3 & 49.2 \\ KS-Deformable-DETR-R101 & 12 & 38.4 (+1.6) & 55.9 & 41.8 & 21.5 & 42.3 & 51.6 \\ DN-Deformable-DETR-R50 [11] & 12 & 43.4 & 61.9 & 47.2 & 24.8 & 46.8 & 59.4 \\ KS-DN-Deformable-DETR-R50 & 12 & 46.5 (+2.1) & 63.9 & 50.4 & 28.8 & 49.5 & 61.5 \\ \hline Deformable-DETR-R101 & 24 & 41.6 & 59.6 & 45.3 & 24.3 & 45.2 & 55.6 \\ KS-Deformable-DETR-R101 & 24 & 43.0 (+1.4) & 61.1 & 47.1 & 24.9 & 46.6 & 57.0 \\ \hline Deformable-DETR-R50 [25] & 50 & 44.1 & 62.6 & 47.7 & 26.4 & 47.1 & 58.0 \\ KS-Deformable-DETR-R50 & 50 & 44.8 (+0.7) & 62.9 & 48.7 & 26.9 & 48.4 & 58.9 \\ Deformable-DETR-R101 & 50 & 45.1 & 63.5 & 49.1 & 27.4 & 48.8 & 59.9 \\ KS-Deformable-DETR-R101 & 50 & 46.0 (+0.9) & 64.3 & 50.1 & 28.9 & 49.7 & 60.3 \\ \hline \end{tabular}
\end{table}
Table 1: Results of DETR baseline methods and our KS-DETR.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline Model & Exp. & Sharing \(V\) & Sharing \(A\) & \(\mathrm{AP}\) & \(\mathrm{AP}_{50}\) & \(\mathrm{AP}_{75}\) & \(\mathrm{AP}_{S}\) & \(\mathrm{AP}_{M}\) & \(\mathrm{AP}_{L}\) \\ \hline \multirow{4}{*}{DAB-DETR-R50} & Baseline & & & 43.0 & 63.5 & 45.8 & 22.4 & 46.6 & 61.3 \\ & Dual-attention & ✓ & & 43.4 & 63.5 & 46.0 & 23.4 & 47.2 & 61.3 \\ & Dual-attention & & ✓ & 43.6 & 63.8 & 46.4 & 24.6 & 47.6 & 61.8 \\ & Triple-attention & ✓ & ✓ & 43.9 & 64.2 & 46.8 & 23.9 & 48.0 & 62.2 \\ \hline \multirow{4}{*}{DAB-DETR-R101} & Baseline & & & 44.0 & 62.9 & 47.6 & 23.8 & 48.4 & 61.8 \\ & Dual-attention & ✓ & & 45.0 & 65.3 & 47.8 & 25.6 & 48.9 & 63.3 \\ \cline{1-1} & Dual-attention & & ✓ & 45.2 & 65.4 & 48.4 & 25.9 & 49.4 & 62.7 \\ \cline{1-1} & Triple-attention & ✓ & ✓ & 45.3 & 65.4 & 48.8 & 24.3 & 49.6 & 63.6 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation results of our KS-DETR built upon DAB-DETR-R50 and DAB-DETR-R101. All the models are trained with 50 epochs.
### Effects of GT Fg-Bg Mask on learning of teacher attention and values
The design of our method is to first learn high-quality \(A\) and \(V\) and then drive the shared \(V\) and \(A\) to a higher level of quality. Here we verify whether we have indeed learned high-quality \(A\) and \(V\) for the two teacher attentions. We plot the accuracies of the predictions for each attention (\(Attn1\), \(Attn2\) and \(Attn3\)) of our triple-attention in Fig. 4. We see that the two teacher attentions (\(Attn2\) and \(Attn3\)) outperform the student attention \(Attn1\) by a large margin (10+ AP) with the assistance of the GT Fg-Bg Mask.
All the parameters of the model used for making predictions for \(Attn1\) and \(Attn2\) are shared, except the ones used for estimating their attention maps. Thus it is clear that \(Attn2\) has learned much higher-quality attention maps than \(Attn1\). If we replace the attention map of \(Attn1\) with that of \(Attn2\), we immediately obtain the same high accuracy as \(Attn2\). Similarly, we confirm that high-quality values are learned for the third attention by comparing the accuracies of \(Attn1\) and \(Attn3\).
### Effects of knowledge-sharing in triple-attention
We verify the effectiveness of the knowledge-sharing strategy in our second and third attentions. We compare our triple-attention with two dual-attention modules: 1) a plain attention and a second attention with high-quality values (with the assistance of GT Fg-Bg Mask) but sharing \(A\) with the plain attention (Fig. 2b), and 2) a plain attention and a second attention with high-quality attention maps but sharing \(V\) with the plain attention (Fig. 2c).
We use DAB-DETR-R50 and DAB-DETR-R101 as the baseline methods. The results are shown in Table 2. We see that the dual-attentions (sharing \(V\) or sharing \(A\)) outperform the baseline (single attention), and the triple-attention exhibits the largest improvements. The experiments demonstrate the effectiveness of sharing \(V\) and \(A\) in attention learning and justify our design of the triple-attention. However, the improvements of the triple-attention over the dual-attentions are small; this is an issue we will address in future work.
## 5 Conclusions
In this paper, we propose a triple-attention module to improve the learning of scaled dot-product attention in the detection transformer. We use the GT Fg-Bg Mask as an additional cue to learn good teacher attention maps and values, eliminating the need to train a large teacher model. We design two teacher attentions to improve the learning of the attention maps and values of the plain student attention by sharing attention maps and values. Our method exhibits consistent improvements over various DETR-like baseline methods.
|
2308.04138 | Large Language Model Prompt Chaining for Long Legal Document
Classification | Prompting is used to guide or steer a language model in generating an
appropriate response that is consistent with the desired outcome. Chaining is a
strategy used to decompose complex tasks into smaller, manageable components.
In this study, we utilize prompt chaining for extensive legal document
classification tasks, which present difficulties due to their intricate
domain-specific language and considerable length. Our approach begins with the
creation of a concise summary of the original document, followed by a semantic
search for related exemplar texts and their corresponding annotations from a
training corpus. Finally, we prompt for a label - based on the task - to
assign, by leveraging the in-context learning from the few-shot prompt. We
demonstrate that through prompt chaining, we can not only enhance the
performance over zero-shot, but also surpass the micro-F1 score achieved by
larger models, such as ChatGPT zero-shot, using smaller models. | Dietrich Trautmann | 2023-08-08T08:57:01Z | http://arxiv.org/abs/2308.04138v1 | # Large Language Model Prompt Chaining for
###### Abstract
Prompting is used to guide or steer a language model in generating an appropriate response that is consistent with the desired outcome. Chaining is a strategy used to decompose complex tasks into smaller, manageable components. In this study, we utilize prompt chaining for extensive legal document classification tasks, which present difficulties due to their intricate domain-specific language and considerable length. Our approach begins with the creation of a concise summary of the original document, followed by a semantic search for related exemplar texts and their corresponding annotations from a training corpus. Finally, we prompt for a label - based on the task - to assign, by leveraging the in-context learning from the few-shot prompt. We demonstrate that through prompt chaining, we can not only enhance the performance over zero-shot, but also surpass the micro-F1 score achieved by larger models, such as ChatGPT zero-shot, using smaller models.
**Keywords**: Prompt Chaining, Prompt Engineering, Long Legal Documents, Legal NLP, Legal AI
## 1 Introduction
The legal domain, with its often challenging tasks and complex long documents, is an important field of study for natural language processing (NLP) and machine learning [1, 2]. Long legal document text classification tasks can be challenging due to several factors, including large size, complex language and specific vocabulary, highly specialized content structure, imbalanced data (many common cases vs. a long-tail of peculiar ones), subjectivity (open to interpretation and debate), and the need for expensive, manual annotations from subject matter experts.
The recent surge in the utilization of legal benchmarks has stimulated a proliferation of innovative solutions harnessing pre-trained language models [3]. Conventionally, these methodologies necessitate an intensive annotation process (though some utilize metadata annotations), followed by a costly fine-tuning process for the models [3, 4].
The advent of large-scale pre-training of large language models (LLMs) has presented an opportunity to leverage them directly through natural language prompting [5], circumventing the need for additional task-dependent fine-tuning. Prompting involves providing a specific instruction, query, or question to an LLM to generate a specific output or response. The input, or prompt, steers the system towards producing a response meaningfully related to it.
The technique of prompt chaining [6, 7] has shown promise in NLP, sequentially linking multiple prompts to guide the generation process (Fig. 1). Through the utilization of consecutive prompts, the system can produce more contextually relevant responses for each step and more complex responses for the overall task.
Prompt chaining proves particularly advantageous in long legal document classification, improving task performance, efficiency, flexibility, and consistency via the inspection of individual steps in the chain [8]. The technique enhances the interpretability of the overall classification and permits debugging of complex reasoning tasks upon failure [9].
Figure 1: Prompt Chaining for Legal Document Classification
Overall, prompt chaining is a valuable tool in the classification of long legal documents, helping to improve the performance and efficiency of the classification process. Prompt chaining allows language models to build on their previous outputs and provide more nuanced classification results, and it can be customized to meet the specific needs of the legal document classification task.
Our contributions in this work are:
* We show that we can successfully chain prompts for legal document classification tasks (visualized in Fig. 1).
* We apply prompt chaining on one binary classification task and on one multi-class text classification task.
* We improve the results over zero-shot prompting with our chaining approach.
* Our prompt chaining approach even outperforms zero-shot ChatGPT prompting on the micro-f1 score.
## 2 Related Work
In terms of related literature we focus on the legal document classification work and on current prompting approaches, as well as the combination of the two fields.
### Legal Document Classification
Documents, with their characteristic long textual data, often pose significant challenges for automated machine learning methods in processing and classification tasks [10, 11]. These challenges become more pronounced in the legal domain due to additional complexities such as intricate grammar, nested sentences, domain-specific vocabulary, and extensive use of abbreviations [12].
The _LexGLUE_ benchmark [3] represents a comprehensive consolidation of recent datasets involving long legal documents, exclusively in the English language. It includes legal documents related to EU & US law, as well as contracts, with tasks pertaining to multi-label and multi-class classification and multiple-choice question answering. In [13], the authors evaluated a hierarchical approach for modeling long documents, and [14] investigated strategies to augment the context window of transformers for domain-specific tasks, including the aforementioned _LexGLUE_ benchmark. While benchmarks in other languages do exist, such as _MultiEURLEX_[4], our work focuses solely on the English language.
### Prompting
Several noteworthy projects aim to consolidate, evaluate, and standardize prompting approaches across diverse tasks and domains. The two most substantial projects include OpenPrompt [15] and PromptSource [16].
OpenPromptprovides a user-friendly and research-friendly toolkit for prompt-learning in pre-trained language models (PLMs). The advent of prompt-learning in natural language processing sparked a need for a standardized implementation framework. OpenPrompt caters to this need by delivering a modular and extendable toolkit that accommodates various PLMs, task formats, and prompting modules in a unified paradigm.
PromptSourceis a toolkit designed to facilitate the development and sharing of natural language prompts for training and querying language models in NLP. It offers a templating language for creating data-linked prompts, a swift iterative development interface, and a community-driven set of guidelines for contributing new prompts. Currently, the platform offers over 2,000 prompts for approximately 170 datasets, promoting collaboration and efficient utilization of prompts for language model training and querying.
Prompt Chainingas a concept was explored in [8], where prompts were chained using a visual program editor. Now, there are frameworks with prompt chaining at their core, such as LangChain [17], LLamaIndex [18], and MiniChain [19].
### Legal Prompting
Recent research has combined prompting and natural language tasks in the legal domain. In [20], the authors evaluated zero-shot prompting on the legal judgment prediction task using multilingual data from the European Court of Human Rights (ECHR) and the Federal Supreme Court of Switzerland (FSCS). Meanwhile, [21] appraised _GPT-3's_ zero- and few-shot capabilities for legal reasoning tasks on the COLIEE entailment task (using English translations of the Japanese bar exam).
The GPT-3.5 model was evaluated on the US Bar Exam [22], and GPT-4 [23] has demonstrated proficiency in passing multiple tests through zero-shot prompting.
## 3 Data
The datasets utilized in our study are sourced from two widely recognized benchmarks comprising lengthy documents in the legal domain: the European Court of Human Rights (ECHR) and the Supreme Court of the United States (SCOTUS). These datasets form part of the _LexGLUE_ benchmark [3].
### Echr
The ECHR dataset comprises approximately \(11,000\) cases sourced from the European Court of Human Rights public database. This dataset is split into training, development, and test sets. Each case includes factual paragraphs, along with the corresponding ECHR articles that were violated or alleged to be violated. The original task involves predicting the violated human rights articles from a case's facts. However, for the purpose of our study, we have simplified this task to a binary classification problem: we distinguish cases based on whether there was a violation of any human rights article, irrespective of which specific articles were violated.
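As a concrete illustration, this binarization takes only a few lines; we assume here the HuggingFace `lex_glue`/`ecthr_a` loader, which is one common distribution of this data.

```python
from datasets import load_dataset

ds = load_dataset("lex_glue", "ecthr_a")  # assumption: HF LexGLUE loader

def to_binary(example):
    # 1 if any human-rights article was violated, 0 otherwise
    example["binary_label"] = int(len(example["labels"]) > 0)
    return example

ds = ds.map(to_binary)
```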
### Scotus
The SCOTUS dataset provides insight into the highest federal court in the USA, which handles complex or controversial cases unresolved by lower courts. This dataset combines information from SCOTUS opinions and the Supreme Court DataBase (SCDB). The SCDB offers metadata for all cases spanning from 1946 to 2020. Utilizing the SCDB, the dataset classifies court opinions into 14 issue areas (refer to App. B). The dataset is divided chronologically into training (#5000 samples, 1946-1982), development (#1400 samples, 1982-1991), and test (#1400 samples, 1991-2016) sets, each covering a distinct time period.
## 4 Models
Our experimental design incorporates both general-purpose text generation models and task-specific summarization models.
### Generation Models
We used two different 20 billion parameter LLMs in our text generation steps. Both of the models have a context window of up to 2048 tokens.
#### 4.1.1 GPT-NeoX
The _GPT-NeoX_ model [24] is an autoregressive language model, specifically a decoder-only model1, trained on the Pile dataset [25]. This models' weights are openly available under a permissive license (Apache 2.0 2).
Footnote 1: [https://hf.co/EleutherAI/gpt-neox-20b](https://hf.co/EleutherAI/gpt-neox-20b)
Footnote 2: [https://www.apache.org/licenses/LICENSE-2.0.html](https://www.apache.org/licenses/LICENSE-2.0.html)
#### 4.1.2 Flan-Ul2
_Flan-UL2_[26] is an encoder-decoder model3 based on the T5 architecture, trained with the mixture-of-denoisers objectives (diverse span corruption and prefix language modeling tasks). This model was further instruction fine-tuned using the Flan prompting collection [27]. The collection contains instructions for a diverse set of tasks (e.g., to summarize a text; to classify based on a list of options). The model is publicly available with an open source license (Apache 2.0).
Footnote 3: [https://hf.co/google/flan-ul2](https://hf.co/google/flan-ul2)
### Summarization Models
We also used task-specific summarization models for the creation of the legal summaries. In our experiments we found that - due to the lack of ground-truth summaries for our long legal documents - the task-specific summarization models created more coherent summaries compared to results from prompting the general generation models.
#### 4.2.1 Brio
The BRIO4 model [28] is an abstractive summarization model that achieves state-of-the-art results on the _CNN-DailyMail_ and _XSum_ datasets. It uses BART [29] as its base model and has a context window of 1024 tokens.
Footnote 4: [https://hf.co/Yale-LILY/brio-cnndm-uncased](https://hf.co/Yale-LILY/brio-cnndm-uncased)
#### 4.2.2 Primera
The PRIMERA5 model [30] is an abstractive summarization model that was trained on the _Multi-LexSum_ dataset at different granularities. We used the variant trained to produce a short summary from the full source document; it has a context window of 1024 tokens. The other options - besides different model architectures - are long and tiny summaries.
Footnote 5: [https://hf.co/allenai/primera-multi_lexsum-source-short](https://hf.co/allenai/primera-multi_lexsum-source-short)
### Semantic Similarity Search
We use semantic similarity search for the few-shot prompt building, where we retrieve semantically similar summaries from the training set for a target summary (either from the development or test set). For this purpose, our summaries were encoded using the _sentence-transformers_ library [31] and the _custom-legalbert6_ model [32]. Furthermore, we used the _annoy7_ library for the approximate nearest neighbor search (semantic similarity search).
Footnote 6: [https://hf.co/zlucia/custom-legalbert](https://hf.co/zlucia/custom-legalbert)
Footnote 7: [https://github.com/spotify/annoy](https://github.com/spotify/annoy)
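A minimal sketch of this retrieval setup follows; the model identifier and the number of trees are our assumptions, and `train_summaries`/`target_summary` are assumed to be lists of summary strings defined elsewhere.

```python
from sentence_transformers import SentenceTransformer
from annoy import AnnoyIndex

encoder = SentenceTransformer("zlucia/custom-legalbert")  # assumed model id
train_vecs = encoder.encode(train_summaries)  # one vector per training summary

index = AnnoyIndex(train_vecs.shape[1], "angular")
for i, vec in enumerate(train_vecs):
    index.add_item(i, vec)
index.build(50)  # 50 trees: an arbitrary choice for this sketch

# the eight semantically closest training summaries for one target summary
target_vec = encoder.encode([target_summary])[0]
neighbor_ids = index.get_nns_by_vector(target_vec, 8)
```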
## 5 Prompt Chaining
Prompt Chaining is a methodology employed to decompose complex tasks into smaller, manageable sub-tasks. A prompt chain typically comprises several prompts - either
task-specific or general-purpose, each serving a single purpose. The output of one prompt feeds into the next as an input. In our approach, we utilize pre-defined steps; however, it's worth noting that this methodology could be further optimized by incorporating a selection process for the next step or by introducing stopping criteria in the output generation, as exemplified in [9] and [33]. The steps for our prompt chaining process, specifically utilized for the classification of long legal documents, are depicted in Fig. 1. In the following sections, we will delve into each of the primary steps, namely summarization, few-shot prompt building, and final label generation.
### Summary Generation
The inaugural step in our prompt chaining approach is the generation of a succinct summary of the legal case text. As a majority of legal documents are lengthy, often exceeding the large context window of \(2048\) tokens provided by contemporary language models, we advocate the creation of summaries for chunks of the whole document. These chunks are crafted based on the model's context window. Sentences are sequentially stacked until the context limit of the respective models is reached (\(1024\) or \(2048\) tokens). Post the initial parsing of the full document, the summary generation process for chunks is iteratively continued until the desired summary length, in our case up to \(128\) tokens, is achieved. These summaries typically consist of a few sentences (up to 5) derived from our data.
Initial experimentation with direct prompting approaches on large language models resulted in variable outcomes. The templates used for prompting included:
* INPUT TEXT _In summary,_
* INPUT TEXT _TLDR:_
Since we do not possess ground-truth summaries for the documents, our assessments relied on manual inspection of a subset of the summaries generated (from the training set). The inspection indicated that the summaries were relatively generic and often omitted the core legal issues of interest.
Consequently, our investigation steered towards task-specific summarization models. Notably, the _BRIO_ model, being pre-trained and fine-tuned on news articles, generated more generic summaries. In contrast, the _PRIMERA_ model, fine-tuned specifically on legal documents, generated summaries where the core legal context was mostly preserved. This iterative summarization was uniformly applied across all documents using the same parameters.
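A minimal sketch of this iterative chunk-and-summarize loop, using the PRIMERA checkpoint from Sec. 4.2; the naive sentence splitting and loop structure are our simplifications of the procedure described above.

```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="allenai/primera-multi_lexsum-source-short")
tok = summarizer.tokenizer

def iterative_summary(text, ctx_limit=1024, target_len=128):
    """Stack sentences into chunks that fit the context window, summarize
    each chunk, and repeat on the concatenated summaries until the text
    fits within `target_len` tokens."""
    while len(tok.encode(text)) > target_len:
        chunks, current = [], ""
        for sent in text.split(". "):  # naive sentence split for the sketch
            if len(tok.encode(current + sent)) > ctx_limit and current:
                chunks.append(current)
                current = ""
            current += sent + ". "
        chunks.append(current)
        text = " ".join(out["summary_text"] for out in summarizer(chunks))
    return text
```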
### Semantic Similarity Search
The objective at this stage was to construct few-shot prompts for the subsequent label generation step. Thus, we embedded all our summaries with the models discussed in section 4.3 and calculated the semantically closest neighbors (up to eight) of each summary in the development and test sets, as compared to the training set. These summaries from the training set, along with their true labels and the current target sample (excluding its true label from the development and test sets), served as the few-shot prompts. This approach leverages the in-context learning abilities of large language models.
As illustrated by [34], an alternative approach could involve prompting a large language model to generate contextual texts based on an input, instead of retrieving from a corpus database as we have done. These generated samples could then be incorporated as in-context samples in the following step. This option can also be potentially implemented via LLMs prompting in our prompting pipeline. The evaluation of this approach's feasibility is left for future work.
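The resulting few-shot prompt can be assembled as plain text; the exact template below is our assumption, since the paper reports the composition but not a verbatim format.

```python
def build_few_shot_prompt(neighbors, target_summary, instruction, options):
    """neighbors: up to eight (summary, gold_label) pairs retrieved from the
    training set, ordered by semantic similarity to the target summary."""
    parts = [instruction, "Options: " + ", ".join(options)]
    for summary, label in neighbors:
        parts.append(f"Summary: {summary}\nLabel: {label}")
    parts.append(f"Summary: {target_summary}\nLabel:")  # model completes this
    return "\n\n".join(parts)
```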
### Label Generation
The final step in our prompt chain involves label generation. Here, we queried the LLMs with the few-shot prompts previously constructed, in conjunction with an instruction for the corresponding task and a list of potential labels to choose from. For the ECHR experiments, the binary options provided were _YES_ or _NO_, while for the SCOTUS experiments, up to 13 issue area labels were presented for model prediction. This prompt construction enabled the models to yield the desired label in all our experiments. Greedy decoding was used in this generation step for all results.
Another strategy employed involved sampling from the model output multiple times, a technique known as self-consistency via output sampling [35]. It has been demonstrated that querying multiple times enhances the probability of generating the true label. At the conclusion of each such sampling (up to 10 times), the majority count of the generated labels was taken as the final prediction.
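A minimal sketch of this majority-vote step; `generate` stands for any sampling call to the LLM and is our placeholder.

```python
from collections import Counter

def self_consistent_label(generate, prompt, n_samples=10):
    """Self-consistency [35]: sample several label generations and return
    the majority vote as the final prediction."""
    votes = [generate(prompt) for _ in range(n_samples)]  # sampled decoding
    return Counter(votes).most_common(1)[0][0]
```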
## 6 Experiments
Our experiments pursued two objectives. Firstly, we aimed to enhance the zero-shot outcomes from prior work on the binary classification task on the ECHR dataset, leveraging prompting techniques without any parameter adjustments to the models. Following the successful demonstration of the efficacy of the few-shot approach, we expanded our focus. The second experiment involved extending the process to the 13 labels in the SCOTUS corpus, a task significantly more challenging than the prior binary classification.
In addition to this, we compared our results to the zero-shot ChatGPT results reported in [36], which covered a
subset of the overall samples in the SCOTUS data. We selected the corresponding samples and provided our results for comparison.
Over multiple iterations, we developed the few-shot prompts on a randomly selected portion (n = 40) of the development sets for both the _ECHR_ and the _SCOTUS_ datasets. The final eight-shot prompt incorporated the eight semantically closest summaries from the training set to the corresponding target set, as well as the respective gold label from the training set. Notably for the _SCOTUS_ corpus, we limited the available issue area labels for the model, based on the labels of the included eight samples. Once we identified the most effective composition of the few-shot prompt on the random sample, we applied it to the full development and test sets, and reported the results.
Our computational requirements were limited to CPU resources, as we only performed inference calls on the generative models. Our work did not involve the use of any GPUs. Detailed computational information is available in the appendix (see App. A).
## 7 Results
In this section, we discuss our results on both benchmarks. We have included the confusion matrices for the development sets (see Fig. 2 for ECHR and Fig. 3 for SCOTUS), the labelwise F1-scores (see Tab. 1 and Tab. 4), and the overall results (see Tab. 2 and Tab. 3).
### ECHR Results
With respect to the ECHR results, we managed to improve upon the zero-shot results from previous work. However, the few-shot context still did not suffice to reach the performance of supervised trained models. It's important to remember that full supervised fine-tuning involves hours of update runs with thousands of annotated samples, while our experiments only included inference calls. The confusion matrix (Fig. 2) also demonstrates some misclassification along the off-diagonal axis, although the few-shot prompting did capture more of the minority (_NO_) class.
### SCOTUS Results
The results for the multi-class text classification task for _SCOTUS_ are presented in Tab. 2. Alongside our results on the development and test sets, we also included external (ext.) results from [3] and [36].
While we achieved satisfactory performance on the development set, we observed a substantial drop in performance on the test set. The test-set performance in terms of macro-F1 score was below the zero-shot ChatGPT results. However, our prompt chaining approach was more effective at predicting the higher-frequency classes, as reflected in the better micro-F1 score.

This trend is also evident in the label-wise scores (Tab. 4), where the higher-frequency classes received better scores than the minority classes. The confusion matrix (Fig. 3) for this experiment shows that many issue areas were predicted as _civil rights_, while _criminal procedure_, _judicial power_ and _federalism_ cases were also misclassified as other areas.
## 8 Conclusion

In this work, we demonstrated improvements upon the zero-shot results. We also showed that it is feasible to predict frequent labels with appreciable F1 scores using this approach.
The strategy of prompting, and as we have demonstrated, the concept of prompt chaining, represent promising avenues for future exploration. These techniques are particularly advantageous as they circumvent the need for costly data annotation and the development of custom models.
Last but not least, established prompting pipelines can be adapted for use with different (updated) models and, as shown in [23], they offer across-the-board enhancements for a diverse range of tasks, free of cost. Looking ahead, our future work aims to experiment with even larger models on additional legal benchmarks.
[Overall results table with columns Model, Precision, Recall, macro-F1, micro-F1, weighted-F1, and Accuracy; the table body was garbled in extraction and its values are not recoverable.]
|
2303.11121 | Towards Sustainable DevOps: A Decision Making Framework | In software industry, the DevOps is an increasingly adopting software
development paradigm. Towards the sustainable DevOps adoption, there is a need
to transform the organization Culture, Automation, Measurement and Sharing
(CAMS) aspects concerning to core theme of continues development and
operations. The software organizations face several complexities while
implementing the DevOps principles. The sustainable DevOps implementation
assist the software organizations to develop the quality projects with good
return on investment. This evidence-based study aims to explore the guidelines
of sustainable DevOps implementation, reported in literature and industry
practices. Using systematic literature review and questionnaire survey, we
identified the 48 guidelines for sustainable DevOps implementation. We further
develop a decision-making framework aiming to assist the practitioners to
consider the most significant set of guidelines on priority. The results show
that out of CAMS, culture is the most important principle for sustainable
DevOps implementation. Moreover, (i) enterprises should focus on building a
collaborative culture with shared goals, (ii) assess your organization
readiness to utilize a microservices architecture and (iii) educate executives
at your company about the benefits of DevOps to gain resource and budget
support are the highest priority guidelines for sustainable DevOps
implementation. We believe that this in-depth study helps the practitioners to
understand the core principles and guidelines for sustainable DevOps
implementation. | Muhammad Zohaib | 2023-03-20T14:01:27Z | http://arxiv.org/abs/2303.11121v1 | # Towards Sustainable DevOps: A Decision Making Framework
###### Abstract
In the software industry, DevOps is an increasingly adopted software development paradigm. Toward sustainable DevOps adoption, there is a need to transform the organization's Culture, Automation, Measurement and Sharing (CAMS) aspects concerning the core theme of continuous development and operations. Software organizations face several complexities while implementing the DevOps principles. Sustainable DevOps implementation assists software organizations in developing quality projects with a good return on investment. This evidence-based study aims to explore the guidelines for sustainable DevOps implementation reported in the literature and industry practices. Using a systematic literature review and a questionnaire survey, we identified 48 guidelines for sustainable DevOps implementation. We further develop a decision-making framework aiming to assist practitioners in considering the most significant set of guidelines on priority. The results show that, out of CAMS, culture is the most important principle for sustainable DevOps implementation. Moreover, (i) enterprises should focus on building a collaborative culture with shared goals, (ii) assess your organization's readiness to utilize a microservices architecture, and (iii) educate executives at your company about the benefits of DevOps to gain resource and budget support are the highest-priority guidelines for sustainable DevOps implementation. We believe that this in-depth study helps practitioners to understand the core principles and guidelines for sustainable DevOps implementation.
**Keywords**: DevOps, guidelines, CAMS, systematic literature review, prioritization
## 1 Introduction
The software industry is always looking for effective and flexible ways to develop quality software within limited time and cost. Recently, the DevOps paradigm has gained popularity in the software development process [1, 2]. DevOps provides a platform for both development and operations teams to work collaboratively to develop software products. DevOps facilitates cross-functional shared responsibilities and trust between development and operations teams [3]. DevOps substantially extends the continuous development goals of the agile movement by supporting automation of continuous integration and release processes [4, 5]. Leonardo et al. [6] defined DevOps as a cultural effort that automates an organization's infrastructure and the processing cycle of software development, guaranteeing the reliability of the software product. DevOps offers several benefits to software organizations, such as more focus on implementation and frequent releases. Moreover, DevOps also automates the build, testing and deployment processes [7]. Forsgren [7] stated that an automated development process helps reduce human effort and enables automated deployment according to schedule.
Likewise, it is emphasized that an automated development environment significantly contributes to the development and quality of software applications [8]. Sustainable DevOps execution allows software organizations to deliver frequent small releases, which helps improve the visualization of modules for the end user [9]. Small and frequent deployments enable the development teams to receive timely suggestions from the client, which helps improve the overall quality of a product [10]. In spite of the several benefits associated with sustainable DevOps, software practitioners face a number of challenges for the sustainability of DevOps practices, such as "fear of change", "conceptual deficit", "blame game", and "complex and dynamic environments" [11]. Similarly, Ramtin et al. [12] stated that the communication gap and heterogeneous environments are critical challenges for sustainable DevOps implementation in the software industry.
Despite the challenges associated with DevOps sustainability in the software industry, several well-established organizations such as Etsy, IBM, Netflix, and Flickr have successfully adopted it [13]. For example, at Flickr, effective communication and collaboration between development and operations practitioners have helped the organization decrease release time. The implementation of DevOps practices in different organizations has revealed that sustainable DevOps implementation enhances system quality and the delivery process [13, 14]. Erich et al. [13] pointed out that practices for sustainable DevOps are rapidly being adopted by software organizations with an aim to gain benefits from them. The importance of the sustainable DevOps paradigm in real-world practice motivated us to conduct a comprehensive systematic study to investigate and analyze the guidelines reported in the state of the art and in industry practice. The objectives of this study are: (1) to explore and verify the guidelines of the sustainable DevOps paradigm using a systematic literature review and a questionnaire survey; (2) to prioritize the investigated guidelines using the fuzzy-AHP approach; and (3) to develop a decision-making
framework based on the rankings of guidelines. To reach-out the study objectives, the developed research questions are as follow:
* What guidelines for sustainable DevOps implementation in software development organizations are reported in the literature and industry practices?
* How can the explored guidelines be prioritized using fuzzy-AHP?
* What would be the prioritization-based framework for sustainable DevOps guidelines?
The paper is organized as follows: the study background is reported in Section 2. The research methodologies used are discussed in Section 3. The results and analysis are presented in Section 4. A summary of the study findings is given in Section 5. Section 6 presents the threats to the validity of the study findings. The conclusions and future directions of the study are summarized in Section 7.
## 2 Background
Software organizations have shown interest in adopting software development approaches with reduced development and delivery cycles. The basic intention behind the adoption of new development approaches is the rapid change in customers' requirements and the consideration of requested changes in a positive manner. Agile development approaches have been adopted by the software industry to address the rapid-change concern in the software development life cycle [15]. The idea and success of continuous delivery led to a new software development strategy known as DevOps. DevOps is a software development methodology that focuses on collaboration between development and operations teams working in an environment where they can share goals, processes, and tools [4, 9, 16-18]. In the software industry, experts treat DevOps as a cultural movement that supports the development environment with effective communication, control, and responsibilities [15, 19]. Various studies have reported that collaboration, automation, and services are the key aspects of DevOps [9, 20].
Dyck et al. [21] mention that the revolution caused by DevOps significantly contributed to enhancing the level of trust among practitioners, which helps to transform and change the development environment in software organizations. Furthermore, Smeds et al. [18] highlighted that DevOps is not only a culture change but also helps to improve the development process. The literature also reports the limitations and importance of the DevOps paradigm [22-24]. According to Logica et al. [25], the main advantages of DevOps are product quality, services, and continuous bonding. Similarly, Gupta et al. [26] mention that DevOps supports trust building between Dev and Ops practitioners. Moreover, they explored and ranked the DevOps attributes that are critical for evaluating an organization's readiness for DevOps adoption. Furthermore, Gill et al. [27] expressed that DevOps contributes to building a bridge between Dev and Ops teams that overcomes the communication and coordination gap between practitioners. Anna et al. [28] argued that DevOps provides a roadmap for project management teams to support better performance, understandability, integration, and relationships among teams. However, strong collaboration, training, skills, and effective automation are needed to adopt DevOps practices in a practical way. Organizations adopting DevOps also face several critical challenges [27]. Gill et al. [27] highlighted the process- and procedure-related challenges, cultural conflicts, and problems in operational models.
The existing literature provides limited evidence-based research exploring the guidelines for DevOps sustainability in software organizations. Furthermore, no research has been done to analyze the sustainable DevOps guidelines using the fuzzy-AHP approach. This detailed empirical investigation and analysis will help teams to understand and develop methodologies for the sustainable implementation of DevOps in the software development industry.
## 3 Research design
The research design is divided into three steps: (1) systematic literature review (SLR), (2) questionnaire survey study, and (3) fuzzy-AHP. In the initial phase, the SLR was undertaken with the goal of identifying the guidelines for sustainable DevOps. Second, a questionnaire survey was conducted to obtain input from industry practitioners on the identified guidelines. The fuzzy-AHP was used in the third stage to rank the guidelines based on their importance for sustainable DevOps. Figure 1 depicts the research design in detail.
_3.1. Systematic Literature Review (SLR)_
An SLR was conducted to collect the literature most relevant to the research questions and objective of the study. An SLR is considered an effective approach to collecting and evaluating the potential literature related to a specific topic, and its procedural approach provides more valid and comprehensive results than an informal literature review [29]. In this study, we used the procedures defined by Kitchenham and Charters [29] to perform the SLR. Following their recommendations, the review process consists of three broad phases: "planning the review", "conducting the review", and "reporting the review". The detailed steps of the SLR approach are presented in Figure 1 and described in the following sections.
_3.1.1. Planning the review_
Planning refers to developing the protocols adopted to collect and analyze the data. The following review protocols were adopted to extract and analyze the literature to answer the proposed research questions.
**(i) Search sources**
_Data collection source:_
To collect the potential literature related to the research objective of the study, the selection of appropriate data sources is essential. Thus, the suggestions of Chen et al. [30] and Zheng et al. [31] were considered, and 8 digital databases were used to explore the data. Figure 2 presents the selected libraries along with the number of studies returned by the execution of the search string.
_Search string:_
An effective search string is essential to collect literature related to the study objective. To develop the search string, we used the key terms and their alternatives, collected from existing studies [1, 13, 25, 27, 32], following the guidelines of [31, 33]. The OR and AND operators were used to formulate the complete string, as presented below:
("guidelines" OR "practices" OR "motivators" OR "activities" OR "concerns" OR "techniques" OR "tools" OR "methods" OR "process" OR "evaluation") AND ("DevOps" OR "Development and Operation" OR "Continuous development and operation")
_Initial inclusion criteria:_
The protocols were created to determine whether to include the literature gathered during the literature extraction process. The inclusion protocols were adapted from existing studies, i.e., Inayat et al. [34] and Niazi et al. [35]: (1) the article should be published in a reputable journal, conference, or book chapter; (2) the article should discuss the obstacles that hinder DevOps implementation; (3) the findings of the study should be based on empirical data sets; (4) the article should clearly state why DevOps adoption is important; (5) the selected literature must be written in English.
_Initial exclusion criteria:_
We refined the protocols to exclude, at the outset, literature gathered from the databases. The exclusion criteria were adapted from previous research, i.e., Inayat et al. [34], Niazi et al. [35], and Akbar et al. [36]: (1) only the most complete study from a similar research endeavor was taken into account; (2) the article should include specific details on DevOps implementation; (3) studies unrelated to the research goal were excluded; (4) only full or regular papers were considered; (5) secondary studies (literature reviews) were not considered.
Figure 1: Study Research Design
_Study quality assessment (QA):_
The purpose of the QA was to determine the adequacy of the chosen studies for the study objective. The QA was carried out in accordance with Kitchenham and Charters's guidelines [29]. A five-question QA criterion (Table 1) was developed and evaluated using a Likert scale: if a study fully addresses a criterion, the assigned score is 1; 0.5 for a partial answer; and 0 if the study gives no information about the criterion. Several previous studies [34-36] have used similar criteria. Appendix-A contains the outcomes of the QA.
#### 3.1.2 Conducting the review
_Final study selection:_
Initially, 860 studies were retrieved in response to the search string executed on the selected databases. The collected literature was further refined by applying the phases of the tollgate approach developed by Afzal et al. [37]. The tollgate approach consists of five phases, and each phase was performed carefully with the aim of finally selecting the studies for data extraction.

Thus, as presented in Figure 2, a total of 71 studies were selected for the final data extraction process. All the finally selected studies were assessed with respect to their significance for addressing the research questions of the study. The selected studies were labelled as 'PS' to indicate their usage in the paper. The list of selected studies and their QA scores is given in Appendix-A.
_Data extraction and synthesis:_
Finally, the 71 studies (Figure 2) were thoroughly reviewed for data extraction corresponding to the research goal. Authors 1 and 2 continually participated in the data extraction, and the third and fourth authors validated the extracted data. Initially, the claims, primary themes, concepts, practices, and actions were extracted from the selected studies. The collected data was then synthesized into concise statements, resulting in the final 48 guidelines for DevOps implementation.
Since there may be bias in the study outcomes, the "inter-rater reliability test" was used to assess the mapping team's bias [37]. Three external specialists were requested to participate in the mapping process to help with this. They chose 12 studies at random and performed the data extraction process. We computed the non-parametric Kendall's coefficient of concordance (W) [38] based on the findings of the study authors and the external experts. A value of W=1 implies perfect agreement, while W=0 indicates complete disagreement. The results (W=0.84, p=0.003) suggest that the study authors and external experts are in significant agreement, which implies that the study's findings are consistent. The code used is available at this link: https://tinyurl.com/v5fct4ql
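To make the agreement check concrete, the following is a minimal sketch of computing Kendall's W for a rank matrix; the rank values below are hypothetical and only illustrate the computation, not the study's actual data.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W for an (m raters x n items) matrix of ranks."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                     # total rank each item received
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()   # sum of squared deviations
    return 12 * s / (m ** 2 * (n ** 3 - n))           # W in [0, 1]; 1 = perfect agreement

# Hypothetical example: 3 raters ranking 5 extracted guidelines.
ranks = np.array([[1, 2, 3, 4, 5],
                  [1, 3, 2, 4, 5],
                  [2, 1, 3, 4, 5]])
print(round(kendalls_w(ranks), 3))  # 0.889, i.e., strong agreement
```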
#### 3.1.3 Reporting the review
\begin{table}
\begin{tabular}{|c|l|} \hline Sr\# & Criteria \\ \hline QA1 & Is the research method used in the selected literature aligned with the RQs? \\ \hline QA2 & Does the collected literature report the DevOps guidelines? \\ \hline QA3 & Does the study elaborate the details of DevOps adoption? \\ \hline QA4 & Are the reported guidelines related to DevOps project management? \\ \hline QA5 & Do the study findings justify the research questions? \\ \hline \end{tabular}
\end{table}
Table 1: QA criteria
Figure 2: Final selected studies
_Quality of selected studies:_
The QA of the selected studies demonstrates how effectively the literature answers the study's research questions. According to the cumulative QA results, more than 70% of the studies receive a score of 70% or higher. Appendix-A contains the detailed QA results. We selected a 50 percent score as the cutoff point in this study.
_List of guidelines:_
During the data extraction process, the concepts, themes, and ideas reported in the selected literature were extracted, and by paraphrasing, we developed a list of 48 guidelines that are important for sustainable DevOps implementation in software development organizations.
### Empirical study
To collect the perceptions of industry experts, we used the questionnaire survey approach; the steps used are discussed in the subsequent sections:
#### 3.2.1 Questionnaire development
An online survey questionnaire was created using the Google Forms platform (docs.google.com/forms) to verify the SLR findings. The questionnaire is divided into four sections: the first section contains questions about the survey participants' bibliographic information; the second section contains questions about the participants' organizational information; the third section is closed-ended and contains the SLR-identified list of DevOps guidelines; and the fourth section is open-ended, allowing survey respondents to add any extra guidelines not included in the closed-ended section. The participants' feedback was received on a five-point Likert scale, from strongly agree to strongly disagree. The Likert scale also includes a neutral option, which helps practitioners to give unbiased feedback, as without a neutral option respondents are forced to choose the agree or disagree side [39].
#### 3.2.2 Pilot testing
A pilot assessment of the questionnaire was conducted to check and improve the understandability of the questionnaire questions [40-42]. To do this, the developed questionnaire was sent to three experts: one from academia (Chongqing University, China) and two from industry practice ("Virtual Force, Pakistan" and "QSoft, Vietnam"). The experts recommended several changes to the questionnaire layout and to the questions collecting bibliographic data from survey respondents. They also recommended arranging the questions in a table format. We updated the questionnaire according to the experts' opinions, and the updated version was used for the data collection process. A sample of the questionnaire used is given in Appendix-B.
#### 3.2.3 Data sources
The data sources are crucial in determining the target population. The appropriate community must be targeted, as this is required for the collection of accurate data. The goal of this survey study was to gain expert opinion on the sustainable DevOps guidelines discovered through the SLR. We targeted the population using professional email addresses, ResearchGate, and LinkedIn. The survey questionnaire was distributed to the intended geographically dispersed group using the snowballing technique [40-42].
The data collection process was carried out from December 2020 to May 2021. A total of 102 replies were collected during the data gathering process. All the responses were manually verified, and nine were found to be incomplete. After consulting with the study team, we opted not to include the incomplete responses in the data analysis process. The remaining 93 complete replies were used in the data analysis procedure. The bibliographic information is presented in section-4.2.
#### 3.2.4 Survey data analysis
A frequency data analysis approach was adopted to analyze the collected responses, as it is considered an effective way to compare the respondents' opinions between variables and across groups of variables [43]. The same approach has been adopted in existing studies [44-46].
### Phase 3: Fuzzy Set Theory and AHP
#### 3.3.1 Fuzzy set theory
Fuzzy set theory is an extended version of classical set theory, initially proposed by Zadeh [47]. It is oriented towards handling the vagueness and uncertainties of real-world practice in multicriteria decision-making problems.

The basic intent of fuzzy set theory is to represent vague data, where a membership function \(\mu_F(x)\) maps objects to values between 0 and 1. The protocols of fuzzy set theory, along with a definition, are presented in the subsequent sections:
Definition: "A triangular fuzzy number (TFN) F is denoted by a set" (vl, vm, vu), as shown in Figure 4 and the membership-function donated as uF(x) of F.
\[\mu_{F}(x)=\begin{cases}\dfrac{x-v^{l}}{v^{m}-v^{l}},&v^{l}\leq x\leq v^{m}\\ \dfrac{v^{u}-x}{v^{u}-v^{m}},&v^{m}\leq x\leq v^{u}\\ 0,&\text{otherwise}\end{cases} \tag{1}\]
where \(v^{l}\), \(v^{m}\), and \(v^{u}\) denote the lowest possible, most promising (modal), and highest possible crisp values, respectively.
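As an illustration, the following is a minimal Python sketch of the membership function in Eq. (1); the TFN values in the example are hypothetical.

```python
def tfn_membership(x: float, vl: float, vm: float, vu: float) -> float:
    """Membership degree of x in the triangular fuzzy number (vl, vm, vu), per Eq. (1)."""
    if vl <= x <= vm:
        return (x - vl) / (vm - vl)   # rising edge
    if vm < x <= vu:
        return (vu - x) / (vu - vm)   # falling edge
    return 0.0                        # outside the support

print(tfn_membership(1.2, 1.0, 1.5, 2.0))  # 0.4
```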
#### 3.3.2 Fuzzy analytical hierarchy process (FAHP)
FAHP is one of the most effective and powerful approaches used to solve multicriteria decision-making problems. The key benefits of FAHP are the relative ease with which it manages multiple criteria, its understandability, and its ability to efficiently handle both qualitative and quantitative data. The primary steps of the FAHP approach are as follows:
**Step1:**: Develop a hierarchy structure of the decision-making problem (as given Figure 5)
**Step2:**: Use pairwise comparison and calculate the weights.
**Step3:**: Apply the consistency check.
**Step4:**: Determine the priority order of each guideline.
However, while conventional AHP has numerous advantages [48-50], it also faces some core limitations: it is based on a "crisp environment", its judgmental scale is unbalanced, and it cannot handle the uncertainty inherent in human judgments.
Figure 4: Triangular fuzzy number
Figure 5: FAHP decision hierarchy
Because of these limitations, the selection of judgments is subjective. The FAHP was developed to address these limitations of AHP and to obtain results more effectively and accurately [51]. The FAHP deals with uncertainties and the imprecise judgments of different experts by handling linguistic variables. The FAHP approach has been applied in different contexts [51-54]. To address the uncertainties and vagueness, we have used the FAHP suggested by Chang [55], which provides more appropriate and consistent results compared with other FAHP approaches.
In a prioritization problem, let X = {\(v_{1}\), \(v_{2}\), ..., \(v_{n}\)} signify the elements of the main categories as an object set and U = {\(t_{1}\), \(t_{2}\), ..., \(t_{n}\)} present the values of a particular category as a goal set. Following Chang's [55] approach, every object is measured and an extent analysis for each objective (\(g_{i}\)) is performed. Equations (2) and (3) are used to generate m extent analysis values for each object:
\[V^{1}_{g_{i}},V^{2}_{g_{i}},\ldots,V^{m}_{g_{i}}, \tag{2}\]
\[i=1,2,\ldots,n \tag{3}\]
where \(V^{j}_{g_{i}}\) (j = 1, 2, ..., m) are triangular fuzzy numbers (TFNs). Chang's extent analysis [55] is performed in the following steps:
**Step 1:** The element of fuzzy synthetic extent (S\({}_{i}\)) for the ith object using Eq. (4):
\[S_{i}=\sum_{j=1}^{m}V^{j}_{g_{i}}\otimes\left[\sum_{i=1}^{n}\sum_{j=1}^{m}V^{j}_{g_{i}}\right]^{-1} \tag{4}\]
To obtain \(\sum_{j=1}^{m}V^{j}_{g_{i}}\), perform the fuzzy addition operation of the m extent analysis values using Eq. (5):

\[\sum_{j=1}^{m}V^{j}_{g_{i}}=\left(\sum_{j=1}^{m}v^{l}_{j},\sum_{j=1}^{m}v^{m}_{j},\sum_{j=1}^{m}v^{u}_{j}\right) \tag{5}\]

and perform the fuzzy addition operation over the \(V^{j}_{g_{i}}\) (\(j=1,2,\ldots,m\)) values as follows using Eq. (6):

\[\sum_{i=1}^{n}\sum_{j=1}^{m}V^{j}_{g_{i}}=\left(\sum_{i=1}^{n}v^{l}_{i},\sum_{i=1}^{n}v^{m}_{i},\sum_{i=1}^{n}v^{u}_{i}\right) \tag{6}\]
Finally, the inverse of the total vector is determined using Eq. (7):

\[\left[\sum_{i=1}^{n}\sum_{j=1}^{m}V^{j}_{g_{i}}\right]^{-1}=\left(\frac{1}{\sum_{i=1}^{n}v^{u}_{i}},\frac{1}{\sum_{i=1}^{n}v^{m}_{i}},\frac{1}{\sum_{i=1}^{n}v^{l}_{i}}\right) \tag{7}\]
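The following sketch implements Eqs. (4)-(7), assuming each pairwise judgment is a TFN stored as an (l, m, u) tuple; the function name is ours, not part of Chang's notation.

```python
def synthetic_extents(matrix):
    """Fuzzy synthetic extent S_i (Eq. 4) for each row of a fuzzy pairwise matrix."""
    # Eq. (5): fuzzy addition along each row.
    row_sums = [tuple(sum(t[k] for t in row) for k in range(3)) for row in matrix]
    # Eq. (6): fuzzy addition over all rows.
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))
    # Eq. (7): inverse of the total, with lower and upper bounds swapped.
    inv = (1 / total[2], 1 / total[1], 1 / total[0])
    # Eq. (4): multiply each row sum by the inverse, component-wise.
    return [(r[0] * inv[0], r[1] * inv[1], r[2] * inv[2]) for r in row_sums]
```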
**Step 2:** Given two triangular fuzzy numbers \(V_{a}=(v^{l}_{a},v^{m}_{a},v^{u}_{a})\) and \(V_{b}=(v^{l}_{b},v^{m}_{b},v^{u}_{b})\), the degree of possibility of \(V_{a}\geq V_{b}\) is defined and calculated as follows using Eq. (8) and Eq. (9):

\[V(V_{a}\geq V_{b})=hgt(V_{a}\cap V_{b})=\mu_{V_{a}}(d) \tag{8}\]

\[\mu_{V_{a}}(d)=\begin{cases}1,&\text{if }v^{m}_{a}\geq v^{m}_{b}\\ 0,&\text{if }v^{l}_{b}\geq v^{u}_{a}\\ \dfrac{v^{l}_{b}-v^{u}_{a}}{(v^{m}_{a}-v^{u}_{a})-(v^{m}_{b}-v^{l}_{b})},&\text{otherwise}\end{cases} \tag{9}\]
where d is the ordinate of the highest intersection point between \(\mu_{V_{a}}\) and \(\mu_{V_{b}}\) (Figure 6). Both values \(V(V_{a}\geq V_{b})\) and \(V(V_{b}\geq V_{a})\) are required to compare \(V_{a}\) and \(V_{b}\).
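A minimal sketch of the degree of possibility in Eqs. (8)-(9), again with TFNs as (l, m, u) tuples:

```python
def possibility(a, b):
    """Degree of possibility V(a >= b) for TFNs a and b, per Eqs. (8)-(9)."""
    if a[1] >= b[1]:          # modal value of a dominates
        return 1.0
    if b[0] >= a[2]:          # supports do not overlap
        return 0.0
    return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

print(possibility((1, 2, 3), (2, 3, 4)))  # 0.5
```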
**Step 3:** The degree of possibility for a convex fuzzy number to be greater than k convex fuzzy numbers \(V_{i}\) (\(i=1,2,\ldots,k\)) is calculated as follows using Eq. (10) and Eq. (11):
\[V(V\geq V_{1},V_{2},V_{3},\ldots,V_{k})=\min V(V\geq V_{i}) \tag{10}\]
Assuming that,
\[d^{\prime}(V_{i})=\min V(V_{i}\geq V_{k}) \tag{11}\]
for k = 1,2,...,\(n\); \(k\neq i\).
With the help of Eq. 12, calculate the weight vector.
\[W^{\prime}=(d^{\prime}(V_{1}),d^{\prime}(V_{2}),d^{\prime}(V_{3}),\ldots,d^{\prime}(V_{n})) \tag{12}\]
Where, \(V_{i}\) (\(i=1,2,...,n\)) are \(n\) distinct elements.
**Step 4:** The normalized weight vector is calculated using Equation 13; the result is a vector of non-fuzzy numbers (obtained via defuzzification) representing the priority weights of the criteria:
\[W=(d(V_{1}),d(V_{2}),d(V_{3}),\ldots,d(V_{n})) \tag{13}\]
where \(W\) is a crisp (non-fuzzy) weight vector.
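Putting Steps 3 and 4 together, here is a minimal sketch that derives the normalized weight vector from the synthetic extents, reusing the possibility function sketched above:

```python
def fuzzy_ahp_weights(extents):
    """d'(V_i) = min over k != i of V(V_i >= V_k) (Eqs. 10-11), then normalize (Eqs. 12-13)."""
    n = len(extents)
    d_prime = [min(possibility(extents[i], extents[k]) for k in range(n) if k != i)
               for i in range(n)]
    total = sum(d_prime)
    return [d / total for d in d_prime]   # crisp priority weights summing to 1
```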
**Step 5: Checking consistency ratio**:
In fuzzy AHP [56], the pairwise matrices should always be consistent. As a result, the consistency ratio of each pairwise comparison matrix must be checked [57, 58]. The graded mean integration technique is used to defuzzify the matrix. A triangular fuzzy number P = (l, m, u) can be defuzzified to a crisp number as follows:
\[P_{crisp}= \frac{(4m+l+u)}{6} \tag{14}\]
After each value in the matrix has been defuzzified, the consistency ratio (CR) of the matrix can be calculated and tested to see whether it is less than 0.10. Two key parameters are employed for this, namely the consistency index (CI) and the consistency ratio (CR), which are defined using Equations 15 and 16, respectively.
\[CI=\frac{\lambda_{\max}-n}{n-1} \tag{15}\]
Figure 6: Triangular Fuzzy number
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Size of the matrix & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline Random index (RI) & 0 & 0 & 0.58 & 0.9 & 1.12 & 1.24 & 1.32 & 1.41 & 1.45 & 1.49 & 1.51 & 1.48 & 1.56 & 1.57 & 1.59 \\ \hline \end{tabular}
\end{table}
Table 3: RI against each matrix size
\[CR=\frac{CI}{RI} \tag{16}\]
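The whole consistency check of Step 5 can be sketched as follows; the RI values come from Table 3, and the λmax estimate follows the column-sum procedure used later in Section 4 (Eq. 18).

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41,
      9: 1.45, 10: 1.49, 11: 1.51, 12: 1.48, 13: 1.56, 14: 1.57, 15: 1.59}

def consistency_ratio(fuzzy_matrix):
    """Defuzzify each TFN with Eq. (14), then compute CI (Eq. 15) and CR (Eq. 16)."""
    crisp = np.array([[(l + 4 * m + u) / 6 for (l, m, u) in row] for row in fuzzy_matrix])
    n = crisp.shape[0]
    weights = (crisp / crisp.sum(axis=0)).mean(axis=1)  # column-normalized row averages
    lam_max = float(crisp.sum(axis=0) @ weights)        # Eq. (18)-style lambda_max
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]                                   # accept the matrix if CR < 0.10
```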
## 4 Results and analysis
By carefully applying the literature collection and data extraction process, a total of 48 guidelines reported in the literature were identified. Bajpai [59] emphasized that for sustainable DevOps implementation, software development organizations should focus on the core principles of DevOps, i.e., culture, automation, measurement and sharing (CAMS). The CAMS framework was originally introduced by Kim [60] to define the core pillars of DevOps.
_Culture:_ Adopting a DevOps culture entails converting into a company that prioritizes providing high-quality service. Service means not just making a product available to clients, but also ensuring that the product is in excellent working order and consistently meets their demands. To deliver better service, teams should understand how customers will receive and use the product. This enables the teams to provide ongoing assistance to clients even after the product has been delivered. Developing a DevOps mindset means that DevOps teams are motivated by certain traits to focus on providing high-quality service to clients: an environment that encourages continuous learning, experimentation, a product-oriented attitude, an engineering culture, and a quality-oriented focus.
_Automation:_ Routine tasks are automated, resulting in a shift in which members of the software delivery team primarily execute jobs with high variability and analyzability, in other words, engineering tasks. This necessitates a company with an organic structure and engineering teams that are mainly independent, multidisciplinary, and self-organized. To add value, organizations must employ DevOps principles. Automated processes are repeatable, and process execution costs are kept to a minimum; in other words, automation produces predictable and standardized results. These ideas can be used to improve software delivery processes.
_Measurement:_ Continuously providing value and improvement requires a systematic measurement process. The process can be improved by keeping track of key parameters and providing feedback. As part of any DevOps implementation, measurement and monitoring must be integrated into day-to-day procedures. Callanan and Spillane [17] highlight the importance of bidirectional feedback loops for obtaining feedback from the development and operations teams.
_Sharing:_ DevOps is the collaboration of development and operations, and sharing is fundamental to this. DevOps requires ongoing collaboration and, most crucially, knowledge exchange. Every team in an organization has experienced failures in terms of people, procedures, tools, projects, and so on. To support successful DevOps deployment, this knowledge should be shared among team members within the organization.
We mapped the investigated guidelines onto the CAMS principles, as they cover all the important aspects of DevOps [61]. The coding scheme [62] was used to perform the mapping process. The mapping team consists of three authors of this study (Authors 1, 3 and 4) and two experts invited from real-world industry practice. The mapping team labelled and grouped the guidelines into their most related categories. The mapping results are given in Table 4.
We conducted an "inter-rater reliability test" between the mapping team and two external experts to measure the researchers bias (invited from empirical software engineering labs). External experts were tasked with categorizing the collection of rules into the most closely related categories. We calculated the "non-parametric Kendall's coefficient of concordance" (W)[13] using the results of both the mapping team and external experts to check the inter-rater agreement between the researchers and independent experts. W=1 indicates complete agreement, whereas W=0 indicates complete disagreement. The results (W=0.89, p=0.005) demonstrated that researchers' and independent experts' mapping processes are similar. This indicates that the researcher and independent specialists have reached an agreement on the mapping method. As a result, the mapping is unbiased and
\begin{table}
\begin{tabular}{c|c|c|c} \hline
**Categories** & **Srf** & **Guidelines** & **IDs of selected SLR studies** \\ \hline Measurement & G1 & Organizations start DevOps practices with & PS1, PS3, PS13, PS 16, PS19, PS 21, PS29, PS33, \\ & small projects & PS37, PS41, PS546, PS52, PS55, PS58, PS64, PS70 \\ \cline{2-4} & G2 & Include modeling for legacy infrastructure and & PS6, PS15, PS 18, PS25, PS 31, PS39, PS43, PS54, \\ & applications in your DevOps plans & PS67 \\ \cline{2-4} & G3 & Consider application architecture changes based & PS9, PS17, PS27, PS 36, PS44, PS 55, PS70 \\ & n on-premises, cloud, and containers early on in the & \\ \hline \end{tabular}
\end{table}
Table 4: List of identified guidelines and their mapping in CAMS
\begin{tabular}{|c|c|c|} \hline & process & & \\ \hline G4 & Avoid fragmented toolset adoption, & PS2, PS5, PS10, PS 12, PS17, PS 24, PS28, PS34, \\ & which can add to your costs & PS38, PS44, PS54, PS50, PS54, PS57, PS61, PS68 \\ \hline G5 & Effective and comprehensive measurement & PS5, PS10, PS 14, PS18, PS 25, PS27, PS35, PS39, \\ & and monitoring & PS43, PS46, PS50, PS55, PS61, PS64, PS66 \\ \hline G6 & Decide which processes and tests to automate & PS1, PS7, PS 17, PS26, PS 31, PS43, PS44, PS56, \\ & first & PS61, PS70 \\ \hline G7 & Monitor the Application's Performance & PS4, PS7, PS11, PS20, PS 22, PS28, PS34, PS38, \\ & & PS42, PS45, PS49, PS53, PS54, PS56, PS59, PS66, \\ & & PS70, PS71 \\ \hline G8 & Integrated Configuration Management & PS11, PS22, PS31, PS 42, PS53, PS59, PS69 \\ \hline G9 & Emphasize Quality Assurance Early & PS6, PS9, PS 15, PS23, PS 29, PS31, PS37, PS42, \\ & & PS45, PS54, PS60, PS63, PS67 \\ \hline G10 & Active Stakeholder Participation & PS5, PS13, PS 19, PS27, PS 30, PS36, PS43, PS49, \\ & & PS55 & PS55, \\ \hline G11 & Use tools to capture every request & PS8, PS18, PS 25, PS33, PS 45, PS52, PS60, PS61 \\ \hline Automation & G12 & Decide which processes and tests to automate & PS4, PS8, PS 12, PS16, PS 26, PS30, PS36, PS38, \\ & first & PS40, PS45, PS52, PS56, PS62, PS68 \\ \hline G13 & Continuous integration and testing & PS4, PS5, PS5, PS 19, PS30, PS 41, PS47, PS52, PS65, \\ & & PS70 \\ \hline G14 & Implement tracking and version control tools & PS 9, PS14, PS 24, PS28, PS36, PS37, PS42, PS51, \\ & & PS59, PS62, PS66 \\ \hline G15 & Have a centralized unit for DevOps & PS5, PS8, PS13, PS 15, PS23, PS 24, PS30, PS35, \\ & PS37, PS40, PS44, PS50, PS57, PS64 \\ \hline G16 & Reduce handoffs & PS3, PS59, PS14, PS 16, PS21, PS 23, PS29, PS36, \\ & PS37, PS41, PS46, PS48, PS52, PS57, PS68, PS69 \\ \hline G17 & Implement Automation in Dashboards & PS2, PS11, PS14, PS30, PS 38, PS45, PS54, PS64, \\ & & PS66 \\ \hline G18 & Use the right and advanced tools & PS7, PS15, PS16, PS 22, PS 31, PS34, PS39, \\ & PS42, PS47, PS55, PS61, PS69 \\ \hline G19 & Use tools to capture every request & PS9, PS17, PS 23, PS35, PS 41, PS47, PS53, PS64 \\ \hline G20 & Use tools to log metrics on both manual and & PS7, PS8, PS 13, PS24, PS 27, PS23, PS39, PS43, \\ & automated processes & PS46, PS51, PS60 \\ \hline G21 & Provisioning and change management & PS3, PS6, PS15, PS 17, PS19, PS 25 25, PS29, PS35, \\ & PS39, PS45, PS47, PS51, PS55, PS56, PS62, PS67, \\ & PS69, PS70 \\ \hline G22 & Build Up the Rest of Your CICD Pipeline & PS6, PS11, PS 18, PS 24, PS28, PS 34, PS35, PS39, \\ & PS45, PS52, PS57, PS70 \\ \hline G23 & Take a ’security first approach’ & PS11, PS19, PS 26, PS34, PS 42, PS51, PS59 \\ \hline G24 & Use on-demand testing environments & PS3, PS10, PS 17, PS20, PS 25, PS32, PS35, PS41, \\ & & PS46, PS50, PS50, PS58, PS60, PS62 \\ \hline G25 & Develop automated continues deployment & PS2, PS7, PS98, PS 11, PS13, PS 16, PS23, PS26, \\ & environment & PS29, PS34, PS37, PS41, PS45, PS52, PS58, PS64, \\ & PS68, PS70 \\ \hline G26 & Standardize and automate complex DevOps & PS6, PS11, PS 15, PS 15, PS22, PS 26, PS31, PS38, PS42, \\ &vironments with cloud sandboxes and other tools & PS44, PS49, PS51, PS53, PS56,PS61, PS65 \\ \hline Sharing & G27 & Ensure continuous feedback between the teams to & PS8, PS13, PS15, PS 23, PS26, PS 29, PS35, PS38, \\ & spot gaps, issues, and inefficiencies & PS45, PS49, PS56, PS69 \\ \hline G28 & Communications and collaboration planning & 
PS4, PS4, PS8, PS 12, PS15, PS 19, PS22, PS28, \\ & PS29, PS31, PS35, PS39, PS43, PS54, PS59, PS65, \\ & & PS69, PS70, PS71 \\ \hline G29 & Continuous practice and planning to avoid & PS13, PS17, PS 25, PS36, PS 45, PS53, PS62 \\ & resistance & PS5, PS20, PS31, PS37, PS44, PS48, PS52, PS54,, \\ \hline G30 & Create real-time project visibility & PS5, PS 20, PS31, PS37, PS44, PS48, PS52, PS54,, \\ & PS66 \\ \hline G31 & Increase flow of communication by reducing & PS1, PS7, PS 16, PS21, PS 27, PS33, PS35, PS43, \\ & batch size & PS48, PS56, PS67 \\ \hline G32 & Building trust and share values and goals for & PS3, PS4, PS10, PS 15, PS 15, PS16, PS 20, PS24, PS27, \\ & effective channel & PS30, PS32, PS36, PS38, PS45, PS46, PS52, PS54, \\ & PS62, PS65, PS66, PS68 & PS68 \\ \hline G33 & Enterprises should standardized processes and & PS9, PS11, PS 17, PS22, PS 25, PS30, PS34, PS41, \\ & establish common operational procedures & PS47, PS49, PS51 \\ \hline G34 & Create a clear plan that includes milestones, & PS4, PS8, PS 14, PS22, PS 25, PS30, PS34, PS41, \\ & project owners, and well-defined deliverables & PS47, PS49, PS51, PS55, PS56, PS63 \\ \hline G35 & Teams need training on DevOps & PS5, PS12, PS 21, PS32, PS35, PS42, PS45, PS51, \\ & PS55, PS61, PS71 \\ \hline G36 & Shared code of conduct, a formal roles & PS1, PS9, PS 18, PS24, PS32, PS40, PS46, PS52, \\ & segment, and clear and simple processes may help & PS56, PS60, PS62, PS69, PS71 \\ & in understanding responsibilities & \\ \hline Culture & G37 & Exercise Patience & PS5, PS7, PS12, PS 13, PS17, PS 21, PS23, PS26, \\ & PS31, PS34, PS35, PS40, PS43, PS47, PS47, PS50, PS53, \\ & PS58, PS64 & PS58, PS64, PS49, PS47, PS47, PS50, PS53, \\ \hline G38 & Educate executives at your company about the & PS7, PS16, PS 25, PS33, PS39, PS46, PS49, PS54, \\ \hline \end{tabular}
### Results of Empirical investigations
#### 4.1.1 Respondents bibliographic information
The important aspects of the bibliographic data of the survey participants were analyzed to check the authenticity and generalizability of the collected data. The detailed bibliographic data of the survey participants is given in Appendix-C.
_Respondent's country affiliation_
According to the analyzed bibliographic data, the survey participants are from 20 different countries. We noted that most respondents are from Asian countries. Besides, a good mix of survey participants from across the globe was observed (Figure 7), which is a positive sign for the generalization of the study results. Based on the respondents' country affiliations, we are confident that the results of our study can be considered by the software industry of any country.
_Respondent's organization size_
We further analyzed the bibliographic data to check the organization size of the survey participants. The results presented in Figure 8 show that 26 (22%) respondents belong to small organizations, 49 (42%) belong to medium organizations, and 41 (35%) are from large-scale organizations. The results show that there is a significant proportion of survey participants from each organization size. Hence, it is concluded that the results of this study are useful for organizations of every size.
Figure 7: Respondent’s affiliation countries
_Respondents working experience_
The bibliographic data was also investigated to check the experience of the survey respondents. Figure 9 depicts the survey respondents' experience, which ranges from two to twenty years. The statistics (6 and 5.5 years) indicate a relatively young group of respondents. There is a good mix of survey participants with varied levels of experience in software development operations.
_Respondent's designations_
Finstad et al. [38] mention that responses vary with respect to the designation of participants. Niazi et al. [31] reported that a construct can only be measured appropriately if the participants deal with it frequently. The analyzed results show that most of the survey participants are either project managers or software developers. The detailed results are shown in Figure 10.
Figure 8: Respondents organization size
Figure 9: Experience of survey respondents
#### 4.1.2 Respondents feedback
The aim of this questionnaire survey was to collect expert input and perceptions on the selected guidelines and their related principles. A total of 116 full responses were considered for further analysis during the data gathering procedure. Responses were categorized as positive (strongly agree, agree), negative (strongly disagree, disagree), and neutral (Table 5). The positive category findings indicate the views of those participants who agree with the guidelines identified by the SLR and their categories. The negative comments reflect the opinions of those who disagree with the established guidelines in the various categories. The results of the neutral category show the responses of those participants who have no clear opinion regarding the impact of the mentioned factors.
The results of the empirical study presented in Table 5 show that most of the survey participants agree that the reported guidelines could positively influence sustainable DevOps implementation in software organizations. It is observed that G41 (enterprises should focus on building a collaborative culture with shared goals, 91%) is reported as the most important guideline by the survey participants. We further noted that G9 (Emphasize Quality Assurance Early, 88%) and G40 (Keep All Teams on the Same Page, 88%) are the second most highly considered guidelines by the survey respondents.
Moreover, it is noted that C4 (Culture, 93%) is the most important category of the investigated guidelines as considered by the survey participants. C3 (Sharing, 88%) and C1 (Measurement, 84%) are the second and third most highly regarded principles of the guidelines as considered by the survey participants.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{} & \multicolumn{6}{c|}{**Number of responses =116**} \\ \cline{2-9} & \multicolumn{3}{c|}{**Positive**} & \multicolumn{3}{c|}{**Negative**} & \multicolumn{3}{c|}{**Neutral**} \\ \cline{2-9}
**S. No.** & **S. A** & **A** & **\%** & **D** & **S. D** & **\%** & **F** & **\%** \\ \hline C1 & 40 & 57 & 84 & 2 & 6 & 7 & 11 & 9 \\ \hline G1 & 40 & 56 & 83 & 4 & 1 & 4 & 15 & 13 \\ \hline G2 & 51 & 42 & 80 & 5 & 5 & 9 & 13 & 11 \\ \hline G3 & 46 & 39 & 73 & 9 & 3 & 10 & 19 & 16 \\ \hline G4 & 40 & 57 & 84 & 2 & 6 & 7 & 11 & 9 \\ \hline G5 & 37 & 51 & 76 & 6 & 4 & 9 & 18 & 16 \\ \hline G6 & 49 & 34 & 72 & 7 & 6 & 11 & 20 & 17 \\ \hline G7 & 37 & 48 & 73 & 6 & 7 & 11 & 18 & 16 \\ \hline G8 & 31 & 61 & 79 & 3 & 6 & 8 & 15 & 13 \\ \hline G9 & 58 & 44 & 88 & 0 & 3 & 3 & 11 & 9 \\ \hline G10 & 41 & 47 & 76 & 7 & 6 & 11 & 15 & 13 \\ \hline G11 & 30 & 64 & 81 & 2 & 6 & 7 & 14 & 12 \\ \hline C2 & 41 & 54 & 82 & 3 & 6 & 8 & 12 & 10 \\ \hline G12 & 39 & 46 & 73 & 8 & 7 & 13 & 16 & 14 \\ \hline G13 & 39 & 48 & 75 & 6 & 8 & 12 & 15 & 13 \\ \hline G14 & 30 & 51 & 70 & 10 & 5 & 13 & 20 & 17 \\ \hline G15 & 39 & 44 & 72 & 14 & 7 & 18 & 12 & 10 \\ \hline G16 & 33 & 40 & 63 & 16 & 5 & 18 & 22 & 19 \\ \hline G17 & 42 & 53 & 82 & 6 & 2 & 7 & 13 & 11 \\ \hline G18 & 39 & 56 & 82 & 8 & 3 & 9 & 10 & 9 \\ \hline G19 & 51 & 47 & 84 & 2 & 3 & 4 & 13 & 11 \\ \hline G20 & 45 & 48 & 80 & 4 & 4 & 7 & 15 & 13 \\ \hline G21 & 33 & 56 & 77 & 8 & 4 & 10 & 15 & 13 \\ \hline \end{tabular}
\end{table}
Table 5: Results of a questionnaire survey study
Figure 10: Designation of survey participants
### Application of Fuzzy-AHP
The fuzzy-AHP method was used to rank the identified guidelines in terms of their importance in ensuring the sustainability of DevOps adoption in the software industry. The phases of the fuzzy-AHP technique are outlined in the sections below.
**Step-1 (Develop a hierarchy structure of reported guidelines and their categories)**
To apply the fuzzy-AHP, the decision-making problem is arranged in a hierarchy structure (as presented in Figure 5). The proposed hierarchy structure (Figure 11) was developed by considering the investigated guidelines and their core principles. The main objective of the study is placed at the first level (i.e., prioritization of sustainable DevOps guidelines), and the categories and their corresponding guidelines are given at level 2 and level 3, respectively. The proposed hierarchy structure is presented in Figure 11.
Figure 11: Proposed hierarchy structure
**Step-2 (Conducting the pairwise comparison)**
The motive of this study is to rank the identified guidelines in terms of their importance for long-term DevOps implementation in software development organizations. We created a questionnaire and contacted respondents from the initial survey to undertake the pairwise comparison (for the fuzzy-AHP analysis). The survey participants provided a total of 29 replies. All the responses were manually examined to ensure that no data was missing, and all 29 replies were judged to be complete. Appendix-C includes the instrument used for the pairwise comparison (second survey).
With the application of fuzzy-AHP analysis, a small sample size can be a problem. However, several other studies [41-44] have used datasets of similar size for AHP analysis. Based on the opinions of five experts, Shameem et al. [33] prioritized the factors influencing agile software development. Cheng and Li [43] prioritized the success criteria of construction collaboration based on information gathered from nine experts. Lam and Zhao [44] collected feedback from eight experts to prioritize the factors influencing teaching quality. Furthermore, Wong and Li [63] used an AHP analysis to select an intelligent building system, considering the responses of nine experts. We therefore conducted the fuzzy-AHP analysis using data from 29 experts, which is a sufficient sample size for generalizing the findings of this study.
To analyze the pairwise comparisons of the DevOps guidelines and their respective categories, the data acquired via the fuzzy-AHP survey was aggregated using the geometric mean. The geometric mean is a valuable tool for converting expert opinions into TFN values; the formula for doing so is as follows:
\[\text{GM}=\sqrt[n]{m_{1}\times m_{2}\times m_{3}\times\cdots\times m_{n}} \tag{17}\]
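A minimal sketch of this aggregation step, assuming each expert's judgment is a TFN (l, m, u); the component-wise geometric mean yields the group TFN, and the example judgments are hypothetical.

```python
from math import prod

def aggregate_tfn(judgments):
    """Component-wise geometric mean of n experts' TFN judgments, per Eq. (17)."""
    n = len(judgments)
    return tuple(prod(j[k] for j in judgments) ** (1 / n) for k in range(3))

# Hypothetical judgments from three experts for one pairwise comparison.
print(aggregate_tfn([(1, 1.5, 2), (0.5, 0.6, 1), (1.5, 2, 2.5)]))
```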
\[\sum_{j=1}^{m}V^{j}_{g_{3}}=(0.5,0.6,1)+(1.5,2,2.5)+(1,1,1)+(1,1.5,2)=(4,5.1,6.5)\]
\[\sum_{j=1}^{m}V^{j}_{g_{4}}=(0.4,0.5,0.6)+(1,1.5,2)+(0.5,0.6,1)+(1,1,1)=(2.9,3.6,4.6)\]
The "Measurement" (M), "Automation" (A), "Sharing" (S) and "Culture" (C) represent the synthesis values of DevOps principles which were calculated using Equation 4 as follow:
\[A= \sum_{j}F_{g^{i}}^{j}\otimes\left[\sum_{i}\sum_{j}F_{g^{j}}^{j} \right]^{+}\] \[= (5,7,8.5)\otimes\left(0.04386,\ 0.054945,\ 0.070922\right)=(0.219298,\ 0.384615,\ 0.602837)\]
The degree of possibility was calculated via Equations 8 and 9, and Equations 10 to 13 were used to calculate the priority weights of the pairwise comparison matrices.

The calculated weight vector is W\({}^{\prime}\) = (1, 0.030019, 0.69836, 0.36405) (Table 7), and by normalizing the weight vector, the priority weights are determined, i.e., W = (0.4789, 0.01435, 0.3337). As per the calculated weights, "culture" is the highest-priority principle of the selected DevOps project management guidelines.
**Step-4 (Test the consistency of the pairwise matrix)**
All pairwise comparison matrices were evaluated for consistency, and we provide a step-by-step calculation of the technique used to determine whether a particular pairwise matrix is consistent, using the pairwise comparison matrix of the principles as an example. Using Equation 14, each triangular fuzzy number in the pairwise comparison matrix of the DevOps principles is defuzzified to a crisp number, forming the fuzzy crisp matrix (FCM), as shown in Table 8:

The value of \(\lambda_{\text{max}}\) of the FCM is calculated by adding the values of each column of Table 8 and then dividing each value by its column sum. Finally, the average of each row is calculated to obtain the priority weights (Table 9).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Measurement & Automation & Sharing & Culture \\ \hline Measurement & (1,1,1) & (0.3, 0.4, 0.5) & (1, 1.5, 2) & (0.5, 0.6, 1) \\ \hline Automation & (2, 2.5, 3) & (1,1,1) & (0.4, 0.5, 0.6) & (0.5, 0.6, 1) \\ \hline Sharing & (0.5, 0.6, 1) & (1.5, 2, 2.5) & (1,1,1) & (0.5, 0.6, 1) \\ \hline Culture & (1, 1.5, 2) & (1, 1.5, 2) & (1, 1.5, 2) & (1,1,1) \\ \hline \end{tabular}
\end{table}
Table 6: Pairwise comparison between the principles
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & M & A & S & C & d (Priority Weight) \\ \hline V (M\(\geq\)....) & - & 1 & 1 & 1 & 1 \\ \hline V (A\(\geq\)....) & 0.030018 & - & 0.26503 & 0.62273 & 0.030018 \\ \hline V (S\(\geq\)....) & 0.69837 & 1 & - & 1 & 0.69837 \\ \hline V (C\(\geq\)....) & 0.36406 & 1 & 0.64662 & - & 0.36406 \\ \hline \end{tabular}
\end{table}
Table 7: Results of V values for criteria.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Measurement & Automation & Sharing & Culture & Priority \\ \hline Measurement & (1,1,1) & (0.3, 0.4, 0.5) & (1, 1.5, 2) & (0.5, 0.6, 1) & 0.11591 \\ \hline Automation & (2, 2.5, 3) & (1,1,1) & (0.4, 0.5, 0.6) & (0.5, 0.6, 1) & 0.29500 \\ \hline Sharing & (0.5, 0.6, 1) & (1.5, 2, 2.5) & (1,1,1) & (0.5, 0.6, 1) & 0.17028 \\ \hline Culture & (1, 1.5, 2) & (1, 1.5, 2) & (1, 1.5, 2) & (1,1,1) & 0.41882 \\ \hline \end{tabular}
\end{table}
Table 9: Normalized matrix of DevOps principles
\[\lambda_{\max}=\sum\left(\left[\sum C_{j}\right]\times\{W\}\right) \tag{18}\]

where \(\sum C_{j}\) is the sum of the columns of matrix [C] (Table 6) and \(W\) is the weight vector (Table 9); therefore,

\[\lambda_{\max}=2.7\times 0.11591+7.0\times 0.29500+3.7\times 0.17028+5.2\times 0.41882=4.1067\]
According to the calculations, the maximum eigenvalue (\(\lambda_{\max}\)) of the FCM is 4.1067. The FCM has order 4; as a result, n=4, and the random consistency index (RI) for n=4 is 0.9 (Table 3). The consistency index and consistency ratio are then calculated using Equations 15 and 16 as follows:
\[\begin{split}\mathbf{CI}=\frac{\lambda_{\max}-n}{n-1}& =\frac{4.1067-4}{4-1}=0.035553\\ CR=\frac{CI}{RI}&=\frac{0.035553}{0.9}=0.039503 \\ \end{split}\]
The determined CR is 0.039503 < 0.10; therefore, the developed pairwise matrices are consistent. Using the same process, the consistency of the other matrices was determined and is given at the end of Tables 10, 11, 12 and 13.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{G1} & \multicolumn{2}{c|}{G2} & \multicolumn{2}{c|}{G3} & \multicolumn{2}{c|}{G4} & \multicolumn{2}{c|}{G5} & \multicolumn{2}{c|}{G6} & \multicolumn{2}{c|}{G7} & \multicolumn{2}{c|}{G8} & \multicolumn{2}{c|}{G9} & \multicolumn{2}{c|}{G10} & \multicolumn{2}{c|}{G11} & \multicolumn{2}{c|}{Priority} \\ \hline G1 & (1,1,1) & \(\{1,1,5,2\}\) & (0,4,5, & (1,15,5) & 0,5,6, & (15,2, & (1,15,5, & 0,5,6, 1) & \(15,2,2) & (1,15,2) & \(5,0,6,1) & 0.099531 \\ \hline G2 & (0,5,0.6,1) & (1,1,1) & (1,5,2, & (0,5,6, & (0,4,5, & (1,15,5, & (0,5,6, & (1,15,5, & (0,4,5, & (1,5,2, & 0.0995757 \\ & 2.5) & 1) & 0.69 & 2) & 2.5 & 1) & 2) & 2) & 0.69 & 2) & 0.5 & 2) & 0.099531 \\ \hline G3 & (1,5, 2, 2,5) & (0,4, & (1,1,1) & (0,4,5, & (0,5,6, & (1,2, & (0,5,6, & (1,15, & (0,5, 0) & 0.098031 \\ & 0,6) & 0.69 & 0.69 & 0.69 & 1) & 2.5 & 1) & 2) & 2) & 0.69 & 2) & 0.69 & 2) & 0.69 \\ \hline G4 & (0,5, 0.6,1) & (1,1,5) & (0,4, & (1,1,5, & (0,5,6, & (1,1,1) & (0,4,5, & (1,1,5, & (0,5,6, & (1,15, & 0,098031 \\ & 0,6) & 0.69 & 0.69 & 1) & 2.5 & 1) & 2) & 2) & 1) & 0.69 & 2) & 0.69 & 1) & 0.69 \\ \hline G5 & (1,1,5,2) & (1,5,2) & (0,4,5, & (1,15, & (1,15, & (0,5,6, & (0,5,6, & (1,15, & (0,5,6, & (1,15, & 0,984217 \\ & 0,6) & 0.69 & 0.69 & 1) & 2.5 & 1) & 2) & 1) & 0) & 0.69 & 2) & 0.69 & 1) & 0.69 \\ \hline G6 & (0,4, 0.5, & (0,5,6, & (0,4,5, & (1,15, & (1,15, & (0,5,6, & (1,15, & (0,5,6, & (1,15, & 0,5,6, & (1,15, & 0,6, & (0,5, & 0.08523 \\ & 0,6) & 0.69 & 2) & 2) & 1) & 2) & 0.69 & 1) & 0.69 & 2) & 0.69 & 2) & 0.69 \\ \hline G7 & (0,5, 0.6,1) & (0,4,5) & (0,5, & (1,15, & (1,15, & (0,5,6, & (1,5, & (1,1,1) & (0,4,5, & (1,15, & 0,6, & (0,5, & 0.08523 \\ & 0,6) & 0.69 & 0.69 & 1) & 2.5 & 1) & 0.69 & 1) & 2) & 2) & 0.69 & 1) & 0.69 \\ \hline G8 & (1,15,2) & (1,15,2) & (0,4,5) & (0,5, & (0,4,5, & (0,5,6, & (1,5, & (1,15, & (1,15, & (0,4,5, & (1,1,1) & 0.08665 \\ & 0,6) & 0.69 & 0.69 & 1) & 0.69 & 1) & 2.55 & 2) & 2) & 2) & 2) & 2) \\ \hline G9 & (0,4, 0.5, & (0,4,5) & (0,4,5) & (1,15, & (1,15, & (1,15, & (1,15, & (1,15, & (0,4,5, & (0,5, & 0.08582 \\ & 0,6) & 0.69 & 0.69 & 0.69 & 0.69 & 0.69 & 1) & 2) & 0.69 & 2) & 0.69 & 2) \\ \hline G10 & (0,5, 0.6,1) & (1,5,2) & (0,4,5, & (0,4,5, & (1,15, & (1,15, & (1,15, & (0,5,6, & (1,15, & (0,4,5, & (1,1,1) & (1,5, & 0.08277 \\ & 2,5) & 1) & 0.59 & 2) & 1) & 0.69 & 1) & 2) & 2) & 0.69 & 2) & 0.69 & 1) \\ \hline G11 & (1,15,2) & (0,4,5, & (1,15, & (0,5,6, & (0,4,5, & (1,15, & (1,15, & (0,5,6, & (1,15, & 0,4,5, & (1,1,1) & 0.083275 \\ & 0,6) & 0.69 & 0.69 & 2) & 1) & 0.69 & 2) & 1) & 2) & 0.69 & 1) & 0.69 \\ \hline \hline \multicolumn{2}{|c|}{\(\lambda\)= \(12.249\), CI = 0.12485, CR = 0.082685} & \multicolumn{1}{c|}{} & \mul
**Step 5: Calculating the global weights**
The local weight (LW) represents the significance of a guideline within its respective principle, while the global weight (GW) represents the impact of a guideline on the overall study objective, i.e., the prioritization of sustainable DevOps implementation guidelines. Beyond their principle, the GWs are used to calculate the ultimate ranking of the guidelines in comparison with all 48 guidelines evaluated. The GW is calculated by multiplying a guideline's LW by the weight of its principle.
For example, the LW of G1 (Organizations start DevOps practices with small projects) is 0.099531 and the weight of its principle C1 (Measurement) is 0.11591; so, the GW of G1 = (0.099531) × (0.11591) = 0.011537.
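A minimal sketch of this Step 5 calculation, using the principle weights and two of the local weights from Table 14:

```python
principle_weights = {"Measurement": 0.11591, "Automation": 0.29500,
                     "Sharing": 0.17028, "Culture": 0.41882}
# (principle, local weight) pairs taken from Table 14.
local_weights = {"G1": ("Measurement", 0.099531), "G41": ("Culture", 0.099306)}

global_weights = {g: lw * principle_weights[p] for g, (p, lw) in local_weights.items()}
print(global_weights)  # approx {'G1': 0.011537, 'G41': 0.041591}, matching Table 14
```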
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & G37 & G38 & G39 & G40 & G41 & G42 & G43 & G44 & G45 & G46 & G47 & G48 & Priori \\ \hline G & (1,1,1) & \(1.5,2\) & \(\underline{0.4},0.5\), & (1.5, 2, 2) & (0.5, 0.6, 1) & (1.5, 2, 2) & \(\underline{5.0},6, 5) & \(\underline{5.2}, 1, 1.5, 2) & \(\underline{5.0},6, 5) & \(\underline{5.0},6, 1) & \(\underline{5.0},6, 1) & \(\underline{5.0},6, 1) & \(\underline{5.0}, 089771\) \\ \hline G & (0.5, 0.6, 1) & (1,1) & (1.5, 2, 2) & (0.5, 0.6, 1) & (0.5, 0.6, 2) & \(\underline{2.5}\) & (0.5, 1) & (1.5, 2) & \(\underline{0.5}, 0.6, 1) & \(\underline{2.2}, 2.5) & \(\underline{2.5}\) & \(\underline{637}\) \\ \hline G & (1.5, 2, 0.4, 10.5, 0.6) & (1,1.5, 2) & (0.5, 0.6, 1) & (1.5, 2, 2) & (0.5, 0.6, 1) & (1.5, 2, 2) & (0.5, 1) & (1.5, 0.05, 1) & \(\underline{0.05}, 0.6, 1) & \(\underline{1.5}, 2) & \(\underline{375}\) \\ \hline G & (0.4, 0.5, 0.6, 1) & (1,1.5, 2) & (0.5, 0.6, 1) & (1,1.5, 2, 2) & (0.5, 0.6, 1) & (1.5, 2, 2) & (0.5, 0.6, 1) & (0.4, 1.5, 2) & (0.5, 0.6, 1) & \(\underline{1.5}, 2) & \(\underline{0.6}, 1) & \(\underline{1.5}, 2) & \(\underline{0.05}, 0.6, 1) & \(\underline{1.5}, 2) & \(\underline{0.05}, 0.6, 1) \\ \hline G & (1,1.5, 2) & (0.4, 1.5, 0.6, 1) & (1,1.5, 5) & (0.4, 0.4, 0.5, 0.6, 1) & (1,1.5, 2) & (1.5, 2) & (1.5, 2) & (1.5, 2) & (1.5, 2) & \(\underline{0.5}, 1.5, 2) & \(\underline{1.5}, 2) & \(\underline{556}\) \\ \hline \end{tabular} \(\lambda\)max \(=17.140\), CI \(=0.15286\), CR \(=0.09613\)
\end{table}
Table 13: Pairwise comparison of guidelines of Culture principle
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline G24 & 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.
(Table 14). Comparing the local rank of G1 within its mapped principle, it is the second-highest priority guideline. Comparing its GW with all the other 48 guidelines, it stands as the 39th most important guideline for sustainable DevOps implementation in software organizations. The results presented in Table 14 show that G41 (Enterprises should focus on building a collaborative culture with shared goals, GW=0.041591) is the highest-priority guideline for sustainable DevOps implementation in software organizations. Moreover, G44 (Assess your organization's readiness to utilize a microservices architecture, GW=0.039183) and G38 (Educate executives at your company about the benefits of DevOps to gain resource and budget support, GW=0.038798) are the second and third most significant guidelines for the DevOps paradigm. The final ranking of all the other guidelines is presented in Table 14.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Category** & **Principle Weight** & **Guidelines** & **Local Weight** & **Local Rank** & **Global Weight** & **Global Rank** \\ \hline
**Category** & **Principle Weight** & **Guidelines** & **Local Weight** & **Local Weight** & **Global Rank** \\ \hline \multirow{9}{*}{C1 (Measurement)} & \multirow{9}{*}{0.11591} & G1 & 0.099531 & 2 & 0.011537 & 39 \\ \cline{3-7} & & G2 & 0.099575 & 4 & 0.011099 & 41 \\ \cline{3-7} & & G3 & 0.089031 & 6 & 0.01032 & 43 \\ \cline{3-7} & & G4 & 0.094217 & 5 & 0.010921 & 42 \\ \cline{3-7} & & G5 & 0.106180 & 1 & 0.012307 & 38 \\ \cline{3-7} & & G6 & 0.073984 & 11 & 0.008575 & 48 \\ \cline{3-7} & & G7 & 0.085232 & 9 & 0.009879 & 46 \\ \cline{3-7} & & G8 & 0.098665 & 3 & 0.011436 & 40 \\ \cline{3-7} & & G9 & 0.085852 & 8 & 0.009951 & 45 \\ \cline{3-7} & & G10 & 0.088277 & 7 & 0.010232 & 44 \\ \cline{3-7} & & G11 & 0.083275 & 10 & 0.009652 & 47 \\ \hline \multirow{9}{*}{C2 (Automation)} & \multirow{9}{*}{0.29500} & G12 & 0.078232 & 1 & 0.023078 & 13 \\ \cline{3-7} & & G13 & 0.077156 & 2 & 0.022761 & 14 \\ \cline{3-7} & & G14 & 0.061135 & 11 & 0.018035 & 26 \\ \cline{3-7} & & G15 & 0.072116 & 6 & 0.021274 & 18 \\ \cline{3-7} & & G16 & 0.075751 & 4 & 0.022347 & 16 \\ \cline{3-7} & & G17 & 0.074993 & 5 & 0.022123 & 17 \\ \cline{3-7} & & G18 & 0.065516 & 9 & 0.019327 & 22 \\ \cline{3-7} & & G19 & 0.077156 & 3 & 0.022761 & 15 \\ \cline{3-7} & & G20 & 0.069726 & 8 & 0.00569 & 20 \\ \cline{3-7} & & G21 & 0.052684 & 14 & 0.015542 & 34 \\ \cline{3-7} & & G22 & 0.06264 & 10 & 0.018479 & 25 \\ \cline{3-7} & & G23 & 0.052583 & 15 & 0.015512 & 35 \\ \cline{3-7} & & G24 & 0.054597 & 12 & 0.016106 & 31 \\ \cline{3-7} & & G25 & 0.071115 & 7 & 0.020979 & 19 \\ \cline{3-7} & & G26 & 0.054597 & 13 & 0.016106 & 32 \\ \hline \multirow{9}{*}{C3 (Sharing)} & \multirow{9}{*}{0.17028} & G27 & 0.110115 & 2 & 0.01875 & 23 \\ \cline{3-7} & & G28 & 0.098379 & 5 & 0.016752 & 28 \\ \cline{3-7} & & G29 & 0.095144 & 7 & 0.016201 & 30 \\ \cline{3-7} & & G30 & 0.089664 & 9 & 0.015268 & 36 \\ \cline{3-7} & & G31 & 0.10977 & 3 & 0.018692 & 24 \\ \cline{3-7} & & G32 & 0.087082 & 10 & 0.014828 & 37 \\ \cline{3-7} & & G33 & 0.119216 & 1 & 0.0203 & 21 \\ \cline{3-7} & & G34 & 0.099249 & 4 & 0.0169 & 27 \\ \cline{3-7} & & G35 & 0.097639 & 6 & 0.016626 & 29 \\ \cline{3-7} & & G36 & 0.093742 & 8 & 0.015962 & 33 \\ \hline \multirow{9}{*}{C4 (Culture)} & \multirow{9}{*}{0.41882} & G37 & 0.089771 & 5 & 0.037598 & 5 \\ \cline{3-7} & & G38 & 0.092637 & 3 & 0.038798 & 3 \\ \cline{3-7} & & G39 & 0.082375 & 7 & 0.0345 & 7 \\ \cline{3-7} & & G40 & 0.074275 & 9 & 0.031108 & 9 \\ \cline{3-7} & & G41 & 0.099306 & 1 & 0.041591 & 1 \\ \cline{3-7} & & G42 & 0.072511 & 11 & 0.030369 & 11 \\ \cline{3-7} & & G43 & 0.083396 & 6 & 0.034928 & 6 \\ \cline{3-7} & & G44 & 0.093556 & 2 & 0.039183 & 2 \\ \cline{3-7} & & G45 & 0.074588 & 8 & 0.031239 & 8 \\ \cline{3-7} & & G46 & 0.092637 & 4 & 0.038798 & 4 \\ \cline{3-7} & & G47 & 0.071128 & 12 & 0.02979 & 12 \\ \cline{3-7} & & G48 & 0.073821 & 10 & 0.030918 & 10 \\ \hline \end{tabular}
\end{table}
Table 14: Determining global weights
## 5 Summary and Discussion
This study aims to explore the guidelines for sustainable DevOps implementation and to prioritize them with respect to their significance for software organizations. To address the objective of this study, three research questions were proposed; a summary of the findings for each question is presented below:
_RQ1 (What guidelines for sustainable DevOps implementation in software development organizations are reported in the literature and industry practices?)_
A comprehensive literature review was conducted to identify prospective literature relevant to the study's goal. A total of 71 studies were found using the SLR. The studies were carefully reviewed, and 48 best practices that could influence the DevOps paradigm in the software industry were investigated. The guidelines were then classified into the CAMS model's categories (i.e., Culture, Automation, Measurement, and Sharing). Mapping the identified guidelines into CAMS aids in developing the hierarchy structure of the investigated guidelines, which is then used to perform the fuzzy-AHP analysis. We conducted a questionnaire survey study with experts to validate the guidelines investigated in the SLR and their categorization in the CAMS model. A total of 116 complete responses were received from experts during the data-gathering process. According to the summary results of the questionnaire survey, all the guidelines investigated in the literature are relevant to industry practices, and the respondents agreed with the categorization of the guidelines.
_RQ2 (How the explored guidelines were prioritized using fuzzy-AHP?)_
The step-by-step protocol of the fuzzy-AHP approach was used to prioritize the investigated best practices with regard to their significance for the DevOps paradigm. The pairwise comparison matrices of the guidelines in each category were generated based on the experts' opinions to perform the fuzzy-AHP analysis. All the fuzzy-AHP steps were meticulously followed, and the priority weight (global weight) of each best practice was calculated. The results show that G41 (Enterprises should focus on building a collaborative culture with shared goals, GW=0.041591) is the highest-priority best practice for DevOps adoption and its progression in software organizations. Leonardo et al. [6] highlighted that DevOps requires a cultural change in the software development organization, as it offers a continuous and collaborative work environment between developers and operators. Gupta et al. [26] and Marijan et al. [65] also highlighted the importance of a collaborative culture for the successful adoption of the DevOps paradigm. Moreover, G44 (Assess your organization's readiness to utilize a microservices architecture, GW=0.039183) and G38 (Educate executives at your company about the benefits of DevOps, to gain resource and budget support, GW=0.038798) are ranked as the second and third highest-priority best practices of the DevOps paradigm.
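For readers unfamiliar with the mechanics, the sketch below illustrates how crisp priority weights can be derived from a triangular-fuzzy pairwise comparison matrix using Buckley's geometric-mean method, one common fuzzy-AHP variant. The 3x3 matrix, its values, and the defuzzification rule are illustrative assumptions, not the study's actual data or necessarily its exact protocol.

```python
import numpy as np

# Hypothetical triangular fuzzy judgments (l, m, u) for three criteria;
# entry (i, j) encodes how strongly criterion i is preferred over j.
F = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])  # shape (n, n, 3)

n = F.shape[0]
r = np.prod(F, axis=1) ** (1.0 / n)   # fuzzy geometric mean of each row, shape (n, 3)
s = r.sum(axis=0)                     # component-wise sums (sum_l, sum_m, sum_u)
w_fuzzy = r / s[::-1]                 # fuzzy weights: (l/sum_u, m/sum_m, u/sum_l)
w = w_fuzzy.mean(axis=1)              # centre-of-area defuzzification
w /= w.sum()                          # normalized crisp priority weights
print(np.round(w, 4))
```

A guideline's global weight in Table 14 is then simply its local weight multiplied by its category's weight (e.g., G41: 0.099306 x 0.41882 ≈ 0.041591).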
_RQ3 (What would be the prioritization-based framework of DevOps sustainable guidelines?)_
The taxonomy of the investigated guidelines was developed considering the CAMS model and using the rankings determined via fuzzy-AHP. We have used both global and local ranks (Table 14). The objective of the taxonomy is to show the impact of each guideline within its own principle and for the overall DevOps paradigm.
For example, G1 (Organizations start DevOps practices with small projects) is locally ranked as the 2\({}^{\text{nd}}\) most important guideline for sustainable DevOps execution in the software industry, yet globally it is only ranked 39\({}^{\text{th}}\). Similarly, G2 (Include modelling for legacy infrastructure and applications in your DevOps plans) is the 4\({}^{\text{th}}\) most important guideline in the 'Measurement' category and is ranked 41\({}^{\text{st}}\) with respect to the global rankings (Table 14).
The local and global ranks of each guideline are presented in Figure 12, which renders the impact of a particular guideline within its respective principle and for the overall project compared with all 48 identified guidelines. Moreover, C4 (Culture, CW=0.41882) is ranked as the most significant principle of the identified guidelines. Furthermore, C2 (Automation, CW=0.29500) and C3 (Sharing, CW=0.17028) are ranked as the second and third most significant principles of the sustainable DevOps implementation guidelines. Hence, a focus on these areas could assist organizations in implementing sustainable DevOps practices.
## 6 Threats to Validity
One limitation of the study is potential researcher bias in the guidelines investigated through the literature review. To address this concern, an inter-rater reliability test was performed, and the results show no significant bias in the findings of the literature study. Moreover, the literature was collected from a limited set of selected repositories, which might cause related literature to be omitted. However, consistent with existing studies [35, 40, 66], this threat is not systematic.
Similarly, the small size of the dataset poses an external threat to the generalization of the questionnaire survey. We received n=116 responses for this study, which may appear insufficient to generalize the findings. However, based on previous research in the software engineering sector [17, 50, 51], this sample size is adequate for generalizing the findings.
Furthermore, the fuzzy-AHP was used to rank the investigated guidelines and their corresponding categories based on the opinions of experts, which introduces a risk of subjective judgment. To mitigate this concern, the consistency ratio of the pairwise comparison matrices was calculated, and the results show that the fuzzy-AHP analysis results have appropriate internal validity.
## 7 Conclusion and Future Directions
DevOps is an approach that combines development and operations to enable agility during the software development process. The implementation of DevOps practices is complex, which motivated us to explore the guidelines that are important for sustainable DevOps implementation in software development organizations. A total of 48 guidelines were discovered through the systematic literature review. The identified guidelines were then mapped into the CAMS model, which represents the key DevOps principles. Moreover, a questionnaire survey study was conducted to obtain experts' insights on the identified guidelines. The results of the questionnaire survey indicated that the identified guidelines are in line with real-world practices. Finally, the investigated guidelines were prioritized with respect to their significance for sustainable DevOps implementation using fuzzy-AHP. The prioritization results show that 'enterprises should focus on building a collaborative culture with shared goals', 'assess your organization's readiness to utilize a microservices architecture', and 'educate executives at your company about the benefits of DevOps' are the most important guidelines. The categorization of the investigated guidelines and their rankings provide a taxonomy that could assist academic researchers and industry practitioners in revising and developing new, effective strategies for the sustainable implementation of the DevOps paradigm in software organizations.
As part of future work, we plan to conduct a multivocal literature review and case studies to explore additional guidelines associated with the DevOps paradigm. In addition, we will identify the critical challenges and success factors that need to be addressed for sustainable DevOps practices in software organizations. Ultimately, we plan to develop a readiness model that will assist practitioners in assessing and improving their DevOps implementation strategies.
Figure 12: Prioritization-based framework of the investigated guidelines
**Appendix-A:** Selected studies along with quality assessment score ([https://tinyurl.com/y9x3fg3z](https://tinyurl.com/y9x3fg3z))
**Appendix-B:** Sample of questionnaire survey ([https://tinyurl.com/v83245jv](https://tinyurl.com/v83245jv))
**Appendix-C:** Sample of pairwise comparison questionnaire ([https://tinyurl.com/v97k7jp9](https://tinyurl.com/v97k7jp9))
**Conflict of Interest:** Authors declare no conflicts of interest.
**Acknowledgment**
The authors are grateful to the Deanship of Scientific Research, King Saud University for funding through Vice Deanship of Scientific Research Chairs.
|
2306.05952 | Overcoming Adversarial Attacks for Human-in-the-Loop Applications | Including human analysis has the potential to positively affect the
robustness of Deep Neural Networks and is relatively unexplored in the
Adversarial Machine Learning literature. Neural network visual explanation maps
have been shown to be prone to adversarial attacks. Further research is needed
in order to select robust visualizations of explanations for the image analyst
to evaluate a given model. These factors greatly impact Human-In-The-Loop
(HITL) evaluation tools due to their reliance on adversarial images, including
explanation maps and measurements of robustness. We believe models of human
visual attention may improve interpretability and robustness of human-machine
imagery analysis systems. Our challenge remains, how can HITL evaluation be
robust in this adversarial landscape? | Ryan McCoppin, Marla Kennedy, Platon Lukyanenko, Sean Kennedy | 2023-06-09T15:09:16Z | http://arxiv.org/abs/2306.05952v2 | # Overcoming Adversarial Attacks for Human-in-the-Loop Applications
###### Abstract
Including human analysis has the potential to positively affect the robustness of Deep Neural Networks and is relatively unexplored in the Adversarial Machine Learning literature. Neural network visual explanation maps have been shown to be prone to adversarial attacks. Further research is needed in order to select robust visualizations of explanations for the image analyst to evaluate a given model. These factors greatly impact Human-In-The-Loop (HITL) evaluation tools due to their reliance on adversarial images, including explanation maps and measurements of robustness. We believe models of human visual attention may improve interpretability and robustness of human-machine imagery analysis systems. Our challenge remains, how can HITL evaluation be robust in this adversarial landscape?
Machine Learning, Adversarial Attacks, Human-In-The-Loop
## 1 Introduction
The adversarial advent has greatly impacted the machine learning landscape. Adversarial images can degrade a model's ability to perform its task. Beyond degrading ML model performance, these perturbations are often crafted to evade detection by image analysts. Additional data can be included in image analysis applications to aid the analyst, including robustness metrics and explanation maps. Even so, adversarial attacks have managed to corrupt or evade many of these additional tools. HITL defenses are needed to thwart attacks targeting the human element of computer vision. We present the need for further research in HITL applications and human observation in the adversarial domain.
## 2 Related Work
### Attacking Active Learning
Humans are involved throughout the learning pipeline, including in curating training data. Bespoke training data starts out unlabeled, and labeling it comes at substantial expense. Active learning provides more efficient labeling methods but has remained largely separate from the field of adversarial machine learning. While several studies have looked at misleading active-learning querying (Miller et al., 2017), there is much to explore in how active learning is affected by adversarial attacks. Specifically, how can model querying and labeling be conducted in an adversarial environment without poisoning the model?
### Manipulating Model Explanations
Model utility hinges on user trust, which requires reasonable reliability measures (e.g., confidence intervals) and an understanding of why the model made a particular decision. Past work has developed attacks that can modify classifier explanations without changing the classifier prediction (Dombrowski et al., 2019). The relationship between model outputs and trust is complicated, and a recent report found that user trust following attacks is poorly studied (Xiong et al., 2022). Explanation maps, which are expected to reveal adversarial images (Ye et al., 2020), may be manipulated to disguise adversarial attacks and model biases. This poses obstacles not only for model trust but also for HITL evaluation (Figure 1). While solutions have been put forward that provide robustness toward manipulation (Dombrowski et al., 2022), the issue remains when analysts are looking at unfamiliar classes or using traditional explanatory techniques.
Figure 1: Adversarial images can have manipulated explanations. Image A - adversarial image; explanation reveals target class. Image B - adversarial image; manipulated explanation hides target class. Manipulation based on (Dombrowski et al., 2019)
## 3 Opportunity: Human-In-The-Loop Studies
### Human Vision and Human Vision Models
Because humans are not fooled by adversarial images in the same way as deep networks, humans and human vision models may contribute to adversarial robustness. Human attention models can be used to predict where humans will look when viewing a scene. Task-specific attention models can be built with gaze tracking data or crowd-sourced using interfaces that direct users to highlight (or deblur) areas of an image that are critical to their classification decision (Linsley et al., 2017).
Disagreements between human attention models and model explanations may indicate manipulated images, low-salience classes, or faulty models. Taking user gaze into account could also improve model value by reducing user workload, simplifying training, or improving user performance (Matzen et al., 2016).
### Prototype Humans-in-the-Loop Tools
Active and interactive machine learning methods allow users to label data within an interface. This can be used to investigate attacks on active/interactive labeling and user interfaces, and to train models from user responses. One of our prototype HITL tools examines how users can contribute to adversarial or poisoned image detection. Users assign 'cards' that contain images and metadata to 'poisoned' or 'benign' categories, as seen in Figure 2. Visual explanations of a classifier's decision, obtained via Grad-CAM (Selvaraju et al., 2017), provide the user with the regions of an image the ML model focuses on during classification (a minimal sketch of this component appears after the list below). As user-annotated data is collected, a poison-detection ML model is updated via active learning. With this tool, we aim to explore:
* How do adversarial detection models compare to analyst detection capability?
* What explanations do users find useful and convincing?
* Do more robust explanations improve human performance?
* Which attacks are really invisible? What tools can make them more visible?
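To ground the explanation component mentioned above, here is a minimal, self-contained Grad-CAM sketch in PyTorch. It illustrates the technique only and is not the authors' tool code; the backbone, target layer, and random input are placeholder assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # random weights; load pretrained ones in practice
feats, grads = {}, {}

# Capture activations of the last conv block and their gradients.
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()    # gradient of the top-scoring class

# Grad-CAM: weight each channel by its spatially averaged gradient, ReLU the sum.
w = grads["a"].mean(dim=(2, 3), keepdim=True)                 # (1, C, 1, 1)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))       # (1, 1, h, w)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")  # upsample to input size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```

Overlaying `cam` on the input yields the heatmap the analyst sees; a manipulated map, as in Figure 1, would highlight misleading regions while leaving the prediction unchanged.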
## 4 Challenges
There are many challenges in HITL research concerning adversarial robustness that have yet to find answers. These concerns include visualizing perturbations, which explanation maps are robust and beneficial, the reliability of active learning query methods and which additional metrics may be beneficial to the analyst.
Generally, users are unable to detect adversarial images and thus rely on "explanation maps" to provide visibility. These maps can also be adversarially distorted (Dombrowski et al., 2019). If explanation maps are not always reliable or truthful, what information can be provided to the user to detect adversarial images and improve model robustness? In addition, is it possible to gain robustness to these distortions in a manner similar to adversarial training? We believe human vision models may provide a key to detecting and interpreting adversarial images. However, recent research has shown that deep models of human attention are also vulnerable to adversarial attack (Che et al., 2020). Can we improve the adversarial robustness of these models and combine human and machine attention to identify adversarial images? Research has already begun on adversarial robustness, but not on how it relates to HITL evaluation and how analysts are able to use more robust explanations.
## Acknowledgements
The opinions expressed herein are solely those of the authors and do not necessarily represent the opinions of the United States Government, the U.S. Department of Defense, the Department of the Air Force, or any of their subsidiaries or employees. This research was funded by a Commander's Research and Development Fund grant from the Air Force Research Laboratory. DISTRIBUTION STATEMENT A.
|
2310.09656 | Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent
Space | Recent advances in tabular data generation have greatly enhanced synthetic
data quality. However, extending diffusion models to tabular data is
challenging due to the intricately varied distributions and a blend of data
types of tabular data. This paper introduces Tabsyn, a methodology that
synthesizes tabular data by leveraging a diffusion model within a variational
autoencoder (VAE) crafted latent space. The key advantages of the proposed
Tabsyn include (1) Generality: the ability to handle a broad spectrum of data
types by converting them into a single unified space and explicitly capture
inter-column relations; (2) Quality: optimizing the distribution of latent
embeddings to enhance the subsequent training of diffusion models, which helps
generate high-quality synthetic data, (3) Speed: much fewer number of reverse
steps and faster synthesis speed than existing diffusion-based methods.
Extensive experiments on six datasets with five metrics demonstrate that Tabsyn
outperforms existing methods. Specifically, it reduces the error rates by 86%
and 67% for column-wise distribution and pair-wise column correlation
estimations compared with the most competitive baselines. | Hengrui Zhang, Jiani Zhang, Balasubramaniam Srinivasan, Zhengyuan Shen, Xiao Qin, Christos Faloutsos, Huzefa Rangwala, George Karypis | 2023-10-14T19:59:03Z | http://arxiv.org/abs/2310.09656v3 | # Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space
###### Abstract
Recent advances in tabular data generation have greatly enhanced synthetic data quality. However, extending diffusion models to tabular data is challenging due to the intricately varied distributions and a blend of data types of tabular data. This paper introduces TabSyn, a methodology that synthesizes tabular data by leveraging a diffusion model within a variational autoencoder (VAE) crafted latent space. The key advantages of the proposed TabSyn include (1) **Generality**: the ability to handle a broad spectrum of data types by converting them into a single unified space and explicitly capture inter-column relations, (2) **Quality**: optimizing the distribution of latent embeddings to enhance the subsequent training of diffusion models, which helps generate high-quality synthetic data, (3) **Speed**: much fewer number of reverse steps and faster synthesis speed than existing diffusion-based methods. Extensive experiments on six datasets with five metrics demonstrate that TabSyn outperforms existing methods. Specifically, it reduces the error rates by _86%_ and _67%_ for column-wise distribution and pair-wise column correlation estimations compared with the most competitive baselines. Code has been made available at [https://github.com/amazon-science/tabsyn](https://github.com/amazon-science/tabsyn).
## 1 Introduction
Tabular data synthesis has a wide range of applications, such as augmenting training data (Fonseca & Bacao, 2023), protecting private data instances (Assefa et al., 2021; Hernandez et al., 2022), and imputing missing values (Zheng & Charoenphakdee, 2022). Recent developments in tabular data generation have notably enhanced the quality of synthetic data (Xu et al., 2019; Borisov et al., 2023; Liu et al., 2023b), while the synthetic data is still far from the real one. To further improve the generation quality, researchers have explored adapting diffusion models, which have shown strong performance in image synthesis tasks (Ho et al., 2020; Rombach et al., 2022), for tabular data generation (Kim et al., 2022; Kotelnikov et al., 2023; Kim et al., 2023; Lee et al., 2023).
Figure 1: TabSyn **wins:** Our TabSyn method consistently outperforms SOTA tabular data generation methods across five data quality metrics.
Despite the progress made by these methods, tailoring a diffusion model for tabular data leads to several challenges. Unlike image data, which comprises pure continuous pixel values with local spatial correlations, tabular data features have complex and varied distributions (Xu et al., 2019), making it hard to learn joint probabilities across multiple columns. Moreover, typical tabular data often contains mixed data types, i.e., continuous (e.g., numerical features) and discrete (e.g., categorical features) variables. The standard diffusion process assumes a continuous input space with Gaussian noise perturbation, which leads to additional challenges with categorical features. Existing solutions either transform categorical features into numerical ones using techniques like one-hot encoding (Kim et al., 2023; Liu et al., 2023) and analog bit encoding (Zheng and Charoenphakdee, 2022) or resort to two separate diffusion processes for numerical and categorical features (Kotelnikov et al., 2023; Lee et al., 2023). However, it has been proven that simple encoding methods lead to suboptimal performance (Lee et al., 2023), and learning separate models for different data types makes it challenging for the model to capture the co-occurrence patterns of different types of data. Therefore, we seek to develop a diffusion model in a joint space of numerical and categorical features that preserves the inter-column correlations.
This paper presents TabSyn, a principled approach for tabular data synthesis that involves training a diffusion model in the latent space. TabSyn first transforms raw tabular data into a continuous embedding space, where well-developed diffusion models with Gaussian noises become feasible. Subsequently, we learn a score-based diffusion model in the embedding space to capture the distribution of latent embeddings. To learn an informative, smoothed latent space while maintaining the decoder's reconstruction ability, we specifically designed a Variational AutoEncoder (VAE (Kingma and Welling, 2013)) model for tabular-structured data. Our proposed VAE model includes 1) Transformer-architecture encoders and decoders for modeling inter-column relationships and obtaining token-level representations, facilitating token-level tasks. 2) Adaptive loss weighting to dynamically adjust the reconstruction loss weights and KL-divergence weights, allowing the model to improve reconstruction performance gradually while maintaining a regularized embedding space. Finally, when applying diffusion models in the latent, we propose a simplified forward diffusion process, which adds Gaussian noises of linear standard deviation with respect to time. We demonstrate through theory and experiments that this approach can reduce the errors in the reverse process, thus improving sampling speed.
The advantages of TabSyn are three-fold: (1) **Generality**: _Mixed-type Feature Handling_ - TabSyn transforms diverse input features, encompassing numerical, categorical, etc., into a unified embedding space. (2) **Quality**: _High Generation Quality_ - with tailored designs of the VAE model, the tabular data is mapped into regularized latent space of good shape, e.g., a standard normal distribution. This will greatly simplify training the subsequent diffusion model (Vahdat et al., 2021), making TabSyn more expressive and enabling it to generate high-quality synthetic data. (3) **Speed**: With the proposed linear noise schedule, our TabSyn can generate high-quality synthetic data with fewer than \(20\) reverse steps, which is significantly fewer than existing methods.
Recognizing the absence of unified and comprehensive evaluations for tabular data synthesis methods, we perform extensive experiments, which involve comparing TabSyn with seven state-of-the-art
Figure 2: An overview of the proposed TabSyn. Each row data \(x\) is mapped to latent space \(z\) via a column-wise tokenizer and an encoder. A diffusion process \(z_{0}\to z_{T}\) is applied in the latent space. Synthesis \(z_{T}\to z_{0}\) starts from the base distribution \(p(z_{T})\) and generates samples \(z_{0}\) in latent space through a reverse process. These samples are then mapped from latent \(z\) to data space \(\tilde{x}\) using a decoder and a detokenizer.
methods on six mixed-type tabular datasets using five distinct evaluation metrics. The experimental results demonstrate that TabSyn consistently outperforms previous methods (see Figure 1). Specifically, TabSyn reduces the average errors in column-wise distribution shape estimation (i.e., single density) and pair-wise column correlation estimation (i.e., pair correlation) tasks by \(86\%\) and \(67\%\) than the most competitive baselines, demonstrating the superiority of TabSyn. Furthermore, we demonstrate that TabSyn achieves competitive performance across two downstream tabular data tasks, machine learning efficiency and missing value imputation. Specifically, the well-learned unconditional TabSyn is able to be applied to missing value imputation without retraining. Moreover, thorough ablation studies and visualization case studies substantiate the rationale and effectiveness of our developed approach.
## 2 Related Works
Deep Generative Models for Tabular Data Generation. Generative models for tabular data have become increasingly important and have widespread applications Assefa et al. (2021); Zheng and Charoenphakdee (2022); Hernandez et al. (2022). To deal with the imbalanced categorical features in tabular data, Xu et al. (2019) proposes CTGAN and TVAE based on the popular Generative Adversarial Networks (Goodfellow et al., 2014) and VAE (Kingma and Welling, 2013), respectively. Multiple advanced methods have been proposed for synthetic tabular data generation in the past year. Specifically, GOGGLE (Liu et al., 2023) became the first to explicitly model the dependency relationship between columns, proposing a VAE-based model using graph neural networks as the encoder and decoder models. Inspired by the success of large language models in modeling the distribution of natural languages, GReaT (Borisov et al., 2023) transformed each row in a table into a natural sentence and learned sentence-level distributions using the auto-regressive GPT2. STaSy (Kim et al., 2023), TabDDPM (Kotelnikov et al., 2023), and CoDi (Lee et al., 2023) concurrently applied the popular diffusion-based generative models for synthetic tabular data generation.
Generative Modeling in the Latent Space.While generative models in the data space have achieved significant success, latent generative models have demonstrated several advantages, including more compact and disentangled representations, robustness to noise and greater flexibility in controlling generated styles (van den Oord et al., 2017; Razavi et al., 2019; Esser et al., 2021). Recently, the Latent Diffusion Models (LDM) (Rombach et al., 2022; Vahdat et al., 2021) have achieved great success in image generation as they exhibit better scaling properties and expressivity than the vanilla diffusion models in the data space (Ho et al., 2020; Song et al., 2021; Karras et al., 2022). The success of LDMs in image generation has also inspired their applications in video (Blattmann et al., 2023) and audio data (Liu et al., 2023). To the best of our knowledge, the proposed work is the first to explore the application of latent diffusion models for general tabular data generation tasks.
## 3 Synthetic Tabular Data generation with TabSyn
Figure 2 gives an overview of TabSyn. In Section 3.1, we first formally define the tabular data generation task. Then, we introduce the design details of TabSyn's autoencoding and diffusion process in Section 3.2 and 3.3. We summarize the training and sampling algorithms in Appendix A.
### Problem Definition of Tabular Data Generation
Let \(M_{\mathrm{num}}\) and \(M_{\mathrm{cat}}\) (\(M=M_{\mathrm{num}}+M_{\mathrm{cat}}\)) be the number of numerical columns and categorical columns, respectively. Each row is represented as a vector of numerical features and categorical features \(\mathbf{x}=[\mathbf{x}^{\mathrm{num}},\mathbf{x}^{\mathrm{cat}}]\), where \(\mathbf{x}^{\mathrm{num}}\in\mathbb{R}^{M_{\mathrm{num}}}\) and \(\mathbf{x}^{\mathrm{cat}}\in\mathbb{R}^{M_{\mathrm{cat}}}\). Specifically, the \(i\)-th categorical attribute has \(C_{i}\) finite candidate values, therefore we have \(x_{i}^{\mathrm{cat}}\in\{1,\cdots,C_{i}\},\forall i\).
This paper focuses on the **unconditional generation** task. With a tabular dataset \(\mathcal{T}=\{\mathbf{x}\}\), we aim to learn a parameterized generative model \(p_{\theta}(\mathcal{T})\), with which realistic and diverse synthetic tabular data \(\hat{\mathbf{x}}\in\hat{\mathcal{T}}\) can be generated.
### AutoEncoding for Tabular Data
Tabular data is highly structured data with mixed-type column features: columns have specific meanings, and different columns are highly dependent on each other. These characteristics make it challenging to design an appropriate encoder that can model and effectively utilize the rich relationships between columns. Motivated by the successes of applying Transformers for classification/regression of tabular data (Gorishniy et al., 2021), we first learn a unique tokenizer for each column, and then the token(column)-wise representations are fed into a Transformer for capturing the intricate relationships among columns.
**Feature Tokenizer**. The feature tokenizer converts each column (both numerical and categorical) into a \(d\)-dimensional vector. First, we use one-hot encoding to pre-process categorical features, i.e., \(x_{i}^{\mathrm{cat}}\Rightarrow\mathbf{x}_{i}^{\mathrm{oh}}\in\mathbb{R}^{1\times C _{i}}\). Each record is represented as \(\mathbf{x}=[\mathbf{x}^{\mathrm{num}},\mathbf{x}_{1}^{\mathrm{oh}},\cdots,\mathbf{x}_{M_{ \mathrm{cat}}}^{\mathrm{oh}}]\in\mathbb{R}^{M_{\mathrm{num}}+\sum_{i=1}^{M_{ \mathrm{cat}}}C_{i}}\). Then, we apply a linear transformation for numerical columns and create an embedding lookup table for categorical columns, where each category is assigned a learnable \(d\)-dimensional vector, i.e.,
\[\mathbf{e}_{i}^{\mathrm{num}}=x_{i}^{\mathrm{num}}\cdot\mathbf{w}_{i}^{\mathrm{num}}+ \mathbf{b}_{i}^{\mathrm{num}},\ \ \mathbf{e}_{i}^{\mathrm{cat}}=\mathbf{x}_{i}^{\mathrm{oh}}\cdot\mathbf{W}_{i}^{\mathrm{cat}} +\mathbf{b}_{i}^{\mathrm{cat}}, \tag{1}\]
where \(\mathbf{w}_{i}^{\mathrm{num}},\mathbf{b}_{i}^{\mathrm{num}},\mathbf{b}_{i}^{\mathrm{cat}} \in\mathbb{R}^{1\times d}\), \(\mathbf{W}_{i}^{\mathrm{cat}}\in\mathbb{R}^{C_{i}\times d}\) are learnable parameters of the tokenizer, \(\mathbf{e}_{i}^{\mathrm{num}},\mathbf{e}_{i}^{\mathrm{cat}}\in\mathbb{R}^{1\times d}\). Now, each record is expressed as the stack of the embeddings of all columns
\[\mathbf{E}=[\mathbf{e}_{1}^{\mathrm{num}},\cdots,\mathbf{e}_{M_{\mathrm{num}}}^{\mathrm{ num}},\mathbf{e}_{1}^{\mathrm{cat}},\cdots,\mathbf{e}_{M_{\mathrm{cat}}}^{\mathrm{cat}}] \in\mathbb{R}^{M\times d}. \tag{2}\]
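A minimal PyTorch sketch of the tokenizer in Equations (1)-(2) follows; the module layout and names are our reading of the equations, not the released code. Note that the one-hot product \(\mathbf{x}_{i}^{\mathrm{oh}}\mathbf{W}_{i}^{\mathrm{cat}}\) is equivalent to an embedding lookup.

```python
import torch
import torch.nn as nn

class FeatureTokenizer(nn.Module):
    """Maps each numerical and categorical column to a d-dimensional token."""
    def __init__(self, n_num, cat_cards, d):
        super().__init__()
        self.w_num = nn.Parameter(torch.randn(n_num, d))   # per-column weights (Eq. 1)
        self.b_num = nn.Parameter(torch.zeros(n_num, d))   # per-column biases
        # One-hot times W_cat is an embedding lookup, one table per column.
        self.cat_emb = nn.ModuleList(nn.Embedding(c, d) for c in cat_cards)

    def forward(self, x_num, x_cat):
        # x_num: (B, n_num) floats; x_cat: (B, n_cat) integer category ids.
        e_num = x_num.unsqueeze(-1) * self.w_num + self.b_num          # (B, n_num, d)
        e_cat = torch.stack([emb(x_cat[:, i])
                             for i, emb in enumerate(self.cat_emb)], dim=1)
        return torch.cat([e_num, e_cat], dim=1)                        # E: (B, M, d), Eq. 2

tok = FeatureTokenizer(n_num=3, cat_cards=[4, 7], d=8)
E = tok(torch.randn(2, 3), torch.randint(0, 4, (2, 2)))               # shape (2, 5, 8)
```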
**Transformer Encoding and Decoding**. As with typical VAEs, we use the encoder to obtain the mean and log variance of the latent variable. Then, we acquire the latent embeddings with the reparameterization tricks. The latent embeddings are then passed through the decoder to obtain the reconstructed token matrix \(\hat{\mathbf{E}}\in\mathbb{R}^{M\times d}\). The detailed architectures are in Appendix D.
**Detokenizer**. Finally, we apply a detokenizer to the recovered token representation of each column to reconstruct the column values. The design of the detokenizer is symmetrical to that of the tokenizer:
\[\begin{split}&\hat{x}_{i}^{\mathrm{num}}=\hat{\mathbf{e}}_{i}^{ \mathrm{num}}\cdot\hat{\mathbf{w}}_{i}^{\mathrm{num}}+\hat{b}_{i}^{\mathrm{num}}, \ \ \hat{\mathbf{x}}_{i}^{\mathrm{oh}}=\mathrm{Softmax}(\hat{\mathbf{e}}_{i}^{\mathrm{cat}} \cdot\hat{\mathbf{W}}_{i}^{\mathrm{cat}}+\hat{\mathbf{b}}_{i}^{\mathrm{cat}}),\\ &\hat{\mathbf{x}}=[\hat{x}_{1}^{\mathrm{num}},\cdots,\hat{x}_{M_{ \mathrm{num}}}^{\mathrm{num}},\hat{\mathbf{x}}_{1}^{\mathrm{oh}},\cdots,\hat{\mathbf{x} }_{M_{\mathrm{cat}}}^{\mathrm{oh}}],\end{split} \tag{3}\]
where \(\hat{\mathbf{w}}_{i}^{\mathrm{num}}\in\mathbb{R}^{d\times 1},\hat{b}_{i}^{ \mathrm{num}}\in\mathbb{R}^{1\times 1}\), \(\hat{\mathbf{W}}_{i}^{\mathrm{cat}}\in\mathbb{R}^{d\times C_{i}},\hat{\mathbf{b}}_{i}^{ \mathrm{cat}}\in\mathbb{R}^{1\times C_{i}}\) are the detokenizer's parameters.
**Training with adaptive weight coefficient.** The VAE model is usually learned with the classical ELBO loss function, but here we use \(\beta\)-VAE (Higgins et al., 2016), where a coefficient \(\beta\) balances the importance of the reconstruction loss and KL-divergence loss
\[\mathcal{L}=\ell_{\mathrm{recon}}(\mathbf{x},\hat{\mathbf{x}})+\beta\ell_{\mathrm{kl}}. \tag{4}\]
\(\ell_{\mathrm{recon}}\) is the reconstruction loss between the input data and the reconstructed one, and \(\ell_{\mathrm{kl}}\) is the KL divergence loss that regularizes the mean and variance of the latent space. In the vanilla VAE model, \(\beta\) is set to be \(1\) because the two loss terms are equally important to generate high-quality synthetic data from Gaussian noises. However, in our model, \(\beta\) is expected to be smaller, as we do not require the distribution of the embeddings to precisely follow a standard Gaussian distribution because we have an additional diffusion model. Therefore, we propose to adaptively schedule the scale of \(\beta\) in the training process, encouraging the model to achieve lower reconstruction error while maintaining an appropriate embedding shape.
With an initial (maximum) \(\beta=\beta_{\mathrm{max}}\), we monitor the epoch-wise reconstruction loss \(\ell_{\mathrm{recon}}\). When \(\ell_{\mathrm{recon}}\) fails to decrease for a predefined number of epochs (which indicates that the KL-divergence dominates the overall loss), the weight is scheduled by \(\beta=\lambda\beta,\lambda<1\). This process continues until \(\beta\) approaches a predefined minimum value \(\beta_{\mathrm{min}}\). This strategy is simple yet very effective, and we empirically justify the effectiveness of the design in Section 4.
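In code, the schedule reduces to a patience-based loop. The sketch below uses the \(\beta_{\max}=10^{-2}\), \(\lambda=0.7\), \(\beta_{\min}=10^{-5}\) values reported in Section 4, while the patience threshold and the stand-in training function are illustrative assumptions.

```python
import random

def train_one_epoch(beta):
    """Hypothetical stand-in for one VAE epoch; returns the reconstruction loss."""
    return random.random()

beta, beta_min, lam, patience = 1e-2, 1e-5, 0.7, 10
best_recon, stall = float("inf"), 0
for epoch in range(4000):
    recon = train_one_epoch(beta)
    if recon < best_recon:                 # reconstruction still improving
        best_recon, stall = recon, 0
    else:
        stall += 1
    if stall >= patience and beta > beta_min:
        beta = max(beta * lam, beta_min)   # beta <- lambda * beta, floored at beta_min
        stall = 0
```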
### Score-based Generative Modeling in the Latent Space
**Training and sampling via denoising.** After the VAE model is well-learned, we extract the latent embeddings through the encoder and flatten the encoder's output as
\(\mathbb{R}^{1\times Md}\) such that the embedding of a record is a vector rather than a matrix. To learn the underlying distribution of embeddings \(p(\mathbf{z})\), we consider the following forward diffusion process and reverse sampling process (Song et al., 2021; Karras et al., 2022):
\[\mathbf{z}_{t} =\mathbf{z}_{0}+\sigma(t)\mathbf{\varepsilon},\ \mathbf{\varepsilon}\sim \mathcal{N}(\mathbf{0},\mathbf{I}), (\text{Forward Process}) \tag{5}\] \[\mathrm{d}\mathbf{z}_{t} =-2\dot{\sigma}(t)\sigma(t)\nabla_{\mathbf{z}_{t}}\log p(\mathbf{z}_{t}) \mathrm{d}t+\sqrt{2\dot{\sigma}(t)\sigma(t)}\mathrm{d}\mathbf{\omega}_{t}, (\text{Reverse Process}) \tag{6}\]
where \(\mathbf{z}_{0}=\mathbf{z}\) is the initial embedding from the encoder, \(\mathbf{z}_{t}\) is the diffused embedding at time \(t\), and \(\sigma(t)\) is the noise level. In the reverse process, \(\nabla_{\mathbf{z}_{t}}\log p_{t}(\mathbf{z}_{t})\) is the score function of \(\mathbf{z}_{t}\), and \(\mathbf{\omega}_{t}\) is the standard Wiener process. The training of the diffusion model is achieved via denoising score matching (Karras et al., 2022):
\[\mathcal{L}=\mathbb{E}_{\mathbf{z}_{0}\sim p(\mathbf{z}_{0})}\mathbb{E}_{t\sim p(t)} \mathbb{E}_{\mathbf{\varepsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\|\mathbf{\epsilon}_{ \theta}(\mathbf{z}_{t},t)-\mathbf{\varepsilon}\|_{2}^{2},\ \ \text{where}\ \mathbf{z}_{t}=\mathbf{z}_{0}+\sigma(t)\mathbf{\varepsilon}, \tag{7}\]
where \(\mathbf{\epsilon}_{\theta}\) is a neural network (named the denoising function) that approximates the Gaussian noise from the perturbed data \(\mathbf{z}_{t}\) and the time \(t\). Then \(\nabla_{\mathbf{z}_{t}}\log p(\mathbf{z}_{t})=-\mathbf{\epsilon}_{\theta}(\mathbf{z}_{t},t)/ \sigma(t)\). After the model is trained, synthetic data can be obtained via the reverse process in Eq. 6. The detailed algorithm description of TabSyn is provided in Appendix A. Detailed derivations are in Appendix B.
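As a concrete sketch of Equation (7) with \(\sigma(t)=t\): the denoiser below is a plain MLP (Section 5 notes that TabSyn adopts an MLP denoising function), while the simple time conditioning, the uniform \(p(t)\), and \(\sigma_{\max}=80\) are simplifying assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """MLP noise predictor eps_theta(z_t, t); t is appended as an extra input."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))
    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, t], dim=-1))

def dsm_loss(eps_theta, z0, sigma_max=80.0):
    t = torch.rand(z0.shape[0], 1) * sigma_max   # t ~ p(t); uniform for simplicity
    eps = torch.randn_like(z0)
    z_t = z0 + t * eps                           # forward process, Eq. 5 with sigma(t)=t
    return ((eps_theta(z_t, t) - eps) ** 2).mean()

model = Denoiser(dim=32)
loss = dsm_loss(model, torch.randn(16, 32))      # z0: flattened VAE latents
loss.backward()
```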
Schedule of noise level \(\sigma(t)\). The noise level \(\sigma(t)\) defines the scale of the noise perturbing the data at different time steps and significantly affects the final differential-equation solution trajectories (Song et al., 2021; Karras et al., 2022). Following the recommendations in Karras et al. (2022), we set the noise level \(\sigma(t)=t\), which is linear w.r.t. time. We show in Proposition 1 that the linear noise-level schedule leads to the smallest approximation errors in the reverse process:
**Proposition 1**.: _Consider the reverse diffusion process in Equation (6) from \(\mathbf{z}_{t_{0}}\) to \(\mathbf{z}_{t_{a}}(t_{b}>t_{a})\), the numerical solution \(\hat{\mathbf{z}}_{t_{a}}\) has the smallest approximation error to \(\mathbf{z}_{t_{a}}\) when \(\sigma(t)=t\)._
See proof in Appendix C. A natural corollary of Proposition 1 is that a small approximation error allows us to increase the interval between two timesteps, thereby reducing the overall number of sampling steps and accelerating the sampling. In Section 4, we demonstrate that with this design, TabSyn can generate synthetic tabular data of high quality within less than \(20\) NFEs (number of function evaluations), which is much smaller than other tabular-data synthesis methods based on diffusion (Kim et al., 2023; Kotelnikov et al., 2023).
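Reusing the `Denoiser`/`model` from the previous sketch, the deterministic (probability-flow) counterpart of Equation (6) under \(\sigma(t)=t\) reduces to the Euler update below; the step grid and the \(\sigma_{\max}/\sigma_{\min}\) values are illustrative assumptions, and the stochastic term of Equation (6) is dropped for brevity.

```python
@torch.no_grad()
def sample(eps_theta, n, dim, num_steps=20, sigma_max=80.0, sigma_min=0.002):
    ts = torch.linspace(sigma_max, sigma_min, num_steps)  # noise levels, high -> low
    z = torch.randn(n, dim) * sigma_max                   # z_T ~ N(0, sigma_max^2 I)
    for i in range(num_steps - 1):
        t, t_next = ts[i], ts[i + 1]
        eps = eps_theta(z, t.expand(n, 1))                # predicted noise at level t
        z = z + (t_next - t) * eps                        # Euler step; dz/dt = eps when sigma(t)=t
    return z                                              # decode with the VAE afterwards

z0 = sample(model, n=16, dim=32)                          # fewer than 20 NFEs, as in the paper
```

Because the trajectories are nearly straight under the linear schedule, widening the step intervals introduces little error, which is why so few steps suffice.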
## 4 Benchmarking Synthetic Tabular Data Generation Algorithms
In this section, we conduct experiments to evaluate TabSyn against alternative methods from various data quality measurements. We deliver a unified and comprehensive comparison environment for tabular data synthesis methods and have released the code base1.
Footnote 1: [https://github.com/amazon-science/tabsyn](https://github.com/amazon-science/tabsyn)
### Experimental Setups
**Datasets**. We select six real-world tables containing both numerical and categorical column attributes: Adult, Default, Shoppers, Magic, Beijing, and News. Table 6 provides the overall statistics of these datasets, and detailed descriptions can be found in Appendix E.1.
**Baselines**. We compare the proposed TabSyn with seven existing synthetic tabular data generation methods. The first two are classical GAN and VAE models: CTGAN (Xu et al., 2019) and TVAE (Xu et al., 2019). Additionally, we evaluate five SOTA methods introduced recently: GOGGLE (Liu et al., 2023), a VAE-based method; GReaT (Borisov et al., 2023), a language-model-based method; and three diffusion-based methods: STaSy (Kim et al., 2023), TabDDPM (Kotelnikov et al., 2023), and CoDi (Lee et al., 2023). Notably, these approaches were nearly simultaneously introduced, limiting opportunities for extensive comparison. Our paper fills this gap by providing the first comprehensive evaluation of their performance in a standardized setting.
**Evaluation Methods**. We evaluate the quality of the synthetic data from three aspects: 1) **Low-order statistics** - _column-wise density estimation_ and _pair-wise column correlation_, estimating the density of every single column and the correlation between every column pair (Section 4.2), 2)
**High-order metrics** - \(\alpha\)_-precision_ and \(\beta\)_-recall_ scores (Alaa et al., 2022) that measure the overall fidelity and diversity of synthetic data (the results are deferred to Appendix F.2), and 3) Performance on **downstream tasks** - _machine learning efficiency_ (MLE) and _missing value imputation_. MLE is to compare the testing accuracy on real data when trained on synthetically generated tabular datasets. The performance on privacy protection is measured by MLE tasks that have been widely adopted in previous literature (Section 4.3.1). We also extend TabSyn for the missing value imputation task, which aims to fill in missing features/labels given partial column values.
**Implementation details**. The reported results are averaged over \(20\) randomly sampled synthetic data. The implementation details are in Appendix E.
### Estimating Low-order Statistics of Data Density
We start by evaluating the synthetic data's capacity to estimate the density of single columns and the correlation between pairs of columns.
**Metrics**. We employ the Kolmogorov-Smirnov Test (KST) for numerical columns and the Total Variation Distance (TVD) for categorical columns to quantify column-wise density estimation. For pair-wise column correlation, we use Pearson correlation for numerical columns and contingency similarity for categorical columns. The performance is measured by the difference between the correlations computed from real data and synthetic data. For the correlation between numerical and categorical columns, we first group numerical values into categorical ones via bucketing, then calculate the corresponding contingency similarity. Further details on these metrics are in Appendix E.3.
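The two column-wise error measures can be sketched with SciPy and NumPy as below; the bucketing step for mixed-type correlations is omitted, and the random arrays merely stand in for real and synthetic columns.

```python
import numpy as np
from scipy.stats import ks_2samp

def column_error(real, syn, categorical):
    """Column-wise density error: KS statistic (numerical) or TVD (categorical)."""
    if categorical:
        cats = np.union1d(real, syn)
        p = np.array([(real == c).mean() for c in cats])   # real frequencies
        q = np.array([(syn == c).mean() for c in cats])    # synthetic frequencies
        return 0.5 * np.abs(p - q).sum()                   # total variation distance
    return ks_2samp(real, syn).statistic                   # Kolmogorov-Smirnov statistic

rng = np.random.default_rng(0)
print(column_error(rng.normal(size=1000), rng.normal(size=1000), categorical=False))
print(column_error(rng.integers(0, 3, 1000), rng.integers(0, 3, 1000), categorical=True))
```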
Column-wise distribution density estimation. In Table 1, we note that TabSyn consistently outperforms baseline methods in the column-wise distribution density estimation task. On average, TabSyn surpasses the most competitive baselines by \(86.0\%\). While STaSy and TabDDPM perform well, STaSy is sub-optimal because it treats one-hot embeddings of categorical columns as continuous features.
\begin{table}
\begin{tabular}{l c c c c c|c c|c} \hline \hline
**Method** & **Adult** & **Default** & **Shoppers** & **Magic** & **Beijing** & **News** & **Average** & **Ranking** \\ \hline CTGAN & \(16.84_{\pm 0.03}\) & \(16.83_{\pm 0.04}\) & \(21.15_{\pm 0.10}\) & \(9.81_{\pm 0.08}\) & \(21.39_{\pm 0.05}\) & \(16.09_{\pm 0.02}\) & \(17.02\) & \(7\) \\ TVAE & \(14.22_{\pm 0.08}\) & \(10.17_{\pm 0.05}\) & \(24.51_{\pm 0.06}\) & \(8.25_{\pm 0.06}\) & \(19.16_{\pm 0.06}\) & \(16.62_{\pm 0.03}\) & \(15.49\) & \(5\) \\ GOGGLE\({}^{1}\) & \(16.97\) & \(17.02\) & \(22.33\) & \(1.90\) & \(16.93\) & \(25.32\) & \(16.74\) & \(6\) \\ GReaT\({}^{2}\) & \(12.12_{\pm 0.04}\) & \(19.94_{\pm 0.06}\) & \(14.51_{\pm 0.12}\) & \(16.16_{\pm 0.09}\) & \(8.25_{\pm 0.12}\) & - & \(14.20\) & \(3\) \\ STaSy & \(11.29_{\pm 0.06}\) & \(5.77_{\pm 0.06}\) & \(9.37_{\pm 0.06}\) & \(6.29_{\pm 0.13}\) & \(6.71_{\pm 0.03}\) & \(6.89_{\pm 0.03}\) & \(7.72\) & \(2\) \\ CoDi & \(21.38_{\pm 0.06}\) & \(15.77_{\pm 0.07}\) & \(31.84_{\pm 0.05}\) & \(11.56_{\pm 0.26}\) & \(16.94_{\pm 0.02}\) & \(32.27_{\pm 0.04}\) & \(21.63\) & \(8\) \\ TabDDPM\({}^{3}\) & \(17.54_{\pm 0.03}\) & \(1.57_{\pm 0.08}\) & \(17.22_{\pm 0.13}\) & \(10.03_{\pm 0.03}\) & \(10.30_{\pm 0.03}\) & \(78.75_{\pm 0.01}\) & \(14.52\) & \(4\) \\ \hline TabSyn & \(0.58_{\pm 0.06}\) & \(0.85_{\pm 0.04}\) & \(1.43_{\pm 0.24}\) & \(0.88_{\pm 0.09}\) & \(1.12_{\pm 0.05}\) & \(1.64_{\pm 0.04}\) & \(1.08\) & \(1\) \\ Improv. & \(66.9\%\downarrow\) & \(45.9\%\downarrow\) & \(47.4\%\downarrow\) & \(12.9\%\downarrow\) & \(13.8\%\downarrow\) & \(76.2\%\downarrow\) & \(86.0\%\downarrow\) & \\ \hline \hline \end{tabular}
* 1 GOGGLE fixes the random seed during sampling in the official codes, and we follow it for consistency.
* 2 GReaT cannot be applied on News because of the maximum length limit.
* 3 TabDDPM fails to generate meaningful content on the News dataset.
\end{table}
Table 1: Error rate (%) of **column-wise density estimation**. Bold Face represents the best score on each dataset. Lower values indicate more accurate estimation (superior results). TabSyn outperforms the best baseline model by \(86.0\%\) on average.
\begin{table}
\begin{tabular}{l c c c c c|c c|c} \hline \hline
**Method** & **Adult** & **Default** & **Shoppers** & **Magic** & **Beijing** & **News** & **Average** & **Ranking** \\ \hline CTGAN & \(20.23_{\pm 1.20}\) & \(26.95_{\pm 0.93}\) & \(13.08_{\pm 0.16}\) & \(7.00_{\pm 0.19}\) & \(22.95_{\pm 0.08}\) & \(5.37_{\pm 0.05}\) & \(15.93\) & \(5\) \\ TVAE & \(14.15_{\pm 0.88}\) & \(15.05_{\pm 0.95}\) & \(18.67_{\pm 0.38}\) & \(5.82_{\pm 0.49}\) & \(18.01_{\pm 0.08}\) & \(6.17_{\pm 0.09}\) & \(13.72\) & \(4\) \\ GOGGLE & \(45.29\) & \(21.94\) & \(23.90\) & \(9.47\) & \(45.90\) & \(23.19\) & \(28.28\) & \(7\) \\ GReaT & \(17.59_{\pm 0.22}\) & \(70.02_{\pm 0.12}\) & \(45.16_{\pm 0.18}\) & \(10.23_{\pm 0.40}\) & \(59.60_{\pm 0.55}\) & - & \(44.24\) & \(8\) \\ STaSy & \(14.51_{\pm 0.25}\) & \(5.96_{\pm 0.28}\) & \(8.49_{\pm 0.15}\) & \(6.61_{\pm 0.53}\) & \(8.00_{\pm 0.10}\) & \(3.07_{\pm 0.04}\) & \(7.77\) & \(3\) \\ CoDi & \(22.24_{\pm 0.08}\) & \(68.41_{\pm 0.05}\) & \(17.78_{\pm 0.11}\) & \(6.53_{\pm 0.26}\) & \(7.07_{\pm 0.15}\) & \(11.10_{\pm 0.01}\) & \(22.23\) & \(6\) \\ TabDDPM & \(3.01_{\pm 0.25}\) & \(4.89_{\pm 0.10}\) & \(6.61_{\pm 0.16}\) & \(1.70_{\pm 0.22}\) & \(2.71_{\pm 0.09}\) & \(13.16_{\pm 0.11}\) & \(5.34\) & \(2\) \\ \hline TabSyn & \(1.54_{\pm 0.27}\) & \(2.05_{\pm 0.12}\) & \(2.07_{\pm 0.21}\) & \(1.06_{\pm 0.31}\) & \(2.24_{\pm 0.28}\) & \(1.44_{\pm 0.03}\) & \(1.73\) & \(1\) \\ Improv. & \(48.8\%\downarrow\) & \(58.1\%\downarrow\) & \(68.7\%\downarrow\) & \(37.6\%\downarrow\) & \(17.3\%\downarrow\) & \(53.1\%\downarrow\) & \(67.6\%\downarrow\) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error rate (%) of **pair-wise column correlation** score. Bold Face represents the best score on each dataset. TabSyn outperforms the best baseline model by \(67.6\%\) on average.
Additionally, TabDDPM exhibits unstable performance across datasets, failing to generate meaningful content on the News dataset despite a standard training process.
Pair-wise column correlations.Table 2 displays the results of pair-wise column correlations. TabSyn outperforms the best baselines by an average of \(67.6\%\). Notably, the performance of GReaT is significantly poorer in this task than in the column-wise task. This indicates the limitations of autoregressive language models in density estimation, particularly in capturing the joint probability distributions between columns.
### Performance on Downstream Tasks
#### 4.3.1 Machine Learning Efficiency
Real-world tabular data, such as financial data and health records, often contain some of the most sensitive and personally identifiable attributes of customers/patients. Therefore, their usage is strictly restricted (Assefa et al., 2021; Hernandez et al., 2022). Tabular data generation models address this issue by synthesizing synthetic data that conforms to the characteristics of real data but cannot be traced back to the actual data. Machine Learning Efficiency (MLE) is a crucial application for protecting data privacy. It measures whether a generative model can produce synthetic data that conforms to the characteristics of real data by comparing the performance of a machine learning model trained on real data and synthetic data. Following established settings (Kotelnikov et al., 2023; Kim et al., 2023; Lee et al., 2023), we first split a real table into a real training and a real testing set. The generative models are learned on the real training set, from which a synthetic set of equivalent size is sampled. This synthetic data is then used to train a classification/regression model (XGBoost Classifier and XGBoost Regressor (Chen & Guestrin, 2016)), which will be evaluated using the real testing set. The performance of MLE is measured by the AUC score for classification tasks and RMSE for regression tasks. The detailed settings of the MLE evaluations are in Appendix E.4.
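The protocol, in sketch form: fit XGBoost on the synthetic table and score it on the real test split. The random arrays below merely stand in for actual synthetic and real splits.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_syn, y_syn = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)    # synthetic train set
X_test, y_test = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)  # real held-out test set

clf = XGBClassifier(n_estimators=100).fit(X_syn, y_syn)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"MLE AUC on real test data: {auc:.3f}")
```

For regression targets, the same protocol applies with an XGBoost regressor and RMSE in place of AUC.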
In Table 3, we demonstrate that TabSyn consistently outperforms all the baseline methods. The performance gap between methods is smaller compared to column-wise density and pair-wise column correlation estimation tasks (Tables 1 and 2). This suggests that some columns may not significantly impact the classification/regression tasks, allowing methods with lower performance in previous tasks to show competitive results in MLE (e.g., GReaT on Default dataset). This underscores the need for a comprehensive evaluation approach beyond just MLE metrics. As shown above, we have incorporated low-order and high-order statistics for a more robust assessment.
#### 4.3.2 Missing Value Imputation
One advantage of the diffusion model is that a well-trained unconditional model can be directly used for data imputation (e.g., image inpainting (Song et al., 2021b; Lugmayr et al., 2022)) without additional training.
\begin{table}
\begin{tabular}{l c c c c c c|c c} \hline \hline \multirow{2}{*}{Methods} & **Adult** & **Default** & **Shoppers** & **Magic** & **Beijing** & **News** & **Average Gap** & **Ranking** \\ \cline{2-9} & AUC \(\uparrow\) & AUC \(\uparrow\) & AUC \(\uparrow\) & AUC \(\uparrow\) & RMSE \(\downarrow\) & RMSE\({}^{*}\) \(\downarrow\) & \(\%\) & - \\ \hline Real & \(.927_{\pm 0.00}\) & \(.770_{\pm 0.005}\) & \(.926_{\pm 0.001}\) & \(.946_{\pm 0.001}\) & \(.423_{\pm 0.003}\) & \(.842_{\pm 0.002}\) & \(0\%\) & - \\ \hline CTGAN & \(.886_{\pm 0.002}\) & \(.696_{\pm 0.005}\) & \(.875_{\pm 0.009}\) & \(.855_{\pm 0.008}\) & \(.902_{\pm 0.019}\) & \(.880_{\pm 0.016}\) & \(24.5\%\) & \(6\) \\ TVAE & \(.878_{\pm 0.004}\) & \(.724_{\pm 0.005}\) & \(.871_{\pm 0.006}\) & \(.887_{\pm 0.003}\) & \(.770_{\pm 0.011}\) & \(1.01_{\pm 0.016}\) & \(20.9\%\) & \(5\) \\ GOGGLE & \(.778_{\pm 0.12}\) & \(.584_{\pm 0.005}\) & \(.658_{\pm 0.002}\) & \(.654_{\pm 0.004}\) & \(10_{\pm 0.025}\) & \(.877_{\pm 0.002}\) & \(43.6\%\) & \(8\) \\ GReaT & \(.913_{\pm 0.003}\) & \(.756_{\pm 0.006}\) & \(.902_{\pm 0.005}\) & \(.888_{\pm 0.008}\) & \(.653_{\pm 0.013}\) & - & \(13.3\%\) & \(4\) \\ STaSy & \(.906_{\pm 0.001}\) & \(.752_{\pm 0.006}\) & \(.914_{\pm 0.005}\) & \(.934_{\pm 0.003}\) & \(.656_{\pm 0.014}\) & \(.871_{\pm 0.002}\) & \(10.9\%\) & \(3\) \\ CoDi & \(.871_{\pm 0.006}\) & \(.852_{\pm 0.006}\) & \(.885_{\pm 0.006}\) & \(.932_{\pm 0.003}\) & \(.818_{\pm 0.012}\) & \(1.21_{\pm 0.005}\) & \(30.5\%\) & \(7\) \\ TabDDPM & \(.907_{\pm 0.001}\) & \(.758_{\pm 0.004}\) & \(.918_{\pm 0.005}\) & \(.935_{\pm 0.003}\) & \(.592_{\pm 0.011}\) & \(.486_{\pm 0.004}\) & \(91.4\%\) & \(2\) \\ \hline TabSyn & \(.915_{\pm 0.002}\) & \(.764_{\pm 0.004}\) & \(.920_{\pm 0.005}\) & \(.938_{\pm 0.002}\) & \(.582_{\pm 0.008}\) & \(.861_{\pm 0.027}\) & \(7.23\%\) & \(1\) \\ \hline \hline \end{tabular}
* TabDDPM collapses on News, leading to an extremely high error on this dataset.
\end{table}
Table 3: Performance of **machine learning efficiency**: AUC (classification) and RMSE (regression) of XGBoost trained on synthetic data and evaluated on the real test set.
This paper explores adapting TabSyn for missing value imputation, a crucial task in real-world tabular data. Due to space limitations, the detailed algorithms for missing value imputation and the results are deferred to Appendix F.3.
### Ablation Studies
We conduct ablation experiments to study if each component is crucial to the success of TabSyn.
The effect of the adaptive \(\beta\)-VAE. We assess the effectiveness of scheduling the weighting coefficient \(\beta\) in the VAE model. Figure 3 presents the trends of the reconstruction loss and the KL-divergence loss with the scheduled \(\beta\) and constant \(\beta\) values (from \(10^{-1}\) to \(10^{-5}\)) across 4,000 training epochs. Notably, a large \(\beta\) value leads to subpar reconstruction, while a small \(\beta\) value results in a large divergence between the embedding distribution and the standard Gaussian, making the balance hard to achieve. In contrast, by dynamically scheduling \(\beta\) during training (\(\beta_{\max}=0.01,\beta_{\min}=10^{-5},\lambda=0.7\)), we not only prevent excessive KL divergence but also enhance reconstruction quality. Table 4 further evaluates the embeddings learned with various \(\beta\) values of the VAE model via synthetic data quality (single-column density and pair-wise column correlation estimation tasks). This demonstrates the superior performance of our proposed scheduled-\(\beta\) approach to training the VAE model.
The effect of linear noise levels. We evaluate the effectiveness of using linear noise levels, \(\sigma(t)=t\), in the diffusion process. As Section 3.3 outlines, linear noises lead to linear trajectories and faster sampling speed. Consequently, we compare TabSyn and two other diffusion models (STaSy and TabDDPM) in terms of the single-column density and pair-wise column correlation estimation errors relative to the number of function evaluations (NFEs), i.e., denoising steps to generate the real data. As continuous-time diffusion models, the proposed TabSyn and STaSy are flexible in choosing NFEs. For TabDDPM, we use the DDIM sampler (Song et al., 2021) to adjust NFEs. Figure 4 shows that TabSyn not only significantly improves the sampling speed but also consistently yields better performance (with fewer than 20 NFEs for optimal results). In contrast, STaSy requires 50-200 NFEs, varying by datasets, and achieves sub-optimal performance. TabDDPM achieves competitive performance with 1,000 NFEs but significantly drops in performance when reducing NFEs.
Comparing different encoding/diffusion methods. We assess the effectiveness of learning the diffusion model in the latent space learned by the VAE by creating two TabSyn variants: 1) TabSyn-OneHot: replacing the VAE with one-hot encodings of categorical variables and 2) TabSyn-DDPM: substituting the diffusion process in Equation (5) with DDPM as used in TabDDPM. Results in Table 5 demonstrate: 1) One-hot encodings for categorical variables plus continuous diffusion models lead to the worst performance, indicating that it is not appropriate to treat categorical columns simply as continuous features; 2) TabSyn-DDPM in the latent space outperforms TabDDPM in the data space, highlighting the benefit of learning high-quality latent embeddings for improved diffusion modeling; 3) TabSyn surpasses TabSyn-DDPM, indicating the advantage of employing tailored diffusion models in the continuous latent space for better data distribution learning.
### Visualization
In Figure 5, we compare column density across eight columns from four datasets (one numerical and one categorical per dataset). TabDDPM matches TabSyn's accuracy on numerical columns but falls short on categorical ones. Figure 6 displays the divergence heatmap between estimated pair-wise column correlations and real correlations. TabSyn gives the most accurate correlation estimation, while other methods exhibit suboptimal performance. These results justify that employing generative modeling in latent space enhances the learning of categorical features and joint column distributions.
## 5 Conclusions and Future Directions
In this paper, we have proposed TabSyn for synthetic tabular data generation. The TabSyn framework leverages a VAE to map tabular data into a latent space, followed by a diffusion-based generative model that learns the latent distribution. This approach presents the dual advantages of accommodating numerical and categorical features within a unified latent space, thus facilitating a more comprehensive understanding of their interrelationships, and enabling the utilization of advanced generative models in a continuous embedding space. To address the associated challenges, we introduced tailored model designs and training methods for TabSyn, resulting in a highly stable generative model. In addition, our evaluation rectifies a deficiency in prior research by employing a diverse set of evaluation metrics to comprehensively compare the proposed method with existing approaches, showcasing the remarkable quality and fidelity of the generated samples in capturing the original data distribution.
For future research, one potential direction is exploring the application of TabSyn in other scenarios, such as conditional generation or controllable generation. In addition, TabSyn adopts an MLP model as the denoising function, which is simple and already demonstrates competitive performance. It would be interesting to study if more advanced designs of the denoising function can lead to better generation quality.
Figure 5: Visualization of the single-column distribution densities of synthetic data (from STaSy, TabDDPM, and TabSyn) vs. the real data. Upper: numerical columns; Lower: categorical columns. All methods are competitive on the numerical columns, while TabSyn excels at estimating the categorical column distributions. |
2306.14687 | GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration | Deep learning-based deformable registration methods have been widely
investigated in diverse medical applications. Learning-based deformable
registration relies on weighted objective functions trading off registration
accuracy and smoothness of the deformation field. Therefore, they inevitably
require tuning the hyperparameter for optimal registration performance. Tuning
the hyperparameters is highly computationally expensive and introduces
undesired dependencies on domain knowledge. In this study, we construct a
registration model based on the gradient surgery mechanism, named GSMorph, to
achieve a hyperparameter-free balance on multiple losses. In GSMorph, we
reformulate the optimization procedure by projecting the gradient of similarity
loss orthogonally to the plane associated with the smoothness constraint,
rather than additionally introducing a hyperparameter to balance these two
competing terms. Furthermore, our method is model-agnostic and can be merged
into any deep registration network without introducing extra parameters or
slowing down inference. In this study, we compared our method with
state-of-the-art (SOTA) deformable registration approaches over two publicly
available cardiac MRI datasets. GSMorph proves superior to five SOTA
learning-based registration models and two conventional registration
techniques, SyN and Demons, on both registration accuracy and smoothness. | Haoran Dou, Ning Bi, Luyi Han, Yuhao Huang, Ritse Mann, Xin Yang, Dong Ni, Nishant Ravikumar, Alejandro F. Frangi, Yunzhi Huang | 2023-06-26T13:32:09Z | http://arxiv.org/abs/2306.14687v2 | # GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration
###### Abstract
Deep learning-based deformable registration methods have been widely investigated in diverse medical applications. Learning-based deformable registration relies on weighted objective functions trading off registration accuracy and smoothness of the deformation field. Therefore, they inevitably require tuning the hyperparameter for optimal registration performance. Tuning the hyperparameters is highly computationally expensive and introduces undesired dependencies on domain knowledge. In this study, we construct a registration model based on the gradient surgery mechanism, named GSMorph, to achieve a hyperparameter-free balance on multiple losses. In GSMorph, we reformulate the optimization procedure by projecting the gradient of similarity loss orthogonally to the plane associated with the smoothness constraint, rather than additionally introducing a hyperparameter to balance these two competing terms. Furthermore, our method is model-agnostic and can be merged into any deep registration network without introducing extra parameters or slowing down inference. In this study, we compared our method with state-of-the-art (SOTA) deformable registration approaches over two publicly available cardiac MRI datasets. GSMorph proves superior
to five SOTA learning-based registration models and two conventional registration techniques, SyN and Demons, on both registration accuracy and smoothness.
Keywords: Medical image registration, Gradient surgery, Regularization.
## 1 Introduction
Image registration is fundamental to many medical image analysis applications, e.g., motion tracking, atlas construction, and disease diagnosis [5]. Conventional registration methods usually require computationally expensive iterative optimization, making them inefficient in clinical practice [1, 19]. Deep learning has recently been widely exploited in the registration domain due to its superior representation extraction capability and fast inference speed [2, 7]. Deep-learning-based registration (DLR) formulates registration as a network learning process minimizing a composite objective function comprising a similarity loss, which penalizes the difference in appearance between the image pair, and a regularization term, which ensures the smoothness of the deformation field. Typically, a hyperparameter is introduced into the objective function to balance registration accuracy against smoothness of the deformation field. However, hyperparameter tuning is labor-intensive, time-consuming, and _ad-hoc_; searching for the optimal parameter setting requires extensive ablation studies, and hence training tens of models and establishing a reasonable parameter search space. Therefore, alleviating, or even circumventing, hyperparameter search to accelerate the development and deployment of DLR models remains challenging.
Recent advances [6, 11, 13] in DLR have primarily focused on network architecture design to boost registration performance. A few studies [9, 16] have investigated avoiding hyperparameter search via hypernetworks [8] and conditional learning [10]. Hoopes _et al._[9] leveraged a hypernetwork that takes the hyperparameter as input and generates the weights of the DLR network. Although effective, it introduces a large number of additional parameters to the basic DLR network, making the framework computationally expensive. In parallel, Mok _et al._[16] proposed to learn the effect of the hyperparameter and condition it on the feature statistics (usually referred to as _style_ in computer vision [10]) to manipulate the smoothness of the deformation field in the inference phase. Both methods avoid hyperparameter tuning while training the DLR model. However, they still require a reasonable sampling space and sampling strategy for the hyperparameter, which can be highly experience-dependent.
Gradient surgery (GS) projects conflicting gradients of different losses during model optimization to mitigate gradient interference. This has proven useful in multi-task learning [20] and domain generalization [15]. Motivated by these studies, we propose utilizing GS to moderate the discordance between the similarity loss and the regularization loss. The proposed method further avoids searching for a weight to balance the losses when training the DLR model.
* We propose GSMorph, a gradient-surgery-based DLR model. Our method can circumvent tuning the hyperparameter in composite loss function with a gradient-level reformulation to reach the trade-off between registration accuracy and smoothness of the deformation field.
* Existing GS approaches have operated on the parameters' gradients either independently or over the entire network. We propose a layer-wise GS that groups the parameters for optimization to ensure the flexibility and robustness of the optimization process.
* Our method is model-agnostic and can be integrated into any DLR network without extra parameters or loss of inference speed.
## 2 Methodology
Deformable image registration estimates the non-linear correspondence field \(\phi\) between the moving, \(M\), and fixed, \(F\), images (Fig. 1). This procedure is mathematically formulated as \(\phi=f_{\theta}(F,M)\). For learning-based registration methods, \(f_{\theta}\) (usually a neural network) takes the fixed and moving image pair as input and outputs the deformation field via the optimal parameters \(\theta\). Typically, \(\theta\) can be updated using standard mini-batch gradient descent as follows:
\[\theta:=\theta-\alpha\nabla_{\theta}\left(\mathcal{L}_{sim}(\theta;F,M\circ \phi)+\lambda\mathcal{L}_{Reg}(\theta;\phi)\right) \tag{1}\]
where \(\alpha\) is the learning rate; \(\mathcal{L}_{sim}\) is the similarity loss penalizing differences in the appearance of the moving and fixed images (e.g., mean square error, mutual information, or local negative cross-correlation); \(\mathcal{L}_{reg}\) is the regularization loss encouraging the smoothness of the deformation field (it can be computed from the gradient of the deformation field); and \(\lambda\) is the hyperparameter balancing the trade-off between \(\mathcal{L}_{sim}\) and \(\mathcal{L}_{reg}\), so as to achieve the desired registration accuracy while preserving the smoothness of the deformation field. However, hyperparameter tuning is time-consuming and highly experience-dependent, making it difficult to reach the optimal solution.
Figure 1: Schematic illustration of our proposed GSMorph. GS modifies the gradients computed by similarity loss \(\mathcal{L}_{sim}\) and regularization loss \(\mathcal{L}_{reg}\), then updates the model’s parameters \(\theta\).
Looking into the optimization procedure in Eq. 1, registration accuracy and spatial smoothness are potentially conflicting objectives: the two constraints may pull \(\theta\) in different directions and with different strengths during gradient descent. Based on this, we provide a geometric view of the gradient updates for \(\theta\) using the _gradient surgery_ technique. The conflict between the two competing constraints can be resolved geometrically by projecting one gradient orthogonally to the other. Owing to this orthogonality, updating the parameters with only the (projected) similarity gradient implicitly accounts for the regularization term. In this way, we avoid tuning the hyperparameter \(\lambda\) to optimize \(\theta\), and Eq. 1 can be rewritten in a hyperparameter-free form:
\[\theta:=\theta-\alpha\Phi(\nabla_{\theta}\mathcal{L}_{sim}(\theta;F,M\circ \phi)) \tag{2}\]
where \(\Phi(\cdot)\) denotes the proposed GS operation.
### Layer-wise Gradient Surgery
Figure 2 illustrates the two gradient scenarios when optimizing the DLR network via vanilla gradient descent or gradient surgery. We define the gradient of the similarity loss, \(g_{sim}\), and that of the regularization loss, \(g_{reg}\), as conflicting when the angle between \(g_{sim}\) and \(g_{reg}\) is obtuse, viz. \(\langle g_{sim},g_{reg}\rangle<0\). In this study, we propose updating the network parameters with the original \(g_{sim}\) alone when \(g_{sim}\) and \(g_{reg}\) are non-conflicting, meaning that \(g_{sim}\) has no component opposing the direction of \(g_{reg}\). Consequently, optimizing with \(g_{sim}\) alone in the non-conflicting scenario inherently preserves the spatial smoothness of the deformations.
Figure 2: Visualization of vanilla gradient descent and gradient surgery for non-conflicting and conflicting gradients. Regarding vanilla gradient descent, the gradient, \(g\), is computed based on the average of \(g_{sim}\) and \(g_{reg}\). Our GS-based approach projects the \(g_{sim}\) onto the normal vector of \(g_{reg}\) to prevent disagreements between the similarity loss and regularization loss. On the other hand, we only update the \(g_{sim}\) in non-conflicting scenarios.
Conversely, as shown in Fig. 2, conflicting gradients are the dominant cause of non-smooth deformations. Hence, deconflicting the gradients during the optimization of the DLR network, to ensure high registration accuracy as well as smooth deformation, is the primary goal of our study. Following a simple and intuitive procedure, we project \(g_{sim}\) onto the normal plane of \(g_{reg}\), so that the projected similarity gradient \(g\) and \(g_{reg}\) are non-conflicting along each gradient's direction.
Existing studies [15, 20] performed GS on the gradients of either independent parameters or the entire network. Despite their effectiveness, these can be either unstable or inflexible. Considering that a neural network usually extracts features through the collaboration of the parameter groups in its convolutional layers, we introduce a layer-wise GS to ensure both stability and flexibility. The parameter updating rule is detailed in Algorithm 1. Specifically, in each gradient updating iteration, we first compute the gradients of the two losses for the parameter group in each layer separately. Then, the conflicting relationship between the two gradients is determined by their inner product. If the two gradients are non-conflicting, the gradient used to update the corresponding parameter group is simply the original gradient of the similarity loss; otherwise, the gradient is the similarity gradient projected orthogonally to the regularization gradient, computed as \(g_{sim}^{i}-\frac{\langle g_{sim}^{i},g_{reg}^{i}\rangle}{\|g_{reg}^{i}\|^{2}} g_{reg}^{i}\). After performing GS on all layer-wise parameter groups in the network, the final gradients are used to update the model's parameters.
```
0: Parameters \(\theta_{i}\) in the \(i\)th layer of the network; number of layers \(N\).
1: \(g_{sim}\leftarrow\nabla_{\theta}\mathcal{L}_{sim}\)
2: \(g_{reg}\leftarrow\nabla_{\theta}\mathcal{L}_{reg}\)
3: for \(i=1\to N\) do
4:   if \(\langle g_{sim}^{i},g_{reg}^{i}\rangle>0\) then
5:     \(g_{i}=g_{sim}^{i}\)
6:   else
7:     \(g_{i}=g_{sim}^{i}-\frac{\langle g_{sim}^{i},g_{reg}^{i}\rangle}{\|g_{reg}^{i}\|^{2}}g_{reg}^{i}\)
8:   end if
9:   \(\Delta\theta_{i}=g_{i}\)
10: end for
11: Update \(\theta\)
```
**Algorithm 1** Gradient surgery
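For concreteness, the following is a minimal PyTorch sketch of one GSMorph update (our own simplification): each parameter tensor is treated as one gradient group, which matches the layer-wise grouping up to weight/bias granularity, and `loss_sim`/`loss_reg` stand for the two loss terms.

```python
import torch

def gsmorph_step(model, loss_sim, loss_reg, optimizer):
    # Layer-wise gradient surgery (Algorithm 1): keep g_sim when the two
    # gradients agree, otherwise project g_sim onto the normal plane of g_reg.
    params = [p for p in model.parameters() if p.requires_grad]
    g_sim = torch.autograd.grad(loss_sim, params, retain_graph=True)
    g_reg = torch.autograd.grad(loss_reg, params, allow_unused=True)
    optimizer.zero_grad()
    for p, gs, gr in zip(params, g_sim, g_reg):
        if gr is None:                       # group untouched by the regularizer
            p.grad = gs
            continue
        dot = torch.dot(gs.flatten(), gr.flatten())
        if dot > 0:                          # non-conflicting: use g_sim as-is
            p.grad = gs
        else:                                # conflicting: orthogonal projection
            p.grad = gs - (dot / gr.pow(2).sum()) * gr
    optimizer.step()
```

Assigning the surgically modified gradients to `p.grad` lets any standard optimizer consume them unchanged, which is what makes the scheme model-agnostic.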
### Network Architecture
Our network architecture (see Fig. 1) is similar to VoxelMorph [7], comprising a plain U-Net [18] and a spatial transformer network (STN) [12]. The U-Net takes the moving and fixed image pair as input and outputs the deformation field, which is used to warp the moving image via the STN. The U-Net consists of an encoder and a decoder with skip connections, which forward the features from each layer in the encoder to the corresponding layer in the decoder by concatenation to enhance feature aggregation and prevent vanishing gradients. The numbers of feature maps in the encoder are 16, 32, 64, 128, and 256, increasing as the spatial size shrinks, and vice versa in the decoder. Each convolutional block in the encoder and decoder has two sequential convolutions with a kernel size of 3, each followed by batch normalization and a leaky rectified linear unit.
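To illustrate the STN warping step, here is a minimal 2D sketch using `grid_sample`; the convention that `flow` holds pixel displacements \((u_{x},u_{y})\) is our assumption, not necessarily that of the released code:

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Resample `moving` (N,1,H,W) at locations x + u(x), where `flow`
    # (N,2,H,W) holds pixel displacements (u_x, u_y).
    N, _, H, W = moving.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=moving.device, dtype=moving.dtype),
        torch.arange(W, device=moving.device, dtype=moving.dtype),
        indexing="ij",
    )
    x_new = xs + flow[:, 0]  # sampling locations in pixel coordinates
    y_new = ys + flow[:, 1]
    # grid_sample expects coordinates normalized to [-1, 1], ordered (x, y)
    grid = torch.stack(
        (2 * x_new / (W - 1) - 1, 2 * y_new / (H - 1) - 1), dim=-1
    )
    return F.grid_sample(moving, grid, align_corners=True)
```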
## 3 Experiments and Results
### Datasets and Implementations
#### 3.1.1 Datasets
In this study, we used two public cardiac cine-MRI datasets for investigation and comparison: ACDC [3] and M&M [4]. ACDC and M&M contain 100 and 249 subjects, respectively. We split each dataset into 75%, 5%, and 20% for training, validation, and testing. We selected the image from the cine-MRI cardiac sequence at the End Systole (ES) time point of the cardiac cycle as the moving image, and that at the End Diastole (ED) as the fixed one. All images were cropped to a size of 128\(\times\)128 centered on the heart, and their intensities were normalized to the range [0, 1] before being input to the model.
#### 3.1.2 Implementation details
We implemented our model in PyTorch [17], using a standard PC with an NVIDIA GTX 2080ti GPU. We trained the network using the Adam optimizer [14] with a learning rate of 5e-3 and a batch size of 32 for 500 epochs. We also implemented and trained the alternative methods for comparison using the same data and similar optimization hyperparameters. Our source code is available at [https://github.com/wulalago/GSMorph](https://github.com/wulalago/GSMorph).
### Alternative Methods and Evaluation Criteria
To demonstrate the advantages of our proposed method in medical image registration, we compared it with two conventional deformable registration methods, i.e., **Demons** [19] and **SyN** [1], and a widely-used DLR model, **VoxelMorph** [7]. These methods usually require laborious hyperparameter tuning. Additionally, we report the results of VoxelMorph trained with different \(\lambda\) (i.e., 0.1, 0.01, and 0.001, denoted as **VoxelMorph-l**, **VoxelMorph-m**, and **VoxelMorph-s**). Meanwhile, we compared our approach to an alternative DLR model based on hyperparameter learning, i.e., **HyperMorph** [9]; this method only requires additional validations to search for the optimal hyperparameter rather than tuning it from scratch. Finally, we reformulated two variations of GS based on our concept for further comparison. Specifically, **GS-Agr** [15] treats the gradient of each parameter independently: it updates a parameter with the gradient of the similarity loss in the non-conflicting scenario, and with a random gradient sampled from a Gaussian distribution when conflicting. **GS-PCGrad** [20] uses the same GS strategy as ours but applies it to the parameters of the entire network as a whole. **Initial** denotes the results without any deformation.
In this study, we used six criteria to evaluate the efficacy and efficiency of the investigated methods: the Dice score (Dice) and 95% Hausdorff distance (HD95) to validate the registration accuracy on the regions of interest; the mean square error (MSE) to evaluate the pixel-level appearance difference between the moved and fixed image pairs; the percentage of pixels with negative Jacobian determinant (NJD) values to compare the smoothness and diffeomorphism of the deformation fields; and the number of parameters (Params) of the neural network and the inference speed (Speed) to investigate efficiency.
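Among these, NJD is the least standard metric; a minimal NumPy sketch of how it can be computed for a 2D displacement field is given below (the finite-difference scheme is our assumption):

```python
import numpy as np

def njd_percent(flow):
    # NJD: percentage of pixels whose deformation x -> x + u(x) folds,
    # i.e. det(I + grad u) < 0. `flow` is a (2, H, W) displacement field.
    du_x_dy, du_x_dx = np.gradient(flow[0])  # derivatives of u_x along y, x
    du_y_dy, du_y_dx = np.gradient(flow[1])  # derivatives of u_y along y, x
    det = (1 + du_x_dx) * (1 + du_y_dy) - du_x_dy * du_y_dx
    return 100.0 * float(np.mean(det < 0))
```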
### Results
As summarized in Table 1, our method obtains the best MSE on the ACDC dataset and the best Dice on the M&M dataset, while achieving performance comparable to the tuned VoxelMorph on the other metrics. The Dice and HD95 reported in Table 1 were averaged over three anatomical regions of interest in the heart, i.e., the Left Ventricle, Myocardium, and Right Ventricle (LV, Myo, and RV). The proposed model achieved superior registration accuracy and spatial regularization with faster inference than the two conventional registration methods. We also observed that our approach attains higher registration performance than HyperMorph on both datasets. Regarding the GS-based methods, GS-Agr collapsed entirely, as the conflicting gradients, which account for the majority, are replaced by random noise. GS-PCGrad only yielded inadequate registration performance with a tendency toward over-regularization. The comparison among the GS-based methods shows the flexibility and robustness of our approach.
**ACDC**

| Method | Dice (%) | HD95 (mm) | MSE (10⁻²) | NJD (%) |
| --- | --- | --- | --- | --- |
| Initial | 61.81±8.68 | 4.40±1.33 | 1.58±0.52 | – |
| Demons | 85.38±3.52 | 1.67±0.75 | 0.46±0.21 | 1.31±0.59 |
| SyN | 79.28±8.23 | 2.24±1.28 | 0.65±0.21 | 0.30±0.27 |
| VoxelMorph-s | 86.69±2.17 | 1.30±0.24 | 0.39±0.14 | 2.01±0.96 |
| VoxelMorph-m | **87.47±2.21** | **1.29±0.30** | 0.42±0.15 | 0.67±0.48 |
| VoxelMorph-l | 82.12±4.30 | 1.87±0.64 | 0.59±0.18 | **0.10±0.14** |
| HyperMorph | 83.44±3.55 | 1.75±0.64 | 0.47±0.20 | 1.60±0.86 |
| GS-Agr | 63.40±9.15 | 4.20±1.35 | 1.33±0.43 | 0 |
| GS-PCGrad | 84.59±3.53 | 1.62±0.53 | 0.51±0.16 | 0.11±0.17 |
| GSMorph | 87.45±2.27 | 1.34±0.40 | **0.31±0.11** | 0.87±0.52 |

**M&M**

| Method | Dice (%) | HD95 (mm) | MSE (10⁻²) | NJD (%) |
| --- | --- | --- | --- | --- |
| Initial | 61.03±10.16 | 4.79±1.82 | 1.90±1.08 | – |
| Demons | 75.66±10.30 | 17.79±6.23 | 0.71±0.61 | 1.84±1.19 |
| SyN | 81.97±9.36 | **2.45±2.04** | 0.84±0.12 | 0.49±0.47 |
| VoxelMorph-s | 77.12±9.36 | 3.43±2.18 | **0.42±0.29** | 3.45±2.33 |
| VoxelMorph-m | 79.93±8.57 | 2.91±1.98 | 0.48±0.32 | 1.31±1.10 |
| VoxelMorph-l | 77.18±8.69 | 2.81±1.60 | 0.74±0.43 | **0.16±0.22** |
| HyperMorph | 77.21±8.45 | 3.28±1.99 | 0.59±0.37 | 2.50±1.22 |
| GS-Agr | 63.41±9.85 | 4.50±1.77 | 1.55±0.86 | <0.001 |
| GS-PCGrad | 80.67±8.18 | 2.48±1.67 | 0.59±0.36 | 0.41±0.44 |
| GSMorph | **82.26±6.59** | 2.66±1.93 | 0.49±0.27 | 0.98±0.84 |

Table 1: Quantitative comparison of the investigated methods on the ACDC and M&M test sets (best values in bold).
Figure 3 illustrates sample cases of the warped images and the corresponding deformation fields from the compared methods. It can be observed that our method obtains the moved images most similar to the fixed ones. VoxelMorph achieves comparable results but still requires time-consuming hyperparameter tuning. Overall, the results in Table 1 and Fig. 3 indicate that our method performed the best among all the implemented and examined techniques, showing its effectiveness in balancing the trade-off between registration accuracy and smoothness of the deformations.
In Table 2, we also report the number of parameters and the inference speed. DLR methods are generally faster than conventional ones. As our proposed approach only modifies the optimization procedure of the backbone network, it maintains the original inference speed and number of parameters. Conversely, HyperMorph introduces a large number of extra parameters and slower inference, as it adopts a secondary network to generate the conditions or weights of the main network architecture.
| Methods | Demons | SyN | VoxelMorph | HyperMorph | GSMorph |
| --- | --- | --- | --- | --- | --- |
| Params | – | – | 1.96M | 126M | 1.96M |
| Speed | 7.55±1.79 | 16.59±5.48 | 2.29±0.83 | 2.96±1.09 | 2.29±0.83 |

Table 2: Number of parameters and inference speed of the investigated methods on the ACDC and M&M test sets.
Figure 3: Visual comparison of the registration results of the investigated methods for two representative test cases in ACDC and M&M datasets. The top rows are the fixed images and moved images from different methods; the bottom rows are the moving images and deformation fields. (We encourage you to zoom in for better visualization)
## 4 Conclusion
This work presents a gradient-surgery-based registration framework for medical images. To the best of our knowledge, this is the first study to employ gradient surgery to refine the optimization procedure for learning deformation fields. In our GSMorph, the gradients from the similarity constraint are projected onto the plane orthogonal to those from the regularization term. In this way, updating the parameters with the projected similarity gradient jointly accounts for the similarity and regularity constraints: no additional regularization loss is required during network optimization, and no hyperparameter is needed to explicitly trade off registration accuracy against spatial smoothness. Our model outperformed the conventional registration methods and the alternative DLR models. Finally, the proposed method is model-agnostic and can be integrated into any DLR network without introducing extra parameters or compromising the inference speed. We believe GSMorph will facilitate the development and deployment of DLR models and alleviate the influence of hyperparameters on performance.
## 5 Acknowledgement
This work was supported by the National Natural Science Foundation of China (62101365, 62171290, 62101343), Shenzhen-Hong Kong Joint Research Program (SGDX20201103095613036), Shenzhen Science and Technology Innovations Committee (20200812143441001), the startup foundation of Nanjing University of Information Science and Technology, the Ph.D. foundation for Innovation and Entrepreneurship in Jiangsu Province, the Royal Academy of Engineering (INSILEX CiET1819/19), Engineering and Physical Sciences Research Council UKRI Frontier Research Guarantee Programmes (INSILICO, EP/Y030494/1), and the Royal Society Exchange Programme CROSSLINK IES\NSFC\201380.
|
2301.00655 | On Some Characterization of GS-exponential kind of Convex Functions | This manuscript introduces the idea of GS-exponential kind of convex
functions and some of their algebraic features, and we introduce a new class
GS-exponential kind of convex sets. In addition, we describe certain
fundamental GS-exponential kind of convex function with characteristics in both
the general and the differentiable cases. We establish the sufficient
conditions of optimality and offer the proof for unconstrained as well as
inequality-constrained programming while considering the assumption of
GS-exponential kind of convexity. | Ehtesham Akhter, Musavvir Ali | 2023-01-02T13:16:42Z | http://arxiv.org/abs/2301.00655v1 | # On Some Characterization of \(Gs\)-exponential kind of Convex Functions
###### Abstract
This manuscript introduces the idea of \(GS\)-exponential kind of convex functions and some of their algebraic features, and we introduce a new class \(GS\)-exponential kind of convex sets. In addition, we describe certain fundamental \(GS\)-exponential kind of convex function with characteristics in both the general and the differentiable cases. We establish the sufficient conditions of optimality and offer the proof for unconstrained as well as inequality-constrained programming while considering the assumption of \(GS\)-exponential kind of convexity.
**MSC:** 26A51, 26B25, 90C26.
**Keywords:**\(GS\)-exponential kind of convex functions and sets, Inequalities, Optimality conditions, Optimization.
## 1 Introduction
Due to the importance of convexity and its generalisations in the study of optimality for solving mathematical problems, researchers have concentrated much of their effort on generalised convex functions. As an illustration, Hudzik and Maligranda (1994) [7] investigated two distinct forms of \(s\)-convexity and found that \(s\)-convexity in the second sense is essentially stronger than in the first sense whenever \(0<s<1\). Youness (1999) [20] expanded the definitions of convex sets and functions to create a new class of sets and functions known as \(E\)-convex sets and \(E\)-convex functions. Yang (2001) [19] enhanced Youness's paper [20] by incorporating certain illustrations.
In recent years, researchers have given these generalized convex functions additional consideration. In 2006, X.J. Long and J.W. Peng [12] studied semi-\(b\)-preinvex functions as a generalization of semi-preinvex functions and \(b\)-vex functions. Y. Syau et al. (2009) [17] developed the \(E\)-\(b\)-vex functions, a novel class of functions which generalizes both \(b\)-vex functions and \(E\)-vex functions. In 2011, T. Emam investigated a novel class of functions known as approximately \(b\)-invex functions; he also discussed some of their properties and discovered the necessary optimality conditions for nonlinear programming using these functions. M.T. Chao et al. (2012) [13] investigated a novel class of generalized sub-\(b\)-convex functions and sub-\(b\)-convex sets, and established the conditions for the existence of optimal solutions for both unconstrained and inequality-constrained sub-\(b\)-convex programming.
This paper aims to introduce a new class of generalized exponential kind of convex functions, termed \(GS\)-exponential kind of convex functions, and explores certain characteristics of this class. This work draws inspiration from a number of research papers [2, 5, 6, 8, 10, 14, 15, 16, 18, 21]. Additionally, we offer sufficient optimality criteria derived from \(GS\)-exponential kind of convexity for both unconstrained and inequality-constrained programming.
## 2 Preliminaries
We will go through the definitions of sub-\(b\)-\(s\)-convexity, exponential kind of convexity, and \(s\)-convexity of functions in this section of the manuscript. For the remainder of this work, let \(V\) stand for any non-empty convex subset in \(\mathbb{R}^{n}\).
**Definition 2.1**.: [11] The function \(Q:V\rightarrow\mathbb{R}\) is known as sub-\(b\)-\(s\)-convex in the second sense associated with the map \(G:V\times V\times(0,1]\rightarrow\mathbb{R}\), if
\[Q(am_{1}+(1-a)m_{2})\leq a^{s}Q(m_{1})+(1-a)^{s}Q(m_{2})+G(m_{1},m_{2},s)\]
holds for all \(m_{1},m_{2}\in V,a\in[0,1]\) and for any fixed \(s\in(0,1]\).
**Definition 2.2**.: [7] The function \(Q:V\rightarrow\mathbb{R}\) is known as \(s\)-convex in the second sense, if for all \(m_{1},m_{2}\in V,a\in[0,1]\) and for any fixed \(s\in(0,1]\), we have
\[Q(am_{1}+(1-a)m_{2})\leq a^{s}Q(m_{1})+(1-a)^{s}Q(m_{2})\]
**Definition 2.3**.: [9] A positive function \(Q:V\rightarrow\mathbb{R}\) is known as exponential kind of convex function, if
\[Q(am_{1}+(1-a)m_{2})\leq(e^{a}-1)Q(m_{1})+(e^{1-a}-1)Q(m_{2})\]
holds for all \(m_{1},m_{2}\in V,a\in[0,1]\).
The concepts defined in 2.1, 2.2, and 2.3 motivate us to explore a new idea known as the \(GS\)-exponential kind of convex function.
## 3 Main Results
**Definition 3.1**.: The function \(Q:V\to\mathbb{R}\) is known as \(GS\)-exponential kind of convex function on \(V\) associated with the map \(G:V\times V\times(0,1]\to\mathbb{R}\), if
\[Q(am_{1}+(1-a)m_{2})\leq(e^{a}-1)^{s}Q(m_{1})+(e^{1-a}-1)^{s}Q(m_{2})+aG(m_{1},m _{2},s) \tag{3.1}\]
holds for each \(m_{1},m_{2}\in V,a\in[0,1]\) and for any fixed \(s\in(0,1]\).
_Remark 3.1_.: If we take \(s=1\), \(Q(m_{1})\) non-negative, and \(G(m_{1},m_{2},s)=0\), the \(GS\)-exponential kind of convex function reduces to the exponential kind of convex function.
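As a quick numerical sanity check of Definition 3.1, the following Python sketch tests inequality (3.1) at random points; the choices \(Q(x)=x^{2}\), \(G\equiv 1\), \(s=0.5\), and the interval \([-1,1]\) are illustrative examples of ours:

```python
import math
import random

def is_gs_exp_convex(Q, G, lo, hi, s, trials=10_000):
    # Monte-Carlo check of inequality (3.1) on [lo, hi]; a falsification
    # tool, not a proof.
    for _ in range(trials):
        m1, m2 = random.uniform(lo, hi), random.uniform(lo, hi)
        a = random.random()
        lhs = Q(a * m1 + (1 - a) * m2)
        rhs = ((math.e**a - 1)**s * Q(m1)
               + (math.e**(1 - a) - 1)**s * Q(m2)
               + a * G(m1, m2, s))
        if lhs > rhs + 1e-9:
            return False
    return True

# e.g. Q(x) = x^2 with a non-negative auxiliary map G
print(is_gs_exp_convex(lambda x: x * x, lambda m1, m2, s: 1.0, -1, 1, 0.5))
```

By Proposition 3.4 below (together with Lemma 3.3), any non-negative convex \(Q\) with non-negative \(G\) should pass this check.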
**Theorem 3.2**.: _If \(Q_{1},Q_{2}:V\to\mathbb{R}\) are \(GS\)-exponential kind of convex functions associated with the maps \(G_{1},G_{2}\) respectively, then \(Q_{1}+Q_{2}\) and \(\beta Q_{1}\) \((\beta\geq 0)\) are also \(GS\)-exponential kind of convex functions._

**Corollary 3.2.1**.: _If \(Q_{i}:V\to\mathbb{R}\), \((i=1,2,\ldots,n)\) are \(GS\)-exponential kind of convex functions associated with the maps \(G_{i}:V\times V\times(0,1]\to\mathbb{R}\), \((i=1,2,\ldots,n)\), respectively, then \(Q=\sum_{i=1}^{n}\beta_{i}Q_{i}\), \(\beta_{i}\geq 0\) \((i=1,2,\ldots,n)\) is a \(GS\)-exponential kind of convex function associated with the map \(G=\sum_{i=1}^{n}\beta_{i}G_{i}\)._
**Lemma 3.3**.: _For all \(a\in[0,1]\) and \(s\in(0,1]\), the inequalities \((e^{a}-1)^{s}\geq a\) and \((e^{1-a}-1)^{s}\geq 1-a\) hold._
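The lemma is stated without proof; a short justification (our sketch) is as follows. For \(a\in[0,1]\) and \(s\in(0,1]\),

\[e^{a}-1\;\geq\;a\quad\text{for }a\in[0,1],\qquad x^{s}\;\geq\;x\quad\text{for }x\in[0,1],\ s\in(0,1],\]

so if \(e^{a}-1\leq 1\), then \((e^{a}-1)^{s}\geq e^{a}-1\geq a\); otherwise \((e^{a}-1)^{s}\geq 1\geq a\). Substituting \(1-a\) for \(a\) yields the second inequality.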
**Proposition 3.4**.: _Every convex function is a \(GS\)-exponential kind of convex function if the map \(G\) associated with it is non-negative._
**Theorem 3.5**.: _If \(Q:V\to\mathbb{R}\) is the GS-exponential kind of convex function associated with the map \(G\) and \(S:\mathbb{R}\to\mathbb{R}\) is a non-negative function in addition to being linear, then \(S\circ Q\) is a GS-exponential kind of convex function associated with the map \(S\circ G\)._
**Definition 3.6**.: Assume that \(U\) be a non-empty subset of \(\mathbb{R}^{n+1}\). Then, \(U\) is known as \(GS\)-exponential kind of convex set associated with the map \(G:\mathbb{R}^{n}\times\mathbb{R}^{n}\times(0,1]\to\mathbb{R}\) if for all \((m_{1},\alpha_{1}),(m_{2},\alpha_{2})\in U,m_{1},m_{2}\in\mathbb{R}^{n},a\in[ 0,1]\) and some fixed \(s\in(0,1]\), we have
\[(am_{1}+(1-a)m_{2},(e^{a}-1)^{s}\alpha_{1}+(e^{1-a}-1)^{s}\alpha_{2}+aG(m_{1},m_{2},s))\in U.\]
Now, we provide a characterization of \(GS\)-exponential kind of convex function \(Q:V\to\mathbb{R}\) based on their respective epigraphs, given by
\[E(Q)=\{(m,\alpha):m\in V,\alpha\in\mathbb{R},Q(m)\leq\alpha\}.\]
**Theorem 3.7**.: _A function \(Q:V\to\mathbb{R}\) is a \(GS\)-exponential kind of convex function associated with the map \(G:V\times V\times(0,1]\to\mathbb{R}\), if and only if \(E(Q)\) is a \(GS\)-exponential kind of convex set associated with the map \(G\)._
**Theorem 3.8**.: _Assume that \(m_{2}>0\) and that \(Q_{\beta}:[m_{1},m_{2}]\to\mathbb{R}\) is a family of numerical functions such that each \(Q_{\beta}\) is a \(GS\)-exponential kind of convex function associated with the map \(G_{\beta}\), and each \(G_{\beta}\) is a bounded function. Let \(Q(m)=\sup_{\beta}Q_{\beta}(m)\) and \(G(m_{1},m_{2},s)=\sup_{\beta}G_{\beta}(m_{1},m_{2},s)\). If the set \(K=\{r\in[m_{1},m_{2}]\,|\,Q(r)<\infty\}\) is non-empty, then \(K\) is an interval and \(Q\) is a \(GS\)-exponential kind of convex function on \(K\)._
**Theorem 3.9**.: _Let \(Q:[m_{1},m_{2}]\to\mathbb{R}\) be a \(GS\)-exponential kind of convex function associated with the map \(G:[m_{1},m_{2}]\times[m_{1},m_{2}]\times(0,1]\to\mathbb{R}\) and also let \(G(m_{1},m_{2},s)\) is bounded, then \(Q\) is also bounded on \([m_{1},m_{2}]\)._
In this section, \(Q\) is considered to be a differentiable function and \(s,a\in(0,1]\).
**Theorem 3.10**.: _Let \(Q:V\to\mathbb{R}\) be a non-negative differentiable \(GS\)-exponential kind of convex function associated with the map \(G\). Then_
\[(i)\nabla Q(m_{2})^{T}(m_{1}-m_{2})<\frac{(e^{a}-1)^{s}}{a}Q(m_{1})+\frac{e^{( 1-a)s}}{a}Q(m_{2})+G(m_{1},m_{2},s)-\frac{o(a)}{a},\]
\[(ii)\nabla Q(m_{2})^{T}(m_{1}-m_{2})<\frac{(e^{s}-1)^{s}(Q(m_{1})-Q(m_{2}))+3Q (m_{2})-o(a)}{a}+G(m_{1},m_{2},s)\]
**Theorem 3.11**.: _Let \(Q:V\to\mathbb{R}\) be a non-positive differentiable \(GS\)-exponential kind of convex function associated with the map \(G\). Then_
\[\nabla Q(m_{2})^{T}(m_{1}-m_{2})\leq\frac{(e^{a}-1)^{s}}{a}[Q(m_{1})-Q(m_{2}) ]+G(m_{1},m_{2},s)-\frac{o(a)}{a}.\]
**Corollary 3.11.1**.: _Assume that \(Q:V\to\mathbb{R}\) is a positive differentiable \(GS\)-exponential kind of convex function, then_
\[\nabla[Q(m_{2})-Q(m_{1})]^{T}(m_{1}-m_{2}) < \frac{(e^{a}-1)^{s}}{a}[Q(m_{1})+Q(m_{2})]+\frac{e^{(1-a)s}}{a}[Q (m_{1})+Q(m_{2})]\] \[+G(m_{1},m_{2},s)+G(m_{2},m_{1},s)-2\frac{o(a)}{a}.\]
_In case if \(Q\) is a negative valued, then_
\[\nabla[Q(m_{2})-Q(m_{1})]^{T}(m_{1}-m_{2})\leq G(m_{1},m_{2},s)+G(m_{2},m_{1}, s)-2\frac{o(a)}{a}.\]
The following methods are then utilized to apply the above results to nonlinear programming. We first consider the unconstrained problem (S):
\[(S):\min\{Q(m),m\in V\} \tag{3.2}\]
**Theorem 3.12**.: _Let \(Q:V\rightarrow\mathbb{R}\) be a positive differentiable \(GS\)-exponential kind of convex function associated with the map \(G\). Also, suppose that \(m\in V\) and the inequality_
\[\nabla Q(m)^{T}(n-m)>G(n,m,s)+\frac{3Q(m)-o(a)}{a} \tag{3.3}\]
_holds for each \(n\in V\), \(a\in(0,1)\), and for any fixed \(s\in(0,1]\), then \(m\) is an optimal solution to the problem (3.2) for \(Q\) on \(V\)._
The following example of unconstrained programming is taken into consideration
|
2302.08981 | Black-Box Batch Active Learning for Regression | Batch active learning is a popular approach for efficiently training machine
learning models on large, initially unlabelled datasets by repeatedly acquiring
labels for batches of data points. However, many recent batch active learning
methods are white-box approaches and are often limited to differentiable
parametric models: they score unlabeled points using acquisition functions
based on model embeddings or first- and second-order derivatives. In this
paper, we propose black-box batch active learning for regression tasks as an
extension of white-box approaches. Crucially, our method only relies on model
predictions. This approach is compatible with a wide range of machine learning
models, including regular and Bayesian deep learning models and
non-differentiable models such as random forests. It is rooted in Bayesian
principles and utilizes recent kernel-based approaches. This allows us to
extend a wide range of existing state-of-the-art white-box batch active
learning methods (BADGE, BAIT, LCMD) to black-box models. We demonstrate the
effectiveness of our approach through extensive experimental evaluations on
regression datasets, achieving surprisingly strong performance compared to
white-box approaches for deep learning models. | Andreas Kirsch | 2023-02-17T16:35:47Z | http://arxiv.org/abs/2302.08981v2 | # Black-Box Batch Active Learning for Regression
###### Abstract
Batch active learning is a popular approach for efficiently training machine learning models on large, initially unlabelled datasets, which repeatedly acquires labels for a batch of data points. However, many recent batch active learning methods are _white-box approaches_ limited to differentiable parametric models: they score unlabeled points using acquisition functions based on model embeddings or first- and second-order derivatives. In this paper, we propose _black-box batch active learning_ for regression tasks as an extension of white-box approaches. This approach is compatible with a wide range of machine learning models including regular and Bayesian deep learning models and non-differentiable models such as random forests. It is rooted in Bayesian principles and utilizes recent kernel-based approaches. Importantly, our method only relies on model predictions. This allows us to extend a wide range of existing state-of-the-art white-box batch active learning methods (BADGE, BAIT, LCMD) to black-box models. We demonstrate the effectiveness of our approach through extensive experimental evaluations on regression datasets, achieving surprisingly strong performance compared to white-box approaches for deep learning models.
## 1 Introduction
By selectively acquiring labels for a subset of available unlabelled data, active learning (Cohn et al., 1994) is suited for situations where the acquisition of labels is costly or time-consuming, such as in medical imaging or natural language processing. However, in deep learning, many recent batch active learning methods have focused on _white-box approaches_ that rely on the model being parametric and differentiable and which use first or second-order derivatives (e.g. model embeddings)1.
Footnote 1: Model embeddings can also be seen as first-order derivatives of the model score under regression in regard to the last layer.
This can present a limitation in real-world scenarios where model internals or gradients might not be accessible--or might be expensive to access. This is particularly true in the case of 'foundation models' (Bommasani et al., 2021) and large language models such as GPT-3 (Brown et al., 2020), for example, when accessed via a third party. More generally, a lack of differentiability might hinder application of white-box batch active learning approaches to non-differentiable models.
To address these limitations, we propose _black-box batch active learning (B\({}^{3}\)AL)_ for regression, which is compatible with a wider range of machine learning models. Our approach is rooted in Bayesian principles and only requires model predictions from a small (bootstrapped) ensemble. Specifically, we utilize an _(empirical) predictive covariance kernel_ based on sampled predictions. We show that the well-known gradient kernel (Kothawade et al., 2021, 2022) can be seen as an approximation of this predictive covariance kernel.
The proposed approach extends to non-differentiable models through a Bayesian view on the hypothesis space formulation of active learning, based on the ideas behind query-by-committee (Seung et al., 1992). This enables us to use batch active learning methods, such as BAIT (Ash et al., 2021) and BADGE (Ash et al., 2019), in a black-box setting with non-differentiable models, such as random forests or gradient-boosted trees.
We evaluate black-box batch active learning on a diverse set of regression datasets. Unlike the above-mentioned _white-box_ parametric active learning methods, which scale in the number of (last-layer) model parameters or the embedding size, our method scales in the number of drawn predictions, and we show that we can already
obtain excellent results with a small ensemble. Our results demonstrate the label efficiency of B\({}^{3}\)AL for various machine learning models. For deep learning models, B\({}^{3}\)AL even performs better than the corresponding state-of-the-art white-box methods in many cases.
In summary, by leveraging the strengths of kernel-based methods and Bayesian principles, our approach improves the labeling efficiency of a range of differentiable and non-differentiable machine-learning models with surprisingly strong performance.
The rest of the paper is organized as follows: in SS2, we discuss related work in active learning and kernel-based methods. In SS3, we describe the relevant background and provide a detailed description of B\({}^{3}\)AL. In SS4, we detail the experimental setup and provide the results of our experimental evaluation. Finally, SS5 concludes with a discussion and directions for future research.
## 2 Related Work
Active learning has a rich history in the machine learning community, with its origins dating back to seminal works such as (Cohn et al., 1994; Lindley, 1956; Fedorov, 1972; MacKay, 1992). A comprehensive survey of early active learning methods can be found in Settles (2009), while more recent surveys of contemporary deep learning methods can be found in Ren et al. (2021) and Zhan et al. (2022). Our work focuses on pool-based batch active learning, which involves utilizing a pool of unlabeled data and acquiring labels for batches of points rather than individual points at a time. The main challenge in pool-based batch active learning is the choice of the acquisition function.
**Differentiable Models**. Many acquisition functions approximate well-known information-theoretic quantities (MacKay, 1992), often by approximating Fisher information implicitly or explicitly. This can be computationally expensive, particularly in deep learning where the number of model parameters can be large--even when using last-layer approximations or assuming a generalized linear model (Kirsch and Gal, 2022). BADGE (Ash et al., 2019) and BAIT (Ash et al., 2021) approximate the Fisher information using last-layer loss gradients or the Hessian, respectively, but still have a computational cost scaling with the number of last layer weights. This also applies to methods using similarity matrices (kernels) based on loss gradients of last-layer weights such as SIMILAR (Kothawade et al., 2022) and PRISM (Kothawade et al., 2022), to only name a few. Importantly, all of these approaches require differentiable models.
**Non-Differentiable Models**. _Query-by-committee (QbC, Seung et al. (1992))_ measures the disagreement between different model instances to identify informative samples and has been applied to regression (Krogh and Vedelsby, 1994; Burbidge et al., 2007). Kee et al. (2018) extend QbC to the batch setting with a diversity term based on the distance of data points in input space. Nguyen et al. (2012) show batch active learning for random forests. They train an ensemble of random forests and evaluate the joint entropy of the predictions of the ensemble for batch active learning, which can be seen as a special case of BatchBALD in regression.
**BALD**. Our work most closely aligns with the BALD-family of Bayesian active learning acquisition functions (Houlsby et al., 2011), which, however, focus on classification tasks. The crucial insight of BALD is applying the symmetry of mutual information to compute the expected information gain in prediction space instead of in parameter space. As a result, BALD is a _black-box technique_ that only leverages model predictions. The extension of BALD to deep learning and multi-class classification tasks using Monte-Carlo dropout models is presented in Gal et al. (2017), which employs batch active learning by selecting the top-\(B\) scoring samples from the pool in each acquisition round. BatchBALD (Kirsch et al., 2019) builds upon BALD by introducing a principled approach for batch active learning and consistent MC dropout to account for correlations between predictions. The estimators utilized by Gal et al. (2017) and Kirsch et al. (2019) enumerate over all classes, leading to a trade-off between combinatorial explosion and Monte-Carlo sampling, which can result in degraded quality estimates as acquisition batch sizes increase. Houlsby et al. (2011), Gal et al. (2017), and Kirsch et al. (2019) have not applied BALD to regression tasks. More recently, Jesson et al. (2021) investigate active learning for regression of causal treatment effects and adopt a sampling distribution based on individual-acquisition scores, a heuristic proposed in Kirsch et al. (2021).
**Kernel-Based Methods**.Holzmuller et al. (2022) examine the previously mentioned methods and unify them using gradient-based kernels. Specifically, they express BALD (Houlsby et al., 2011), BatchBALD
(Kirsch et al., 2019), BAIT (Ash et al., 2021), BADGE (Ash et al., 2019), ACS-FW (Pinsler et al., 2019), and Core-Set (Sener and Savarese, 2017)/FF-Active (Geifman and El-Yaniv, 2017) using kernel-based methods for regression tasks. They also propose a new method, LCMD (largest cluster maximum distance).
**Our Contribution.** We extend the work of Houlsby et al. (2011) and Holzmuller et al. (2022) by combining the prediction-based approach with a kernel-based formulation. This trivially enables batch active learning on regression tasks using black-box predictions for a wide range of existing batch active learning methods.
## 3 Methodology
Here, we describe the problem setting and motivate the proposed method.
**Problem Setting.** Our proposed method is inspired by the BALD-family of active learning frameworks (Houlsby et al., 2011) and its extension to batch active learning (Kirsch et al., 2019). In our derivation, we make use of a Bayesian model in the narrow sense that we require some stochastic parameters \(\mathbf{\Omega}\)--the model parameters or bootstrapped training data, for example--with a distribution2\(\operatorname{p}(\mathbf{\omega})\):
Footnote 2: We do not require a prior distribution as active learning is not concerned with how we arrive at the model we want to acquire labels for. We can define a prior and perform, e.g, variational inference, but we do not need to. Hence, we use \(\operatorname{p}(\mathbf{\omega})\).
\[\operatorname{p}(y,\mathbf{\omega}\mid\operatorname{\mathbf{x}})=\operatorname{ p}(y\mid\operatorname{\mathbf{x}},\mathbf{\omega})\operatorname{p}(\mathbf{\omega}). \tag{1}\]
Bayesian model averaging (BMA) is performed by marginalizing over \(\operatorname{p}(\mathbf{\omega})\) to obtain the predictive distribution \(\operatorname{p}(y\mid\operatorname{\mathbf{x}})\), and Bayesian inference to obtain a posterior \(\operatorname{p}(\mathbf{\omega}\mid\mathcal{D})\) for additional data \(\mathcal{D}\) by:
\[\operatorname{p}(\mathbf{\omega}\mid\mathcal{D})\propto\operatorname{p}(\mathcal{ D}\mid\mathbf{\omega})\operatorname{p}(\mathbf{\omega}), \tag{2}\]
where \(\operatorname{p}(\mathcal{D}\mid\mathbf{\omega})\) is the likelihood of the data given the parameters \(\mathbf{\Omega}\) and \(\operatorname{p}(\mathbf{\omega}\mid\mathcal{D})\) is the new posterior distribution over \(\mathbf{\Omega}\). Importantly, the choice of \(\operatorname{p}(\mathbf{\omega})\) covers ensembles (Breiman, 1996; Lakshminarayanan et al., 2017) as well as models with additional stochastic inputs (Osband et al., 2021) or randomized training data by subsampling of the training set, e.g., bagging (Breiman, 1996).
**Pool-based Active Learning** assumes access to a pool set of unlabeled data \(\mathcal{D}^{\text{pool}}=\{\operatorname{\mathbf{x}}_{i}^{\text{pool}}\}\) and a small initially labeled training set \(\mathcal{D}^{\text{train}}=\{(\operatorname{\mathbf{x}}_{i}^{\text{train}},y _{i}^{\text{train}})\}\), or \(\mathcal{D}^{\text{train}}=\emptyset\). In the batch acquisition setting, we want to repeatedly acquire labels for a subset \(\{\operatorname{\mathbf{x}}_{i}^{\text{seq}}\}\) of the pool set of a given acquisition batch size \(B\) and add them to the training set \(\mathcal{D}^{\text{train}}\). Ideally, we want to select samples that are highly 'informative' for the model. For example, these could be samples that are likely to be misclassified or have a large prediction uncertainty for models trained on the currently available training set \(\mathcal{D}^{\text{train}}\). Once, we have chosen such an _acquisition batch_\(\{\operatorname{\mathbf{x}}_{i}^{\text{seq}}\}\) of unlabeled data, we acquire labels \(\{y_{i}^{\text{seq}}\}\) for these samples and train a new model on the combined training set \(\mathcal{D}^{\text{train}}\cup\{(\operatorname{\mathbf{x}}^{\text{seq}},y _{i}^{\text{seq}})\}\) and repeat the process. Cruicial to the success of active learning is the choice of acquisition function \(\mathcal{A}(\{\operatorname{\mathbf{x}}_{i}^{\text{seq}}\};\operatorname{p}( \mathbf{\omega}))\) which is a function of the acquisition batch \(\{\operatorname{\mathbf{x}}_{i}^{\text{seq}}\}\) and the distribution \(\operatorname{p}(\mathbf{\omega})\) and which we try to maximize in each acquisition round. It measures the informativeness of an acquisition batch for the current model.
**Univariate Regression** is a common task in machine learning. We assume that the target \(y\) is real-valued (\(\in\mathbb{R}\)) with homoscedastic Gaussian noise:
\[Y\mid\operatorname{\mathbf{x}},\mathbf{\omega}\sim\mathcal{N}(\mu(\operatorname{ \mathbf{x}};\mathbf{\omega}),\,\sigma_{N}^{2}). \tag{3}\]
Equivalently, \(Y\mid\operatorname{\mathbf{x}},\mathbf{\omega}\sim\mu(\operatorname{\mathbf{x}}; \mathbf{\omega})+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,\,\sigma_{N}^{2})\). As usual, we assume that the noise is independent for different inputs \(\operatorname{\mathbf{x}}\) and parameters \(\mathbf{\omega}\). Homoscedastic noise is a special case of the general heteroscedastic setting: the noise variance is simply a constant. Our approach could easily be extended to heteroscedastic noise by substituting a function \(\sigma_{N}(\operatorname{\mathbf{x}};\mathbf{\omega})\) for \(\sigma_{N}\), but for this work we limit ourselves to the simplest case.
### Kernel-based Methods & Information Theory
We build on Holzmuller et al. (2022) which expresses contemporary batch active learning methods using kernel methods. While a full treatment is naturally beyond the scope of this paper, we briefly review some key ideas here. We refer the reader to the extensive paper for more details.
Gaussian Processes are one way to introduce kernel-based methods. A simple way to think about Gaussian Processes (Williams and Rasmussen, 2006; Lazaro-Gredilla and Figueiras-Vidal, 2010; Rudner et al., 2022) is as a Bayesian linear regression model with an implicit, potentially infinite-dimensional feature space (depending on the covariance kernel) that uses the kernel trick to abstract away the feature map which maps the input space to the feature space.
Multivariate Gaussian Distribution.The distinctive property of a Gaussian Process is that all predictions are jointly Gaussian distributed. We can then write the joint distribution for a univariate regression model as:
\[Y_{1},\ldots,Y_{n}\mid\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\sim\mathcal{N}( \mathbf{0},\,\mathrm{Cov}[\mu(\mathbf{x}_{1}),\ldots,\mu(\mathbf{x}_{n})]+ \sigma_{N}^{2}\mathbf{I}), \tag{4}\]
where \(\mu(\mathbf{x})\) are the observation-noise free predictions as random variables and \(\mathrm{Cov}[\mu(\mathbf{x}_{1}),\ldots,\mu(\mathbf{x}_{n})]\) is the covariance matrix of the predictions. The covariance matrix is defined via the kernel function \(k(\mathbf{x},\mathbf{x}^{\prime})\):
\[\mathrm{Cov}[\mu(\mathbf{x}_{1}),\ldots,\mu(\mathbf{x}_{n})]=\Big{[}k(\mathbf{ x}_{i},\mathbf{x}_{j})\Big{]}_{i,j=1}^{n}. \tag{5}\]
The kernel function \(k(\mathbf{x},\mathbf{x}^{\prime})\) can be chosen almost arbitrarily, e.g. see Williams and Rasmussen (2006, Ch. 4). The linear kernel \(k(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{x}\cdot\mathbf{x}^{\prime}\) and the radial basis function kernel \(k(\mathbf{x},\mathbf{x}^{\prime})=\exp(-\frac{1}{2}|\mathbf{x}-\mathbf{x}^{ \prime}|^{2})\) are common examples, as is the gradient kernel, which we examine next.
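As a small illustration of Eq. (5), the following NumPy sketch builds the Gram (covariance) matrix for the two example kernels just mentioned; the function name is ours:

```python
import numpy as np

def gram(X, kernel="rbf"):
    # Gram matrix K_ij = k(x_i, x_j) for rows x_i of X (n, d).
    if kernel == "linear":
        return X @ X.T
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared distances
    return np.exp(-0.5 * d2)                       # RBF kernel
```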
Fisher Information & Linearization.When using neural networks for regression, the gradient kernel
\[k_{\mathrm{grad}}(\mathbf{x},\mathbf{x}^{\prime}\mid\mathbf{\omega}^{*})\triangleq\nabla_{\mathbf{\omega}}\mu(\mathbf{x};\mathbf{\omega}^{*})\,\nabla_{\mathbf{\omega}}^{2}[-\log\mathrm{p}(\mathbf{\omega}^{*})]^{-1}\,\nabla_{\mathbf{\omega}}\mu(\mathbf{x}^{\prime};\mathbf{\omega}^{*})^{\top} \tag{6}\]
\[=\langle\nabla_{\mathbf{\omega}}\mu(\mathbf{x};\mathbf{\omega}^{*}),\nabla_{\mathbf{\omega}}\mu(\mathbf{x}^{\prime};\mathbf{\omega}^{*})\rangle_{\nabla_{\mathbf{\omega}}^{2}[-\log\mathrm{p}(\mathbf{\omega}^{*})]^{-1}} \tag{7}\]
The significance of this kernel lies in its relationship with the Fisher information matrix at \(\mathbf{\omega}^{*}\)(Immer, 2020; Immer et al., 2021; Kirsch and Gal, 2022), or equivalently, with the linearization of the loss function around \(\mathbf{\omega}^{*}\)(Holzmuller et al., 2022). This leads to a Gaussian approximation, which results in a Gaussian predictive posterior distribution when combined with a Gaussian likelihood. The use of the finite-dimensional gradient kernel thus results in an implicit Bayesian linear regression in the context of regression models.
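As an illustration, a minimal PyTorch sketch of this kernel under an identity prior Hessian is given below; restricting the gradients to last-layer parameters and the `model.head` attribute are our assumptions for tractability, not part of the formal definition:

```python
import torch

def grad_kernel(model, x1, x2):
    # k_grad(x, x') = <grad_w mu(x; w*), grad_w mu(x'; w*)> with identity
    # prior Hessian, restricted to the last layer for tractability.
    params = list(model.head.parameters())
    def g(x):
        out = model(x.unsqueeze(0)).squeeze()        # scalar prediction mu(x)
        grads = torch.autograd.grad(out, params)
        return torch.cat([gr.reshape(-1) for gr in grads])
    return torch.dot(g(x1), g(x2))
```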
Posterior Gradient Kernel.We can use the well-known properties of multivariate normal distribtions to marginalize or condition the joint distribution in (4). Following Holzmuller et al. (2022), this allows us to explicitly obtain the posterior gradient kernel given additional \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) as:
\[k_{\mathrm{grad}\to\mathrm{post}(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})}( \mathbf{x},\mathbf{x}^{\prime}\mid\mathbf{\omega}^{*}) \tag{8}\] \[\triangleq\nabla_{\mathbf{\omega}}\mu(\mathbf{x};\mathbf{\omega}^{*}) \left(\sigma_{N}^{-2}\begin{pmatrix}\nabla_{\mathbf{\omega}}\mu(\mathbf{x}_{1}; \mathbf{\omega}^{*})\\ \vdots\\ \nabla_{\mathbf{\omega}}\mu(\mathbf{x}_{n};\mathbf{\omega}^{*})\end{pmatrix}\begin{pmatrix} \nabla_{\mathbf{\omega}}\mu(\mathbf{x}_{1};\mathbf{\omega}^{*})\\ \vdots\\ \nabla_{\mathbf{\omega}}\mu(\mathbf{x}_{n};\mathbf{\omega}^{*})\end{pmatrix}^{\top}+ \nabla_{\mathbf{\omega}}^{2}[-\log\mathrm{p}(\mathbf{\omega}^{*})]\right)^{-1}\nabla_{ \mathbf{\omega}}\mu(\mathbf{x}^{\prime};\mathbf{\omega}^{*})^{\top}.\]
The factor \(\sigma_{N}^{-2}\) originates from implicitly conditioning on the observations \(Y_{i}\mid\mathbf{x}_{i}\), which include observation noise.
Importantly for active learning, the multivariate normal distribution is the maximum entropy distribution for a given covariance matrix, and is thus an upper-bound for the entropy of any distribution with the same covariance matrix. The entropy is given by the log-determinant of the covariance matrix:
\[\mathrm{H}[Y_{1},\ldots,Y_{n}\mid\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]=\frac{1}{2}\log\det(\mathrm{Cov}[\mu(\mathbf{x}_{1}),\ldots,\mu(\mathbf{x}_{n})]+\sigma_{N}^{2}\mathbf{I})+C_{n}, \tag{9}\]
where \(C_{n}\triangleq\frac{n}{2}\log(2\,\pi\,e)\) is a constant that only depends on the number of samples \(n\). Connecting kernel-based methods to information-theoretic quantities like the expected information gain, we then know that the respective acquisition scores are upper-bounds on the actual expected information gain, as we see below.
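A minimal NumPy sketch of evaluating this Gaussian entropy bound from a kernel (covariance) matrix follows; the function and argument names are ours:

```python
import math
import numpy as np

def gaussian_joint_entropy(K, sigma_n):
    # Eq. (9): H = 0.5 * logdet(K + sigma_n^2 I) + (n/2) log(2 pi e),
    # an upper bound on the joint entropy of any distribution whose
    # covariance matrix is K (plus observation noise).
    n = K.shape[0]
    _, logdet = np.linalg.slogdet(K + sigma_n**2 * np.eye(n))
    return 0.5 * logdet + 0.5 * n * math.log(2 * math.pi * math.e)
```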
**Expected Information Gain (EIG)**(Lindley, 1956) is based on the mutual information. For a Bayesian model, the EIG between the parameters \(\mathbf{\Omega}\) and the predictions \(Y\mid x\) is defined as:
\[\mathrm{I}[\mathbf{\Omega};Y\mid\mathbf{x}]=\mathrm{H}[\mathbf{\Omega}]-\mathrm{H}[\mathbf{ \Omega}\mid Y,\mathbf{x}]=\int\mathrm{p}(\mathbf{\omega},y\mid\mathbf{x})\log\frac{ \mathrm{p}(\mathbf{\omega}\mid y,\mathbf{x})}{\mathrm{p}(\mathbf{\omega})}\,d\mathbf{ \omega}\,dy, \tag{10}\]
where \(\mathrm{I}[\mathbf{\Omega};Y\mid\mathbf{x}]\) is the mutual information between the parameters \(\mathbf{\Omega}\) and the model predictions \(Y\) given the data \(\mathbf{x}\); \(\mathrm{H}[\mathbf{\Omega}]\) is the entropy of the parameters \(\mathbf{\Omega}\); and \(\mathrm{H}[\mathbf{\Omega}\mid Y,\mathbf{x}]\) is the conditional entropy of the parameters \(\mathbf{\Omega}\) in expectation over the model predictions \(Y\mid\mathbf{x}\). The EIG measures the amount of information that can be gained about the parameters \(\mathbf{\Omega}\) by observing the predictions \(Y\mid\mathbf{x}\) on average, where we think of the entropy \(\mathrm{H}[\mathbf{\Omega}]\) as capturing the uncertainty about the parameters \(\mathbf{\Omega}\).
Computing the EIG can be difficult when working in parameter space, as we need to compute the conditional entropy \(\mathrm{H}[\mathbf{\Omega}\mid Y,\mathbf{x}]\), yet this is what many recent approaches effectively attempt to do via the Fisher information (Kirsch and Gal, 2022). (We do not need to compute the entropy of the parameters \(\mathrm{H}[\mathbf{\Omega}]\), as it does not depend on the data \(\mathbf{x}\) and can be dropped from the optimization objective.)
**BALD**(Houlsby et al., 2011) examines the expected information gain in _prediction space_ instead of parameter space, using the symmetry of the mutual information:
\[\mathrm{I}[\mathbf{\Omega};Y\mid\mathbf{x}]=\mathrm{I}[Y;\mathbf{\Omega}\mid\mathbf{x}]=\mathrm{H}[Y\mid\mathbf{x}]-\mathrm{H}[Y\mid\mathbf{\Omega},\mathbf{x}]=\int\mathrm{p}(\mathbf{\omega},y\mid\mathbf{x})\log\frac{\mathrm{p}(y\mid\mathbf{x},\mathbf{\omega})}{\mathrm{p}(y\mid\mathbf{x})}\,d\mathbf{\omega}\,dy. \tag{11}\]
This can be much easier to evaluate by sampling from \(\mathbf{\Omega}\) without the need for additional Bayesian inference. BatchBALD (Kirsch et al., 2019) is an extension of BALD to the batch acquisition of samples \(\{\mathbf{x}_{i}^{\text{acq}}\}\) for classification tasks:
\[\mathrm{I}[\mathbf{\Omega};Y_{1},\dots,Y_{B}\mid\mathbf{x}_{1},\dots,\mathbf{x}_{ B}]=\mathrm{H}[Y_{1},\dots,Y_{B}\mid\mathbf{x}_{1},\dots,\mathbf{x}_{B}]-\sum_{i} \mathrm{H}[Y_{i}\mid\mathbf{x}_{i},\mathbf{\Omega}], \tag{12}\]
It uses the mutual information between the parameters \(\mathbf{\Omega}\) and the predictions \(\{Y_{i}\}\) on an acquisition candidate batch \(\{\mathbf{x}_{i}\}\), and thus takes the correlations between samples into account.
This joint entropy can be upper-bounded by the corresponding entropy of a multivariate normal distribution with the same covariance matrix, while the conditional entropy term is precisely the entropy of a normal distribution given our model assumptions for the observation noise, yielding an upper-bound overall.
The same approach can be applied to other information quantities like the expected predicted information gain (Bickford Smith et al., 2023). Unlike Kirsch and Gal (2022), which examines the connection between Fisher information (weight-space) and information-theoretic quantities, this exposition shows the connection between predictions and information-theoretic quantities.
**Greedy Acquisition** is a heuristic to find the subset of size \(B\) in \(\mathcal{D}^{\text{pool}}\) that maximizes \(\mathcal{A}(\{\mathbf{x}_{i}\},\mathrm{p}(\mathbf{\omega}))\). Finding the optimal solution is NP-hard (Schrijver et al., 2003). Kirsch et al. (2019) find that \(\mathcal{A}_{\text{BatchBALD}}\) is submodular and that iteratively adding the sample \(\mathbf{x}_{i}^{\text{acq}}\) with the highest \(\mathcal{A}_{\text{BatchBALD}}(\mathbf{x}_{1}^{\text{acq}},\dots,\mathbf{x}_{i}^{\text{acq}},\mathrm{p}(\mathbf{\omega}))\) to the acquisition batch is a \(1-\nicefrac{{1}}{{e}}\)-approximation. This is a well-known result in submodular optimization, known as the _greedy algorithm_ (Nemhauser et al., 1978). For the entropy approximation, finding an optimal solution is equivalent to finding the mode of the k-DPP (Kulesza and Taskar, 2011).
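As a concrete illustration, the sketch below runs this greedy loop under the Gaussian entropy bound, adding at each step the pool point that most increases \(\log\det(K+\sigma^{2}\mathbf{I})\) on the current batch. The kernel matrix `K` is assumed to be precomputed (for example with the predictive covariance kernel introduced in §3.2), and the naive re-evaluation is kept for clarity rather than efficiency.

```python
# Hedged sketch of greedy batch acquisition under the entropy bound (Eq. 9).
import numpy as np

def greedy_logdet_batch(K, sigma_n, batch_size):
    """K: (pool, pool) precomputed kernel matrix; returns selected indices."""
    pool, batch = list(range(K.shape[0])), []
    for _ in range(batch_size):
        best_i, best_val = None, -np.inf
        for i in pool:
            idx = batch + [i]
            sub = K[np.ix_(idx, idx)] + sigma_n**2 * np.eye(len(idx))
            val = np.linalg.slogdet(sub)[1]     # log det of the candidate batch
            if val > best_val:
                best_i, best_val = i, val
        batch.append(best_i)
        pool.remove(best_i)
    return batch
```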
**Other Information-Theoretic Quantities and Selection Methods.** Other information-theoretic quantities can be examined similarly. We refer to Kirsch and Gal (2022) for more details. For other selection methods, we refer to Holzmuller et al. (2022).
### Black-Box Batch Active Learning
We formally introduce the prediction-based covariance kernel and compare it to the gradient kernel commonly used for active learning with deep learning models in parameter-space. For non-differentiable models, we show how it can also be derived using a Bayesian model perspective on the hypothesis space.
In addition to being illustrative, this section allows us to connect prediction-based kernels to the kernels used by Holzmuller et al. (2022), which in turn connects them to various SotA active learning methods.
#### 3.2.1 Prediction Kernel
To perform black-box batch active learning, we directly use the _predictive covariance_ of \(Y_{i}|\mathbf{x}_{i}\) and \(Y_{j}|\mathbf{x}_{j}\):
\[\text{Cov}_{\boldsymbol{\Omega}}[Y_{i};Y_{j}\mid\mathbf{x}_{i},\mathbf{x}_{j}]=\text{Cov}_{\boldsymbol{\Omega}}[\mu^{\boldsymbol{\omega}}_{\mathbf{x}_{i}};\mu^{\boldsymbol{\omega}}_{\mathbf{x}_{j}}]+\sigma_{N}^{2}\mathbf{1}\{i=j\}, \tag{13}\]
where we have abbreviated \(\mu(\mathbf{x};\boldsymbol{\omega})\) with \(\mu^{\boldsymbol{\omega}}_{\mathbf{x}}\), and used the law of total covariance and the fact that the noise is uncorrelated between samples.
We define the _predictive covariance kernel_\(k_{\text{pred}}(\mathbf{x}_{i},\mathbf{x}_{j})\) as the covariance of the predicted means:
\[k_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\triangleq\text{Cov}_{ \boldsymbol{\Omega}}[\mu^{\boldsymbol{\omega}}_{\mathbf{x}_{i}};\mu^{ \boldsymbol{\omega}}_{\mathbf{x}_{j}}]. \tag{14}\]
Compared to §3.1, we do not define the covariance via the kernel but the kernel via the covariance.
#### 3.2.2 Empirical Predictive Covariance Kernel
For sampled (empirical) model parameters \(\hat{\boldsymbol{\Omega}}=(\boldsymbol{\omega}_{1},\ldots,\boldsymbol{\omega}_{K})\sim\text{p}(\boldsymbol{\omega})\), we define the _empirical predictive covariance kernel_ \(\hat{k}_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\):
\[\hat{k}_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\triangleq\widehat{\text{Cov}}_{\boldsymbol{\Omega}}[\mu^{\boldsymbol{\omega}}_{\mathbf{x}_{i}};\mu^{\boldsymbol{\omega}}_{\mathbf{x}_{j}}] =\frac{1}{K}\sum_{k=1}^{K}\left(\mu^{\boldsymbol{\omega}_{k}}_{\mathbf{x}_{i}}-\frac{1}{K}\sum_{l=1}^{K}\mu^{\boldsymbol{\omega}_{l}}_{\mathbf{x}_{i}}\right)\,\left(\mu^{\boldsymbol{\omega}_{k}}_{\mathbf{x}_{j}}-\frac{1}{K}\sum_{l=1}^{K}\mu^{\boldsymbol{\omega}_{l}}_{\mathbf{x}_{j}}\right) \tag{15}\] \[=\left\langle\frac{1}{\sqrt{K}}(\bar{\mu}^{\boldsymbol{\omega}_{1}}_{\mathbf{x}_{i}},\ldots,\bar{\mu}^{\boldsymbol{\omega}_{K}}_{\mathbf{x}_{i}}),\frac{1}{\sqrt{K}}(\bar{\mu}^{\boldsymbol{\omega}_{1}}_{\mathbf{x}_{j}},\ldots,\bar{\mu}^{\boldsymbol{\omega}_{K}}_{\mathbf{x}_{j}})\right\rangle, \tag{16}\]
with centered predictions \(\bar{\mu}^{\boldsymbol{\omega}_{k}}_{\mathbf{x}}\triangleq\mu^{\boldsymbol{ \omega}_{k}}_{\mathbf{x}}-\frac{1}{K}\sum_{l=1}^{K}\mu^{\boldsymbol{\omega}_{ l}}_{\mathbf{x}}\).
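In code, this kernel is a centered inner product over ensemble predictions. The sketch below is a minimal NumPy version, assuming the \(K\) members' predictions on \(n\) pool points have already been collected into a \((K,n)\) array:

```python
# Sketch of the empirical predictive covariance kernel (Eqs. 15-16).
import numpy as np

def empirical_pred_kernel(preds):
    """preds: (K, n) array with preds[k, i] = mu(x_i; w_k) for member k."""
    centered = preds - preds.mean(axis=0, keepdims=True)  # centered predictions
    return centered.T @ centered / preds.shape[0]         # (n, n) kernel matrix
```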
#### 3.2.3 Differentiable Models
Similar to Holzmuller et al. (2022, §C.1), we show that the posterior gradient kernel is a first-order approximation of the covariance kernel. This section explicitly conditions on \(\mathcal{D}^{\text{train}}\). The result is simple but instructive:
**Proposition 3.1**.: _The posterior gradient kernel \(k_{\text{grad}\rightarrow\text{post}(\mathcal{D}^{\text{train}})}(\mathbf{x}_{i };\mathbf{x}_{j}\mid\boldsymbol{\omega}^{*})\) is an approximation of the predictive covariance kernel \(k_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\)._
Proof.: We use a first-order Taylor expansion of the mean function \(\mu(\mathbf{x};\boldsymbol{\omega})\) around \(\boldsymbol{\omega}^{*}\):
\[\mu(\mathbf{x};\boldsymbol{\omega})\approx\mu(\mathbf{x};\boldsymbol{\omega}^{*})+\nabla_{\boldsymbol{\omega}}\mu(\mathbf{x};\boldsymbol{\omega}^{*})\,\underbrace{(\boldsymbol{\omega}-\boldsymbol{\omega}^{*})}_{\triangleq\,\Delta\boldsymbol{\omega}}. \tag{17}\]
Choose \(\boldsymbol{\omega}^{*}=\mathbb{E}_{\boldsymbol{\omega}\sim\text{p}(\boldsymbol{\omega}\mid\mathcal{D}^{\text{train}})}[\boldsymbol{\omega}]\) (the BMA), so that \(\mathbb{E}[\Delta\boldsymbol{\omega}]=0\) and, under the linearization, \(\mathbb{E}_{\text{p}(\boldsymbol{\omega}\mid\mathcal{D}^{\text{train}})}[\mu(\mathbf{x};\boldsymbol{\omega})]=\mu(\mathbf{x};\boldsymbol{\omega}^{*})\). We then have:
\[k_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j}) =\text{Cov}_{\boldsymbol{\omega}\sim\text{p}(\boldsymbol{\omega}\mid\mathcal{D}^{\text{train}})}[\mu(\mathbf{x}_{i};\boldsymbol{\omega});\mu(\mathbf{x}_{j};\boldsymbol{\omega})] \tag{18}\] \[\approx\mathbb{E}_{\boldsymbol{\omega}^{*}+\Delta\boldsymbol{\omega}\sim\text{p}(\boldsymbol{\omega}\mid\mathcal{D}^{\text{train}})}[(\nabla_{\boldsymbol{\omega}}\mu^{\boldsymbol{\omega}^{*}}_{\mathbf{x}_{i}}\,\Delta\boldsymbol{\omega})\,(\nabla_{\boldsymbol{\omega}}\mu^{\boldsymbol{\omega}^{*}}_{\mathbf{x}_{j}}\,\Delta\boldsymbol{\omega})] \tag{19}\] \[=\nabla_{\boldsymbol{\omega}}\mu^{\boldsymbol{\omega}^{*}}_{\mathbf{x}_{i}}\,\mathbb{E}_{\boldsymbol{\omega}^{*}+\Delta\boldsymbol{\omega}\sim\text{p}(\boldsymbol{\omega}\mid\mathcal{D}^{\text{train}})}[\Delta\boldsymbol{\omega}\,\Delta\boldsymbol{\omega}^{\top}]\,\nabla_{\boldsymbol{\omega}}\mu^{\boldsymbol{\omega}^{*}\top}_{\mathbf{x}_{j}} \tag{20}\] \[=\nabla_{\boldsymbol{\omega}}\mu(\mathbf{x}_{i};\boldsymbol{\omega}^{*})\,\text{Cov}[\boldsymbol{\Omega}\mid\mathcal{D}^{\text{train}}]\,\nabla_{\boldsymbol{\omega}}\mu(\mathbf{x}_{j};\boldsymbol{\omega}^{*})^{\top} \tag{21}\] \[\approx k_{\text{grad}\rightarrow\text{post}(\mathcal{D}^{\text{train}})}(\mathbf{x}_{i};\mathbf{x}_{j}\mid\boldsymbol{\omega}^{*}). \tag{22}\]
The intermediate expectation is the model covariance \(\text{Cov}[\boldsymbol{\Omega}\,|\,\mathcal{D}^{\text{train}}]\) as \(\boldsymbol{\omega}^{*}\) is the BMA. For the last step, we use the Gauss-Newton approximation (Immer et al., 2021; Kirsch and Gal, 2022) and approximate the inverse of the covariance using the Hessian of the negative log posterior at \(\boldsymbol{\omega}^{*}\):
\[\text{Cov}[\boldsymbol{\Omega}\,|\,\mathcal{D}^{\text{train}}]^{-1}\approx \nabla_{\boldsymbol{\omega}}^{2}[-\log\text{p}(\boldsymbol{\omega}^{*}\,|\, \mathcal{D}^{\text{train}})] \tag{23}\]
\[=\nabla^{2}_{\boldsymbol{\omega}}[-\log\mathrm{p}(\mathcal{D}^{\mathrm{train}}\mid\boldsymbol{\omega}^{*})-\log\mathrm{p}(\boldsymbol{\omega}^{*})] \tag{24}\] \[=\sigma_{N}^{-2}\sum_{i}\nabla_{\boldsymbol{\omega}}\mu(\mathbf{x}_{i}^{\mathrm{train}};\boldsymbol{\omega}^{*})^{\top}\nabla_{\boldsymbol{\omega}}\mu(\mathbf{x}_{i}^{\mathrm{train}};\boldsymbol{\omega}^{*}) \tag{25}\] \[\quad-\nabla^{2}_{\boldsymbol{\omega}}\log\mathrm{p}(\boldsymbol{\omega}^{*}), \tag{26}\]
where we have first used Bayes' theorem and that \(\mathrm{p}(\mathcal{D}^{\mathrm{train}})\) vanishes under differentiation, as it is constant in \(\boldsymbol{\omega}\). Secondly, under the Gauss-Newton approximation, the Hessian of the negative log likelihood is just the outer product of the mean gradients divided by the noise variance in the homoscedastic regression case. \(\nabla^{2}_{\boldsymbol{\omega}}[-\log\mathrm{p}(\boldsymbol{\omega}^{*})]\) is the prior term. This matches (8).
#### 3.2.4 Non-Differentiable Models
How can we apply the above result to non-differentiable models? In the following, we use a Bayesian view on the hypothesis space to show that we can connect the empirical predictive covariance kernel to a gradient kernel, too. With \(\mathbf{\hat{\Omega}}\) as defined previously, we introduce a latent \(\Psi\) to represent the 'true' hypothesis \(\mathbf{\omega}_{\psi}\in\mathbf{\hat{\Omega}}\) from the empirical hypothesis space \(\mathbf{\hat{\Omega}}\). This is similar to QbC (Seung et al., 1992).
Specifically, we use a one-hot categorical distribution, that is, a multinomial distribution from which we draw a single sample. We use \(\Psi\sim\mathrm{Multinomial}(\mathbf{q},1)\) over the empirical \(\boldsymbol{\omega}_{k}\) as hypotheses, with \(\mathbf{q}\in S^{K-1}\) defining the distribution; \(S^{K-1}\) the \(K-1\) simplex in \(\mathbb{R}^{K}\); \(\mathbf{q}_{k}=\mathrm{p}(\Psi=k)\); and \(\sum_{k=1}^{K}\mathbf{q}_{k}=1\). For the predictive \(\tilde{\mu}(\mathbf{x};\psi)\), we have:
\[\tilde{\mu}(\mathbf{x};\psi)\triangleq\mu(\mathbf{x};\mathbf{\omega}_{\psi})= \langle\mu(\mathbf{x};\cdot),\psi\rangle, \tag{27}\]
where \(\mu(\mathbf{x};\cdot)\) is a column vector of the predictions \(\mu(\mathbf{x};\mathbf{\omega}_{k})\) for \(\mathbf{x}\) for all \(\mathbf{\omega}_{k}\).
We now examine this model and its kernels. We choose \(\mathbf{q}\) as an uninformative3 uniform distribution over the hypotheses, \(\mathbf{q}_{k}=\frac{1}{K}\). The BMA then matches the empirical mean:
Footnote 3: If we had additional information about the \(\mathbf{\hat{\Omega}}\)—for example, if we had validation set losses—we could use this information to set \(\mathbf{q}\).
\[\tilde{\mu}(\mathbf{x};\mathbf{q})\triangleq\mathbb{E}_{\mathrm{p}(\psi|\mathbf{q})}[ \mu(\mathbf{x};\mathbf{\omega}_{\psi})]=\langle\mu(\mathbf{x};\cdot),\mathbf{q} \rangle=\sum_{\psi=1}^{K}\mathbf{q}_{\psi}\mu(\mathbf{x};\mathbf{\omega}_{\psi})=\sum _{\psi=1}^{K}\frac{1}{K}\mu(\mathbf{x};\mathbf{\omega}_{\psi}). \tag{28}\]
What is the predictive covariance kernel of this model? And what is the posterior gradient kernel for \(\mathbf{q}\)?
**Proposition 3.2**.: _We have:_
1. _The predictive covariance kernel_ \(k_{\mathrm{pred},\psi}(\mathbf{x}_{i},\mathbf{x}_{j})\) _for_ \(\mathbf{\hat{\Omega}}\) _using uniform_ \(\mathbf{q}\) _is equal to the empirical predictive covariance kernel_ \(\hat{k}_{\mathrm{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\)_._
2. _The 'posterior' gradient kernel_ \(k_{\mathrm{grad},\mathbf{q}\to\mathrm{post}(\mathcal{D}^{\mathrm{train}})}(\mathbf{x}_{i};\mathbf{x}_{j})\) _for_ \(\mathbf{\hat{\Omega}}\) _with respect to uniform_ \(\mathbf{q}\) _is equal to the empirical predictive covariance kernel_ \(\hat{k}_{\mathrm{pred}}(\mathbf{x}_{i};\mathbf{x}_{j})\)_._
Proof.: As for the previous differentiable model, the BMA of the model parameters \(\Psi\) is just \(\mathbf{q}\): \(\mathbb{E}\,\Psi=\mathbf{q}\). The first statement immediately follows:
\[k_{\mathrm{pred},\psi}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathrm{Cov}_{\psi}[\tilde{\mu}(\mathbf{x}_{i};\psi);\tilde{\mu}(\mathbf{x}_{j};\psi)]=\mathbb{E}_{\mathrm{p}(\psi)}[\bar{\mu}^{\boldsymbol{\omega}_{\psi}}_{\mathbf{x}_{i}}\,\bar{\mu}^{\boldsymbol{\omega}_{\psi}}_{\mathbf{x}_{j}}]=\frac{1}{K}\sum_{\psi}\bar{\mu}^{\boldsymbol{\omega}_{\psi}}_{\mathbf{x}_{i}}\,\bar{\mu}^{\boldsymbol{\omega}_{\psi}}_{\mathbf{x}_{j}}=\hat{k}_{\mathrm{pred}}(\mathbf{x}_{i};\mathbf{x}_{j}). \tag{29}\]
For the second statement, we begin with the same linearization as in the previous proposition (but this time with respect to \(\mathbf{q}\); here the map is exactly linear, so no approximation is needed):
\[k_{\mathrm{grad},\mathbf{q}\to\mathrm{post}(\mathcal{D}^{\mathrm{train}})}(\mathbf{x}_{i};\mathbf{x}_{j}) =\mathrm{Cov}_{\psi\sim\mathrm{p}(\psi)}[\tilde{\mu}(\mathbf{x}_{i};\psi);\tilde{\mu}(\mathbf{x}_{j};\psi)] \tag{30}\] \[=\mathbb{E}_{\mathbf{q}+\Delta\Psi\sim\mathrm{p}(\psi)}[(\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{i};\mathbf{q})\,\Delta\Psi)\,(\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{j};\mathbf{q})\,\Delta\Psi)] \tag{31}\] \[=\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{i};\mathbf{q})\;\mathrm{Cov}[\Psi]\,\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{j};\mathbf{q})^{\top}. \tag{32}\]
We can read off the gradient \(\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{i};\mathbf{q})\) from the inner product in Equation (28):
\[\nabla_{\mathbf{q}}\tilde{\mu}(\mathbf{x}_{i};\mathbf{q})=\mu(\mathbf{x}_{i};\cdot)^{\top}. \tag{33}\]
Lastly, the covariance of the multinomial \(\Psi\) is:
\[\text{Cov}[\Psi]=\text{diag}(\mathbf{q})-\mathbf{q}\mathbf{q}^{\top}. \tag{34}\]
Putting this all together:
\[k_{\text{grad},\mathbf{q}\to\text{post}(\mathcal{D}^{\text{train}})}(\mathbf{x}_{i};\mathbf{x}_{j})=\mu(\mathbf{x}_{i};\cdot)^{\top}\,\text{diag}(\mathbf{q})\,\mu(\mathbf{x}_{j};\cdot)-\left(\mu(\mathbf{x}_{i};\cdot)^{\top}\mathbf{q}\right)\left(\mathbf{q}^{\top}\mu(\mathbf{x}_{j};\cdot)\right) \tag{35}\] \[=\frac{1}{K}\sum_{\psi}\mu(\mathbf{x}_{i};\boldsymbol{\omega}_{\psi})\,\mu(\mathbf{x}_{j};\boldsymbol{\omega}_{\psi})-\left(\frac{1}{K}\sum_{\psi}\mu(\mathbf{x}_{i};\boldsymbol{\omega}_{\psi})\right)\left(\frac{1}{K}\sum_{\psi}\mu(\mathbf{x}_{j};\boldsymbol{\omega}_{\psi})\right)=\hat{k}_{\text{pred}}(\mathbf{x}_{i};\mathbf{x}_{j}). \tag{36}\]
The above gradient kernel is a 'posterior' gradient kernel only in the sense that we have sampled the \(\boldsymbol{\omega}_{\psi}\) from the non-differentiable model after inference on the training data. The samples themselves are drawn uniformly.
A similar Bayesian model using Bayesian Model Combination (BMC) could be set up that allows for arbitrary convex mixtures of the ensemble members. This would entail using a Dirichlet distribution \(\text{Dirichlet}(\boldsymbol{\alpha})\) instead of the multinomial distribution. Assuming an uninformative prior, this leads to the same results up to a factor of \(1+\sum_{k}\boldsymbol{\alpha}_{k}\).
This demonstrates that a straightforward Bayesian model can be constructed on top of a non-differentiable ensemble model. Bayesian inference in this context aims to identify the most suitable member of the ensemble. Given the limited number of samples and the likelihood of model misspecification, it is likely that none of the members accurately represents the true model. However, for active learning purposes, the main focus is solely on quantifying the degree of disagreement among the ensemble members.
**Application to DNNs, BNNs, and Other Models.** The proposed approach is appealing due to its versatility, as it can be applied to a wide range of models that can be consistently queried for predictions, including deep ensembles (Lakshminarayanan et al., 2016), Bayesian neural networks (BNNs) (Blundell et al., 2015; Gal and Ghahramani, 2015), and non-differentiable models. The kernel used in this approach is simple to implement and scales with the number of empirical predictions per sample, rather than with the size of the parameter space, as in methods such as BAIT (Ash et al., 2021).
## 4 Results
We follow the evaluation from Holzmuller et al. (2022) and use their framework to ease comparison. This allows us to directly compare to several SotA methods, or rather their kernel-based analogues, in a regression setting. Specifically, we compare to the following popular deep active learning methods: BALD (Houlsby et al., 2011), BatchBALD (Kirsch et al., 2019), BAIT (Ash et al., 2021), BADGE (Ash et al., 2019), ACS-FW (Pinsler et al., 2019), Core-Set (Sener and Savarese, 2017)/FF-Active (Geifman and El-Yaniv, 2017), and LCMD (Holzmuller et al., 2022). We also compare to the random selection baseline ('Uniform'). We use 15 large tabular datasets from the UCI Machine Learning Repository (Dua and Graff, 2017) and the OpenML benchmark suite (Vanschoren et al., 2014) for our experiments, see Table 3.
**Experimental Setup.** We use the same experimental setup and hyperparameters as Holzmuller et al. (2022). We report the logarithmic RMSE averaged over 20 trials for each dataset and method. For ensembles, we compute the performance for each ensemble member separately, enabling a fair comparison to the non-ensemble methods. Performance differences can thus be attributed to the acquisition function, rather than the ensemble. We used A100 GPUs with 40GB of GPU memory.
**Ensemble Size.** For deep learning, we use a small ensemble of 10 models, which is sufficient to achieve good performance. This ensemble size can be considered 'small' in the regression setting (Lazaro-Gredilla and Figueiras-Vidal, 2010; Zhang and Zhou, 2013), whereas in Bayesian Optimal Experiment Design much higher sample counts are regularly used (Foster et al., 2021). In many cases, training an ensemble of regression models is still fast and could be considered cheap compared to the cost of acquiring additional labels. For non-differentiable models, we experiment with random forests (Breiman, 2001), using the different trees as ensemble members, and a virtual ensemble of gradient-boosted decision trees (Prokhorenkova et al., 2017). For random forests, we use the implementation provided in scikit-learn (Pedregosa et al., 2011) with default hyperparameters, that is, using 100 trees per forest. For gradient-boosted decision trees, we use a virtual ensemble of up to 20 members with early stopping using a validation set4. We use the implementation in CatBoost (Dorogush et al., 2018). We do not perform any hyperparameter tuning.
Footnote 4: If the virtual ensemble creation fails because there are not sufficiently many trees due to early stopping, we halve the ensemble size and retry. This was only needed for the _poker_ dataset.
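To make the black-box recipe concrete for such models: all the kernel needs is a \((K,n)\) matrix of per-member predictions. The following is a minimal, self-contained sketch for scikit-learn random forests with toy stand-in data; the synthetic inputs are illustrative assumptions, while `estimators_` and per-tree `predict` are standard scikit-learn attributes.

```python
# Sketch: per-tree predictions of a random forest as ensemble members.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.normal(size=200)  # toy stand-ins
X_pool = rng.normal(size=(500, 8))

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
# Each fitted tree acts as one ensemble member; stack their predictions.
preds = np.stack([tree.predict(X_pool) for tree in rf.estimators_])  # (K, n_pool)
centered = preds - preds.mean(axis=0, keepdims=True)
K_pred = centered.T @ centered / preds.shape[0]  # empirical predictive covariance kernel
```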
**Black-Box vs White-Box Deep Active Learning.** In Figures 1a and 2, we see that B\({}^{3}\)AL is competitive with white-box active learning when using BALD, BatchBALD, BAIT, BADGE, and Core-Set. On average, B\({}^{3}\)AL outperforms the white-box methods on the 15 datasets we analyzed (excluding ACS-FW and BAIT). We hypothesize that this is due to the implicit Fisher information approximation in the white-box methods (Kirsch and Gal, 2022), which is not as accurate in the low-data regime as the more explicit approximation in B\({}^{3}\)AL via ensembling. On the other hand, it seems that the black-box methods suffer from the low feature space dimensionality, as they are much closer to BALD.
Figure 1: _Mean logarithmic RMSE over 15 regression datasets._ **(a)** For DNNs, we see that black-box \(\blacksquare\) methods work as well as white-box \(\square\) methods, and in most cases better, with the exception of ACS-FW and BAIT. **(b)** For random forests with the default hyperparameters from scikit-learn (Pedregosa et al., 2011), we see that black-box methods perform better than the uniform baseline, with the exception of BALD, which uses top-k acquisition. **(c)** For gradient-boosted trees (Dorogush et al., 2018) with a _virtual ensemble_, we hypothesize that the virtual ensembles do not capture predictive disagreement well enough for active learning. In the appendix, see Table 1 for average performance metrics. Averaged over 20 trials.
**Non-Differentiable Models.** In Figures 1b and 1c, we see that B\({}^{3}\)AL also works well for non-differentiable models, such as random forests and gradient-boosted decision trees. While for random forests all methods except BALD (a top-k selection method) outperform uniform acquisition, for gradient-boosted decision trees B\({}^{3}\)AL only works better than random acquisition when using LCMD and BADGE. This might be due to the virtual ensemble, which might not capture disagreement well enough.
## 5 Conclusion and Future Work
In this paper, we have demonstrated the effectiveness of a simple extension to kernel-based methods that utilizes empirical predictions rather than gradient kernels. This modification enables black-box batch active learning with good performance. Importantly, B\({}^{3}\)AL also generalizes to non-differentiable models, an area that has received limited attention as of late.
Our results also partially answer one of the research questions posed by Kirsch and Gal (2022): how do prediction-based methods compare to parameter-based ones? We find that for regression the prediction-based methods are competitive with the parameter-based methods in batch active learning.
The main limitation of our proposed approach lies in the acquisition of a sufficient amount of empirical predictions. This could be a challenge, particularly when using deep ensembles with larger models or non-differentiable models that cannot be parallelized efficiently. Our experiments using virtual ensembles indicate that the diversity of the ensemble members plays a crucial role in determining the performance.
## Acknowledgements
The authors would like to thank David Holzmuller for their kind, constructive and helpful feedback, as well as for making their framework and results easily available. AK is supported by the UK EPSRC CDT in Autonomous Intelligent Machines and Systems (grant reference EP/L015897/1). ChatGPT was used to suggest edits.
Figure 2: _Average Logarithmic RMSE by regression datasets for DNNs: \(\blacksquare\) vs \(\square\) (vs Uniform). Across acquisition functions, the performance of black-box methods is highly correlated with the performance of white-box methods, even though black-box methods make fewer assumptions about the model. We plot the improvement of the white-box \(\square\) method over the uniform baseline on the x-axis, so for datasets with markers left of the dashed vertical lines, the white-box method performs better than uniform, and the improvement of the black-box \(\blacksquare\) method over the uniform baseline on the y-axis, so for datasets with markers above the dashed horizontal lines, the black-box method performs better than uniform. Similarly, for datasets with markers in the \(\blacksquare\) region, the black-box method performs better than the white-box method. The average over all datasets is marked with a star \(\star\). Surprisingly, on average over all acquisition rounds, the black-box methods perform better than the white-box methods for all but ACS-FW and BAIT. In the appendix, see Figure 3 for the final acquisition round and Table 3 for details on the datasets. Averaged over 20 trials._ |
2305.03873 | Train Global, Tailor Local: Minimalist Multilingual Translation into
Endangered Languages | In many humanitarian scenarios, translation into severely low resource
languages often does not require a universal translation engine, but a
dedicated text-specific translation engine. For example, healthcare records,
hygienic procedures, government communication, emergency procedures and
religious texts are all limited texts. While generic translation engines for
all languages do not exist, translation of multilingually known limited texts
into new, endangered languages may be possible and reduce human translation
effort. We attempt to leverage translation resources from many rich resource
languages to efficiently produce best possible translation quality for a well
known text, which is available in multiple languages, in a new, severely low
resource language. We examine two approaches: 1. best selection of seed
sentences to jump start translations in a new language in view of best
generalization to the remainder of a larger targeted text(s), and 2. we adapt
large general multilingual translation engines from many other languages to
focus on a specific text in a new, unknown language. We find that adapting
large pretrained multilingual models to the domain/text first and then to the
severely low resource language works best. If we also select a best set of seed
sentences, we can improve average chrF performance on new test languages from a
baseline of 21.9 to 50.7, while reducing the number of seed sentences to only
around 1,000 in the new, unknown language. | Zhong Zhou, Jan Niehues, Alex Waibel | 2023-05-05T23:22:16Z | http://arxiv.org/abs/2305.03873v1 | # Train Global, Tailor Local: Minimalist Multilingual Translation into Endangered Languages
###### Abstract
In many humanitarian scenarios, translation into severely low resource languages often does not require a universal translation engine, but a dedicated _text-specific_ translation engine. For example, healthcare records, hygienic procedures, government communication, emergency procedures and religious texts are all limited texts. While generic translation engines for all languages do not exist, translation of multilingually known limited texts into new, endangered languages may be possible and reduce human translation effort. We attempt to leverage translation resources from many rich resource languages to efficiently produce best possible translation quality for a well _known text_, which is available in multiple languages, in a new, severely low resource language. We examine two approaches: 1.) best selection of seed sentences to jump start translations in a new language in view of best generalization to the remainder of a larger targeted text(s), and 2.) we adapt large general multilingual translation engines from many other languages to focus on a specific text in a new, unknown language. We find that adapting large pretrained multilingual models to the domain/text first and then to the severely low resource language works best. If we also select a best set of seed sentences, we can improve average chrF performance on new test languages from a baseline of 21.9 to 50.7, while reducing the number of seed sentences to only \(\sim\)1,000 in the new, unknown language.
## 1 Introduction
A language dies when no one speaks it. An endangered language is a language that is spoken by enough people that it could survive under favorable conditions but few or no children are learning it (Crystal, 2002; Kincade, 1991; Wurm, 2001). More than half of the 7,139 languages will die in the next 80 years (Austin and Sallabank, 2011; Eberhard et al., 2021). Endangered languages may survive and thrive if they gain prestige, power and visibility (Crystal, 2002). Frisian, for example, struggles to gain prestige in Germany, and is endangered even though it has a large number of speakers. Hebrew, conversely, has been revived as a spoken language because it is critical to the development and identity of the Jewish community. We empower endangered language communities by exercising a language. This can be achieved by translating important texts to their language so that these communities can gain information, knowledge, power and visibility in their own language. One life-saving example of this knowledge-transfer is translating water, sanitation and hygiene (WASH) text into their languages, a process that has long started before the COVID-19 pandemic but has gained much attention since then (Thampi et al., 2020; Reddy et al., 2017).
The problem in these scenarios, therefore, is not to build a high accuracy translation engine for _any texts_ using huge data corpora, but rather to build a good translation for a _known_ text (for which translations in many other languages exist), but in a new language with only extremely little seed data (a few hundred sentences). We assume there is little to no endangered language data and few human translators. To produce high quality translation, existing methods rely on a seed corpus produced by human translators. Previous work has shown progress in using extremely small seed corpora with as few as \(\sim\)1,000 lines of data and has found that random
Figure 1: Translation workflow for endangered languages.
sampling performs better than choosing a fixed portion of the text to build a seed corpus Zhou and Waibel (2021); Lin et al. (2020); Qi et al. (2018). But researchers have yet to 1.) examine various Active Learning (AL) methods to improve accuracy and effectiveness in building better optimized seed corpora so as to minimize the initial human effort and 2.) completely solve the problem of using large multilingual models for representational learning so that we can train (or adapt) them to a new language using an extremely small seed corpus.
To solve these two problems, we propose explainable and robust active learning methods that perform as well as or better than random sampling; we transfer methods learned on data of known languages to the new, endangered language. We also examine different training schedules and we find a strategic way of growing large multilingual models in a multilingual and multi-stage fashion with extremely small endangered seed corpora.
In our translation workflow, human translators are informed by machine sentence ranking to produce a seed corpus. Machine systems then use this seed corpus to produce a full translation draft. Human translators post-edit the draft, and feed new data to machines each time they finish post-editing a portion of the text. In each iteration, machines produce better and better drafts with new data, and human translators find it easier and faster to post-edit. Together they complete the translation of the whole text into an endangered language (Figure 1).
To produce sentence ranking, traditional active learning approaches assume abundant data, but we have little to no data in the target endangered language. We question this assumption and build seed corpora by ranking all sentences in existing translations from other languages to generalize to a new, endangered language. This ranking is target-independent as we do not require any endangered language data. To produce such a ranking, we explore active learning methods (Table 1). For each reference language, we build unigram, n-gram and entropy models (Figure 2). To prevent any language from overpowering the ranking, we aggregate sentence scores across multiple languages and rank the final aggregation. To select the pool of languages for aggregation, we build methods on different voting mechanisms.
To curate a seed corpus in the new, endangered language where we have no data initially, we pass the sentence ranking learned from known languages to human translators. Human translators take this ranking, and translate the top few (\(\sim\)1,000 or less) sentences, curating the seed corpus.
To train on such small seed corpus, we find pretraining to be key. For the pretrained model, we either create our own pretrained model by training on known languages, or use an existing pretrained model. We explore both paths in our work, with and without activating the knowledge in existing large pretrained models. We observe an average increase of 28.8 in chrF score over the baselines.
Our contribution is three-fold: 1. We develop 14 active learning methods on known languages and transfer ranking to the new, endangered language; 2. We activate the knowledge of large multilingual models by proposing multilingual and multi-stage adaptations through 24 different training schedules; we find that adapting pretrained models to the domain and then to the endangered language works best; 3. We aggregate scores from 115 languages to provide a universal ranking and increase robustness by _relaxed memoization_ method.
## 2 Related Works
### Translation into Endangered Languages
Recent advances have succeeded in building multilingual methods to translate from multiple rich resource languages to a new, endangered language Johnson et al. (2017); Ha et al. (2016); Firat et al. (2016); Zhou et al. (2018, 2018). Many have demonstrated good transfer learning to low resource languages Zhou and Waibel (2021); Lin et al. (2020);
Figure 2: Visualizing different active learning methods. We score and rank each sentence in a text corpus.
Qi et al., 2018), while some work on zero-shot learning (Neubig and Hu, 2018; Pham et al., 2019; Philip et al., 2020; Karakanta et al., 2018; Zhang et al., 2020; Chen et al., 2022, 2021). However, zero-shot learning is volatile and unstable, so we choose to use extremely small data instead.
### Active Learning in Machine Translation
Active learning has a long history in machine translation (Settles, 2012; Eck et al., 2005; Gonzalez-Rubio et al., 2012). Random sampling is often surprisingly powerful (Kendall and Smith, 1938; Knuth, 1991; Sennrich et al., 2016). There is extensive research to beat random sampling by methods based on entropy (Koneru et al., 2022), coverage and uncertainty (Peris and Casacuberta, 2018; Zhao et al., 2020), clustering (Haffari et al., 2009; Gangadharaiah et al., 2009), consensus (Haffari and Sarkar, 2009), syntactic parsing (Miura et al., 2016), density and diversity (Koneru et al., 2022; Ambati et al., 2011), and learning to learn active learning strategies (Liu et al., 2018).
### Large Pretrained Multilingual Model
The state-of-the-art multilingual machine translation systems translate from many source languages to many target languages (Johnson et al., 2017; Ha et al., 2016; Zoph and Knight, 2016). The bottleneck in building such systems is in computation limits, as the training data increases quadratically with the number of languages. Some companies have built and released large pretrained multilingual models (Liu et al., 2020; Tang et al., 2020). M2M100 is trained in 100 languages (Fan et al., 2021; Schwenk et al., 2021; El-Kishky et al., 2020) and covers a few endangered languages.
## 3 Methods
We translate a fixed text that is available in many languages to a new, endangered language. In our translation workflow, we first develop active learning methods to transfer sentence ranking from known languages to a new, endangered language. We then pass this ranking to human translators for them to translate the top few (\(\sim\)1,000 or less) sentences into the endangered language, curating the seed corpus. We finally train on the seed corpus, either from scratch or from a pretrained model.
We build training schedules on an extremely small seed corpus, we also build active learning strategies of creating and transferring the sentence ranking to the new, endangered language. We propose and compare 24 training schedules and 14 active learning methods for machine translation into a new, endangered language. To compare all active learning algorithms fairly, we use the same translation system unit as a control for all experiments, varying only the seed corpora built by different methods. We select the same number of words in all seed corpora as most translators are paid by the number of words (Bloodgood and Callison-Burch, 2010; Eck, 2008; Tomanek and Hahn, 2009).
### Training Schedules
In our setup we have the new, endangered language as the target language, and we have a few neighboring languages as the source languages that are either in the same linguistic language family or geographically close to facilitate linguistic transfer. In effect, we have \(N\) source languages with full translations of the text and a new and endangered language that has an extremely small seed corpus.
We use the state-of-the-art multilingual transformer prepending both source and target language labels to each source sentence (Johnson et al., 2017; Ha et al., 2016). For precise translation for all named entities, we use an existing method of _order-preserving named entity translation_ by masking each named entity with ordered __NEs using a parallel multilingual lexicon table in 125 languages (Zhou and Waibel, 2021; Wu et al., 2018).
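As an illustration of the masking step, the sketch below replaces entities with ordered __NE tags before translation and restores them afterwards via the lexicon. The function names and the `lexicon[entity][target_language]` lookup are our own illustrative assumptions about how the 125-language table is indexed.

```python
# Hedged sketch of order-preserving named-entity masking with __NE tags.
def mask_entities(sentence, entities):
    mapping = {}
    for i, ent in enumerate(entities):        # entities in order of appearance
        tag = f"__NE{i}"
        sentence = sentence.replace(ent, tag, 1)
        mapping[tag] = ent
    return sentence, mapping

def unmask(translation, mapping, lexicon, tgt_lang):
    # Restore each tag with the entity's form in the target language,
    # looked up in the parallel multilingual lexicon table.
    for tag, ent in mapping.items():
        translation = translation.replace(tag, lexicon[ent][tgt_lang])
    return translation
```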
Using this multilingual transformer architecture as a base, we build 5 training units on the small seed
Figure 3: 24 different training schedules. [N]\({}^{2}\): multilingual model on N neighboring languages; [N+1]\({}^{2}\): multi-target model with the endangered language; [N+1]: single-target model into the endangered language; [1]\({}^{2}\): autoencoder in the endangered language.
corpus of the new, endangered language and the existing translations of known languages. We let [N]\({}^{2}\) denote the training of all source languages in a N-by-N multilingual transformer. We let [N+1]\({}^{2}\) denote the training of all languages including the endangered language in a (N+1)-by-(N+1) multilingual transformer. We let [N+1] denote the (N+1)-by-1 multilingual transformer that focuses on translating into the endangered language. We let [1]\({}^{2}\) be the autoencoder on the endangered language.
Our translation system is built on these 5 training units: an optional [M2M100] (Fan et al., 2021), [N]\({}^{2}\), [N+1]\({}^{2}\), [N+1] and [1]\({}^{2}\). These 5 stages increase in specificity while they decrease in data size. Building on them, we show 24 different training schedules, among which 8 are pretrained with in-domain data and 16 are pretrained with out-of-domain large multilingual models (Figure 3). We only consider models with pretraining and therefore do not exhaust all 32 training schedules.
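As a sketch of how such a schedule is executed, a full schedule (later called Schedule \(I\)) can be read as a sequence of finetuning runs over increasingly specific corpora. `finetune` and the corpus descriptions below are placeholders for the actual multilingual training setup, not a real API.

```python
# Hedged sketch of a full training schedule as sequential finetuning stages.
def finetune(model, corpus):
    raise NotImplementedError  # placeholder for one multilingual training run

schedule = [
    ("[N]^2",   "N-by-N parallel text of the known languages"),    # domain adaptation
    ("[N+1]^2", "known languages plus the ~1,000-line seed"),      # add the new language
    ("[N+1]",   "all sources into the endangered target only"),    # focus the target
    ("[1]^2",   "autoencoding data in the endangered language"),   # final specialization
]
model = "optional M2M100 checkpoint"  # out-of-domain pretrained starting point
for stage_name, corpus in schedule:
    model = finetune(model, corpus)   # data shrinks while specificity grows
```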
### Active Learning Strategies
We have two baselines: the linguistic baseline of the excerpt-based approach, _Luke_, and the statistical baseline of random sampling, _Rand_. The excerpt-based approach, which selects a portion of the text with consecutive sentences, preserves the text's formality, cohesion and context but lacks global coverage. Random sampling increases global coverage but sacrifices local coherence.
#### 3.2.1 N-gram Approach
Many researchers count the number of unknown n-grams as score functions to solve the knapsack problem, covering all vocabulary Eck (2008); Eck et al. (2005); Haffari et al. (2009). Instead of solving the knapsack problem, we choose sentences to partially cover the vocabulary and build an extremely small seed corpus. To cover the vocabulary strategically, we sum the frequency counts of the unknown n-grams to increase density. These frequency counts promote frequent words for learning to be meaningful in the extremely low resource scenario. In Table 1 we denote frequency function by \(F(\cdot)\), denote sequence length by \(L\) and denote the highest n-gram order by \(J\).
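A minimal sketch of the normalized n-gram score \(SN_{J}\) from Table 1 is given below; it sums full-text frequencies of the sentence's not-yet-covered n-grams and normalizes by sentence length. The function and argument names are ours.

```python
# Sketch of the SN_J score: density-weighted coverage of unknown n-grams.
def sn_j_score(tokens, freq, covered, J):
    """tokens: tokenized sentence; freq: dict of n-gram frequencies in the
    full text; covered: n-grams of already-selected sentences; J: max order."""
    total = 0
    for j in range(1, J + 1):
        for i in range(len(tokens) - j + 1):
            g = tuple(tokens[i:i + j])
            if g not in covered:
                total += freq.get(g, 0)
    return total / max(len(tokens), 1)
```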
#### 3.2.2 Entropy Approach
Many have worked on entropy methods in modelling density and diversity Ambati et al. (2011); Eck (2008); Zeng et al. (2019); Haffari et al. (2009). We use traditional Language Models (LMs) instead of neural language models, as our data size is extremely small. For implementations of LMs, we use KenLM and NLTK's LM because of their simplicity and speed, especially KenLM Heafield (2011); Bird and Loper (2004). In Table 1 we let \(H(\cdot)\) be the cross entropy function, with the choice of KenLM (K) or NLTK (N). To separate training from testing in using language models, we divide the data into three portions, the sentences that we have chosen (\(c\)), and the remaining that are split equally into two parts, left (\(l\)) and right (\(r\)). Let \(I_{l}(\cdot)\) and \(I_{r}(\cdot)\) be indicator functions to show whether a sentence belongs to the left or the right. We aim to maximize the diversity \(H_{c}\) and optimize density by adjusting \(H_{l}\) and \(H_{r}\)Koneru et al. (2022).
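The sketch below shows one hedged way to realize this score with NLTK's LM module (a KenLM variant would be analogous): train smoothed trigram models on the chosen set \(c\) and the halves \(l\) and \(r\), then score a sentence by \(H_{c}(s)-I_{l}(s)\cdot H_{r}(s)-I_{r}(s)\cdot H_{l}(s)\). We use Laplace smoothing so unseen n-grams get finite entropy; that smoothing choice is an assumption on our part.

```python
# Hedged sketch of the entropy score using NLTK's language models.
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

def train_lm(sentences, order=3):
    """sentences: list of token lists for one split (c, l, or r)."""
    train, vocab = padded_everygram_pipeline(order, sentences)
    lm = Laplace(order)
    lm.fit(train, vocab)
    return lm

def cross_entropy(lm, tokens, order=3):
    padded = list(pad_both_ends(tokens, n=order))
    return lm.entropy(ngrams(padded, order))

def ent_score(tokens, lm_c, lm_l, lm_r, in_left):
    # Diversity w.r.t. the chosen set c; density via the opposite held-out half.
    h_other = cross_entropy(lm_r, tokens) if in_left else cross_entropy(lm_l, tokens)
    return cross_entropy(lm_c, tokens) - h_other
```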
#### 3.2.3 Aggregation Approach
To prevent any language from overpowering the ranking, we aggregate sentence scores across different languages (Figure 2). We investigate the use of a customized set of languages for each endangered language, versus the use of a universal set of languages representing world languages. The former requires some understanding of the neighboring languages, the latter requires careful choices of the representative set Blasi et al. (2022).
We have 4 aggregation methods: _one-vote-per-language_ (L), where we aggregate over all languages, _one-vote-per-family_ (F), where we aggregate over languages representing the top few families, _one-vote-per-person_ (P), where we aggregate over the top few most spoken languages, and _one-vote-per-neighbor_ (N), where we aggregate over a customized set of neighboring languages. For the world language distribution, L covers all, F samples across it, P covers the head, while N creates a
\begin{table}
\begin{tabular}{l l l} \hline \hline Name & Description & Score Function \\ \hline \(S\) & Frequency sum of unknown words & \(\sum_{i=0}^{L}F(w_{i}^{u})\) \\ \(SN\) & \(S\) normalized by \(L\) & \(\frac{1}{L}\sum_{i=0}^{L}F(w_{i}^{u})\) \\ \(SN_{J}\) & Normalized frequency sum of n-grams up to order \(J\) & \(\frac{1}{L}\sum_{j=1}^{J}\sum_{i=0}^{L}F(g_{i,j}^{u})\) \\ \(AGG_{J}^{M}\) & Aggregation of n-gram scores up to order \(J\) over language set \(M\) & \(\sum_{M}\frac{1}{L}\sum_{j=1}^{J}\sum_{i=0}^{L}F(g_{i,j}^{u})\) \\ \(ENT^{K}\) & Entropy method; \(K\) marks KenLM, \(N\) marks NLTK & \(H_{c}^{K}(s)-I_{l}(s)\cdot H_{r}^{K}(s)-I_{r}(s)\cdot H_{l}^{K}(s)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of score functions.
niche area around the endangered language.
Aggregation decreases variance and increases accuracy. Typical aggregation involves taking the sum or the average. Since they have the same effect on sentence ranking, we take the sum for simplicity.
To save space and time, we devise _relaxed memoization_. At every step, we compute a sentence score for each language, producing a score matrix of languages versus sentences. We update only the entries that are affected by the selected sentence, and cache and reuse all other entries. Further parallelism results in a >360 times speedup, from \(\sim\)6.5 months to \(\sim\)13 hours.
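A hedged sketch of this selection loop follows: we keep a (languages x sentences) score matrix, sum over languages for the aggregate ranking, and after each pick recompute only the entries whose sentences share (here, for simplicity, unigram) overlap with the newly covered material; everything else is reused from the cache. The data layout and the `score_fn` signature (for example, a curried version of `sn_j_score` above with its frequency table and order fixed) are illustrative assumptions.

```python
# Sketch of aggregation with relaxed memoization over the score matrix.
import numpy as np

def select_with_memoization(corpora, score_fn, n_select):
    """corpora[l][s]: token list of line s in language l (lines aligned
    across languages); score_fn(tokens, covered): per-language scorer."""
    L, S = len(corpora), len(corpora[0])
    covered = [set() for _ in range(L)]
    scores = np.array([[score_fn(corpora[l][s], covered[l]) for s in range(S)]
                       for l in range(L)])
    chosen = []
    for _ in range(n_select):
        s_star = int(np.argmax(scores.sum(axis=0)))   # sum = one vote per language
        chosen.append(s_star)
        scores[:, s_star] = -np.inf                   # never re-select
        for l in range(L):
            new = set(corpora[l][s_star]) - covered[l]
            covered[l] |= new
            for t in range(S):                        # update affected entries only
                if t not in chosen and new & set(corpora[l][t]):
                    scores[l, t] = score_fn(corpora[l][t], covered[l])
    return chosen
```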
### Evaluation Method and Metrics
Existing multilingual systems produce multiple outputs, one from each source language, rendering comparison messy. To simplify, we combine the translations from all source languages into one using an existing _centeredness method_ (Zhou and Waibel, 2021). Using this method, we score each translated sentence by the sum of its similarity scores to all others, and take the highest-scoring translation as our combined output. The expected value of the combined score is higher than that of each individual source.
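A minimal sketch of this combination step, assuming some pairwise `similarity` function between hypotheses (for example, sentence-level chrF):

```python
# Sketch of the centeredness combination over source-language hypotheses.
def combine_centered(hypotheses, similarity):
    """hypotheses: candidate translations of one sentence, one per source."""
    def centeredness(h):
        return sum(similarity(h, other) for other in hypotheses if other is not h)
    return max(hypotheses, key=centeredness)
```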
To compare effectively, we control all test sets to be the same. Since different active learning strategies produce different seed corpora to be used as training and validation sets, the training and validation sets vary. Their complement, the test set, therefore also varies, rendering comparison difficult. To build the same test set, we devise an _intersection method_: we take the whole text and carve out all seed corpora, that is, all training and validation sets from all experiments. The remainder is the final test set, the intersection of all test sets.
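In code, the intersection method amounts to removing the union of every experiment's training and validation line indices from the full text; a minimal sketch, assuming line-index sets:

```python
# Sketch of the intersection method for a shared test set.
def intersection_test_set(n_lines, seed_corpora):
    """seed_corpora: iterable of sets of train+valid line indices, one per experiment."""
    used = set().union(*seed_corpora)
    return [i for i in range(n_lines) if i not in used]
```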
Our metrics are chrF, characTER, BLEU, COMET, and BERTScore (Popovic, 2015; Wang et al., 2016; Post, 2018; Zhang et al., 2019; Stewart et al., 2020; Rei et al., 2021). We prioritize chrF over BLEU (Papineni et al., 2002) for better accuracy, fluency and expressive power in morphologically-rich languages.
## 4 Data
Existing research classifies world languages into Resource 0 to 5, with 0 having the lowest resource and 5 having the highest Joshi et al. (2020). We choose 10 target languages ranging from Resource 0 to 5 (Table 2). For each target language we choose ten neighboring languages as source languages (Table 2). We prioritize Resource 0 to 2 languages as real endangered languages, and we use Resource 3 to 5 languages as hypothetical ones.
To translate into these languages, our text is the Bible in 125 languages Mayer and Cysouw (2014). Each endangered seed corpus contains \(\sim\)3% of the text, while all other languages have full text. Our goal is to translate the rest of the text into the endangered language. In pretraining, we use a 80/10/10 split for training, validation and testing, respectively. In training, we use approximately a 3.0/0.2/96.8 split for training, validation and testing, respectively. Our training data for each experiment is \(\sim\)1,000 lines. We use BPE with size of \(\sim\)3,000 for the endangered language and \(\sim\)9,000 for the combined Sennrich et al. (2016).
Training on \(\sim\)100 million parameters with Geforce RTX 2080 Ti and RTX 3090, we use a 6-layer encoder and a 6-layer decoder with 512 hidden states, 8 attention heads, 512 word vector size, 2,048 hidden units, 6,000 batch size, 0.1 label smoothing, 2.5 learning rate and 1.0 finetuning learning rate, 0.1 dropout and attention dropout, a patience of 5 after 190,000 steps in [N]\({}^{2}\) with an update interval of 1000, a patience of 5 for [N+1]\({}^{2}\) with an update interval of 200, and a patience of 25 for [N+1] and [1]\({}^{2}\) with an update interval of 50, "adam" optimizer and "noam" decay
\begin{table}
\begin{tabular}{l l l l} \hline \hline Target & L & Family & Source Languages \\ \hline Frisian & 0 & Germanic & English*, German, Dutch, Norwegian, Afrikaans, Swedish, French, Italian, Portuguese, Romanian \\ Hmong & 0 & Hmong–Mien & Komrein*, Vietnamese, Thai, Chinese, Myanmar, Haka, Tangsa, Zokam, Siyin, Falam \\ Pokomchi & 0 & Mayan & Chuj*, Cakchiquel, Mam, Kanjobal, Cuzco, Ayacucho, Bolivian, Huallaga, Aymara, Guajajara \\ Turkmen & 1 & Turkic & Kyrgyz*, Tuvan, Uzbek, Karakalpak, Kazakh, Azerbaijani, Japanese, Korean, Finnish, Hungarian \\ Sesotho & 1 & Niger–Congo & Yoruba*, Gikuyu, Xhosa, Kuanyama, Kpelle, Fon, Bulu, Swati, Venda, Lenje \\ Welsh & 1 & Celtic & English*, German, Danish, Dutch, Norwegian, Swedish, French, Italian, Portuguese, Romanian \\ Xhosa & 2 & Nguni & Swati*, Gikuyu, Sesotho, Yoruba, Lenje, Gbaya, Afrikaans, Wolaitta, Kuanyama, Bulu \\ Indonesian & 3 & Austronesian & Javanese*, Malagasy, Tagalog, Ilokano, Cebuano, Fijian, Sunda, Zokam, Wa, Maori \\ Hungarian & 4 & Uralic & Finnish*, French, English, German, Latin, Romanian, Swedish, Spanish, Italian, Portuguese \\ Spanish & 5 & Romance & English*, German, Danish, Dutch, Norwegian, Swedish, French, Italian, Portuguese, Romanian \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the target languages used (Campbell and Belew, 2018; Collin, 2010). L, the resource level, is on a scale of 0 to 5 (Joshi et al., 2020). Reference languages used for all active learning methods except the aggregation methods are starred.
method (Klein et al., 2017; Papineni et al., 2002).
## 5 Results
For simplicity, we use the centeredness method to combine translations from all source languages and have one score per metric. To compare across different methods, all experiments have the same test set (3,461 lines), the intersection of all test sets.
**Our models improve over the baselines:** With Schedule \(I\), we observe an average improvement of 24.7 in chrF score over the M2M100 baseline (Table 4). By active learning with the 4-gram model, we observe an increase of 28.8 in chrF score over the bilingual baseline.
**Our strategic training schedule improves the translation further by activating the knowledge of M2M100 :** With Schedule \(B\) and the 4-gram model, we observe an average improvement of 18.6 in chrF score over the multilingual baseline (Table 3). For Schedule \(I\), the increase is 24.8 over the multilingual baseline (Table 4). Indeed, the increase with the activation of M2M100 is greater.
### Training Schedules
We compare 24 training schedules using a randomly sampled seed corpus (\(\sim\)1,000 lines) to translate into Frisian (Tables 5 and 6).
**Pretraining with [N]\({}^{2}\) works well without M2M100:** We compare 8 training schedules without M2M100 (Table 6). We find that Schedule \(B\) (pretraining on [N]\({}^{2}\) and training on [N+1]\({}^{2}\) and [N+1]) and Schedule \(F\) (pretraining on [N]\({}^{2}\) and training on [N+1]) work well without M2M100. Schedule \(B\) gives a chrF score of 51.1 and Schedule \(F\) gives a chrF score of 51.2.
M2M100 is useful when a target language and its corresponding source languages are in the M2M100 list and the test set does not overlap with the M2M100 training set. However, we strongly advise discretion, as training data for large pretrained models is usually not clearly specified and most are not trained with endangered languages in mind. M2M100 training data may very likely contain the Bible data, so it only serves as a comparison and provides an alternative view to show that our model is robust with large models. When M2M100 does not apply, our models pretrained with [N]\({}^{2}\) suffice.
**Full stage training increases robustness:** For models without M2M100 we can use Schedule \(B\) (Table 7) or F (Table 10). Though the results for Frisian are similar, B is much better than F for morphologically rich languages like Pokomchi, Turkmen and Xhosa. Indeed, B with full training is more robust than F, which skips [N+1]\({}^{2}\). Similarly, for models with M2M100, we can use Schedule \(I\) (Table 8) or \(L\) (Table 9). Again, Schedule \(I\) with full training stages perform better than Schedule \(L\).
**Applying M2M100 alone gives poor results:** Schedule \(X\) produces poor results (Table 5). Problems include catastrophic forgetting, bias towards rich resource languages, and unclean data. Existing research shows some released models mislabel their English data as Welsh (Radford et al.).
**Mixed models with M2M100 perform well:** A few training schedules beat those pretrained with [N]\({}^{2}\) (Table 6). Schedule \(I\) (training on 5 stages) gives a chrF score of 52.9, L (training 3 stages skipping [N+1] and [1]\({}^{2}\)) gives 52.8, M (training 4 stages skipping [N+1]\({}^{2}\)) gives 52.7, J (training 4 stages skipping [1]\({}^{2}\)) gives 51.8, and N (training 3 stages skipping [N+1]\({}^{2}\) and [1]\({}^{2}\)) gives 51.9. All are higher than those without M2M100.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \(\uparrow\)chrF & Frisian & Welsh & Hungarian & Spanish & Average \\ \hline
**Baselines:** & & & & & \\ + Bilingual & 23.1 & 22.2 & 20.1 & 22.1 & 21.9 \\ + Multilingual & 28.0 & 26.5 & 22.3 & 26.8 & 25.9 \\ + M2M100 & 26.0 & 9.9 & 38.8 & 47.5 & 24.9 \\ \hline
**Our Models:** & & & & & \\ + Schedule I & 53.5 & 49.5 & 42.2 & 53.2 & 49.6 \\ + Active (AL) & 54.9 & 49.8 & 43.2 & 54.9 & 50.7 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for translation into 4 languages that are new and severely low resourced to the system, activating knowledge in M2M100 and leveraging active learning.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline \(\uparrow\)chrF & Frisian & Hmong & Pokomchi & Turkmen & Sesotho & Welsh & Xhosa & Indonesian & Hungarian & Spanish & Average \\ \hline
**Baselines:** & & & & & & & & & & \\ + Bilingual & 23.1 & 25.0 & 28.7 & 18.9 & 25.2 & 22.2 & 21.4 & 27.2 & 20.1 & 22.1 & 23.4 \\ + Multilingual & 28.0 & 28.1 & 31.9 & 22.6 & 28.3 & 26.5 & 23.9 & 29.7 & 22.3 & 26.8 & 26.8 \\ \hline
**Our Models:** & & & & & & & & & & \\ + Schedule B & 50.5 & 43.9 & 42.8 & 38.9 & 43.2 & 46.0 & 34.9 & 47.2 & 37.4 & 50.1 & 43.5 \\ + Active (AL) & 53.6 & 45.7 & 44.4 & 40.3 & 44.9 & 47.7 & 36.8 & 49.1 & 39.0 & 52.7 & 45.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for translation into 10 languages that are new and severely low resourced to the system, independent of M2M100.
**Adapting M2M100 to the domain and then to the endangered language works best:** Schedule \(I\) (training on all 5 stages) with a score of 52.9 performs best. These models first adapt M2M100 to the domain by doing another pretraining on [N]\({}^{2}\). After adapting M2M100 to the domain, we adapt the model to the endangered language by training on [N+1]\({}^{2}\). The final two stages [N+1] and [1]\({}^{2}\) are optional.
### Active Learning Methods
Using Schedule \(B\) without M2M100, and Schedule \(L\) with M2M100, we compare 14 active learning methods across languages (Tables 7 and 8).
**Normalizing by sequence length improves density:** Without normalization, the model chooses longer sentences with many rare words. Normalization improves density. For Sesotho, the chrF score is 39.0 without normalization and 41.6 with it.
**Marginal benefit of increasing n-gram order wanes:** Existing research shows bigrams suffice [1]. As the n-gram order increases, the data gets sparser and the marginal benefit subsides. Hmong has the best score (46.1) using bigrams.
**Tipping points vary with language:** The optimal highest n-gram order may differ from language to language. 4-grams work best for Frisian while bigrams work best for Hmong. Hmong is an isolating language while Frisian is a fusional language. A possible explanation is that higher n-grams may have more impact on fusional languages.
**Entropy and n-gram methods both beat baselines and higher n-gram models perform best:** KenLM is much faster and performs better than NLTK. The entropy method using KenLM beats both baselines. Frisian has a chrF score of 52.7 with the entropy method using KenLM. This is much higher than the baselines: _Luke_ (47.5) and _Rand_ (50.5). The 4-gram model (53.6) is higher still, because building LMs from a few lines of data may not be accurate. Simpler n-gram models work better than more involved entropy models with small data.
**Aggregation over all languages serves as a universal ranking:** The first 10 active learning methods are based on learning from one reference language and generalizing to the endangered language, while the last 4 focus on aggregation over multiple languages (Table 7 and 8). For Welsh, aggregation over multiple languages (48.2 with most spoken languages) performs better than those that rely on one reference language; but for other languages aggregation performs worse. Aggregation over all languages performs better than other aggregation methods for all languages except Welsh. This hinges on the reference language. For Frisian, choosing English (a Germanic language) as a reference language, performs better than aggregation. For Welsh (a Celtic language), choosing a reference language that is not as close, performs worse. But we often do not have such information for endangered languages. In such cases, universal ranking by aggregating over all languages is useful.
**Our active learning methods mimic curriculum learning:** Our models pick short and simple
\begin{table}
[Tables 5 and 6: the 24 training schedules A to X for Frisian, marking with \(\Downarrow\) which of the stages [N]\({}^{2}\), [N+1]\({}^{2}\), [N+1] and [1]\({}^{2}\) each schedule trains (cf. Figure 3), together with the resulting chrF scores.]
\end{table}
Our methods pick easier sentences first, emulating curriculum learning and helping human translators [1, 1, 2].
**All active learning methods cover different genres:** Our methods pick a mix of sentences from different genres, sentence lengths and complexity levels. Moreover, our methods pick narrative sentences first, which is helpful for human translators.
**Our model captures some language subtleties:** Apart from the metrics, we showed our translation to native speakers (Table 12). We translate "He sees that it is good" to "lug ca rua huv nwg lu sab" ("He puts it in the liver") in Hmong, which uses liver to express joy. This increases lexical choice.
**Our models and mixed models perform better than M2M100 alone:** M2M100 often produces extremely short sentences or repetition. Our models do not have those issues.
## 6 Future Work
We propose 24 training schedules for translation into endangered languages. We also propose and compare 14 active learning methods to build a seed corpus without any endangered language data. Our approach remains robust when combined with large multilingual models.
While the industry trend is to move towards bigger models with bigger data, our minimalist approach not only uses fewer languages but also aggregates over fewer of them. This saves computation power and resources, and therefore time and money, while improving translation performance.
However, we still face challenges with the lack of local coherence and context. The excerpt-based approach enjoys an advantage in formality, cohesion and contextual relevance. Active learning methods, on the contrary, do not select consecutive sentences and therefore lose local coherence and pose challenges to human translators [16, 1, 15, 14, 17, 18]. This is an active research area.
Evaluation is still a challenge. It is difficult to find native speakers and establish long-term collaborations. There is also much variety among endangered languages. Some are more accessible than others, and these might provide earlier, realistic evaluation of our method. Empowering endangered languages is not just a technology problem; it requires much effort in communication with local communities.
\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \hline \hline \(\uparrow\)chrF & Frisian & Hmong & Pokomchi & Turkmen & Sesotho & Welsh & Xhosa & Indonesian & Hungarian & Spanish & Average \\ \hline \multicolumn{13}{l}{**Baselines:**} \\ + _Luke_ & 47.5 & 41.6 & 39.4 & 34.9 & 41.2 & 41.2 & 32.0 & 43.3 & 34.4 & 46.7 & 40.2 \\ + _Rand_ & 50.5 & 43.9 & 42.8 & 38.9 & 43.2 & 46.0 & 34.9 & 47.2 & 37.4 & 50.1 & 43.5 \\ \hline \multicolumn{13}{l}{**Our Models:**} \\ + \(S\) & 49.2 & 38.5 & 40.4 & 35.2 & 39.0 & 41.9 & 32.5 & 43.5 & 35.1 & 48.0 & 40.3 \\ + _SN_ & 50.9 & 43.9 & 43.2 & 38.3 & 41.6 & 43.2 & 36.1 & 46.9 & 36.7 & 50.3 & 43.1 \\ + _SNG\({}_{2}\)_ & 53.2 & **46.1** & 43.3 & 39.5 & 44.4 & 45.8 & 36.6 & 48.4 & 37.8 & 51.8 & 44.7 \\ + _SNG\({}_{3}\)_ & 52.7 & 46.0 & **44.5** & 39.6 & **45.5** & 47.5 & **36.8** & 48.9 & **39.2** & 52.3 & 45.3 \\ + _SNG\({}_{4}\)_ & **53.6** & 45.7 & 44.4 & **40.3** & **44.9** & 47.7 & 36.8 & **49.1** & 39.0 & **52.7** & **45.4** \\ + _SNG\({}_{5}\)_ & 53.0 & 45.6 & 43.9 & 39.7 & 45.4 & 46.7 & 36.8 & 49.1 & 38.4 & 52.5 & 45.1 \\ + _ENT\({}^{N}\)_ & 50.9 & 43.7 & 38.1 & 37.2 & 42.5 & 44.5 & 34.7 & 46.7 & 36.0 & 49.9 & 42.4 \\ + _ENT\({}^{K}\)_ & 52.7 & 45.7 & 43.5 & 40.2 & 44.6 & 45.2 & 36.4 & 49.0 & 39.1 & 51.8 & 44.8 \\ + _AGG\({}_{5}^{K}\)_ & 47.1 & 41.5 & 39.8 & 34.0 & 39.9 & 42.1 & 31.4 & 43.5 & 33.7 & 45.2 & 39.8 \\ + _AGG\({}_{5}^{K}\)_ & 45.0 & 38.4 & 38.5 & 32.4 & 38.8 & 47.1 & 30.4 & 41.2 & 33.3 & 44.2 & 38.9 \\ + _AGG\({}_{5}^{K}\)_ & 45.5 & 38.8 & 38.0 & 32.0 & 38.8 & **48.2** & 30.5 & 41.0 & 33.2 & 44.0 & 39.0 \\ + _AGG\({}_{5}^{K}\)_ & 45.4 & 39.1 & 38.3 & 32.4 & 38.8 & 48.0 & 30.7 & 41.2 & 33.2 & 44.3 & 39.1 \\ \hline \hline \end{tabular}
\end{table}
Table 7: 140 experiments comparing 14 active learning methods translating into 10 different languages with Schedule \(B\).
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \(\uparrow\)chrF & Frisian & Welsh & Hungarian & Spanish & Average \\ \hline \multicolumn{13}{l}{**Baselines:**} \\ + _Luke_ & 49.3 & 44.3 & 38.8 & 48.4 & 45.2 \\ + _Rand_ & 53.5 & 49.5 & 42.2 & 53.2 & 49.6 \\ \hline \multicolumn{13}{l}{**Our Models:**} \\ + \(S\) & 51.9 & 45.9 & 40.4 & 51.1 & 47.3 \\ + _SN_ & 54.8 & 47.4 & 42.3 & 53.2 & 49.4 \\ + _SNG\({}_{2}\)_ & 54.5 & 49.5 & 43.5 & 54.2 & 50.4 \\ + _SNG\({}_{3}\)_ & 54.4 & 50.4 & **43.9** & 54.5 & **50.8** \\ + _SNG\({}_{4}\)_ & **54.9** & 49.8 & 43.2 & **54.9** & 50.7 \\ + _SNG\({}_{5}\)_ & 54.5 & 50.1 & 43.5 & 54.1 & 50.6 \\ + _ENT\({}^{N}\)_ & 52.7 & 47.2 & 40.9 & 52.9 & 48.4 \\ + _EN\({}^{K}\)_ & 54.6 & 49.4 & 43.5 & 53.8 & 50.3 \\ + _AGG\({}_{5}^{K}\)_ & 49.4 & 44.2 & 37.3 & 48.2 & 44.8 \\ + _AGG\({}_{5}^{K}\)_ & 46.5 & 49.8 & 36.4 & 46.4 & 44.8 \\ + _AGG\({}_{5}^{M}\)_ & 48.6 & 50.4 & 36.5 & 46.9 & 45.6 \\ + _AGG\({}_{5}^{K}\)_ & 48.8 & **50.8** & 36.4 & 46.9 & 45.7 \\ \hline \hline \end{tabular}
\end{table}
Table 8: 56 experiments activating the knowledge in M2M100 with Schedule \(I\).
Through our technologies, we would like to work with local communities to revive endangered languages and help them flourish.
## Acknowledgements
Thanks to Alan Black, Alon Lavie, Graham Neubig, Uri Alon, David Mortensen, Kevin Haworth, Christian Hallstein for the great discussions and suggestions.
|
2308.12982 | A Survey of AI Music Generation Tools and Models | In this work, we provide a comprehensive survey of AI music generation tools,
including both research projects and commercialized applications. To conduct
our analysis, we classified music generation approaches into three categories:
parameter-based, text-based, and visual-based classes. Our survey highlights
the diverse possibilities and functional features of these tools, which cater
to a wide range of users, from regular listeners to professional musicians. We
observed that each tool has its own set of advantages and limitations. As a
result, we have compiled a comprehensive list of these factors that should be
considered during the tool selection process. Moreover, our survey offers
critical insights into the underlying mechanisms and challenges of AI music
generation. | Yueyue Zhu, Jared Baca, Banafsheh Rekabdar, Reza Rawassizadeh | 2023-08-24T00:49:08Z | http://arxiv.org/abs/2308.12982v1 | # A Survey of AI Music Generation Tools and Models
###### Abstract
In this work, we provide a comprehensive survey of AI music generation tools, including both research projects and commercialized applications. To conduct our analysis, we classified music generation approaches into three categories: parameter-based, text-based, and visual-based classes. Our survey highlights the diverse possibilities and functional features of these tools, which cater to a wide range of users, from regular listeners to professional musicians. We observed that each tool has its own set of advantages and limitations. As a result, we have compiled a comprehensive list of these factors that should be considered during the tool selection process. Moreover, our survey offers critical insights into the underlying mechanisms and challenges of AI music generation.
## 1 Introduction
Music is an integral part of human culture that has evolved significantly over the centuries, adapting to different cultures, styles, and technologies [18]. Music generation by models has also undergone a paradigm shift with the advancements in AI and machine learning [38]. Artificial intelligence (AI) music generation tools provide musicians and composers with new and innovative ways to create music, which has not only facilitated the expression of users' musical intent but also had a significant impact on their creative ownership and confidence in their ability to collaborate with AI technology [10, 56, 107]. These tools [5, 57, 43, 3, 24] use machine learning algorithms to learn from large music datasets and aim to generate new music compositions that are indistinguishable from human-generated music.
The advent of deep neural networks, a.k.a. deep learning, since 2012 has revolutionized several computer science disciplines [52], including AI music generation. Several deep learning models can generate short-term note sequences, and creating longer melodies has been made possible through recent neural network architectures such as MusicVAE [76] and TransformerVAE [46], and generative models such as Denoising Diffusion Probabilistic Models [38]. However, the longer polyphonic melodies these models generate do not necessarily follow a central motif and may lack a sense of direction. Deep learning models have also been used for harmonization, generating the harmony that accompanies a given melody, and style transfer techniques have been used to transform music from one style into another. Briot et al. [9] discussed the limitations of directly applying deep learning to music generation, including the lack of creativity and control and the need for human interaction in the composition process. Data-driven AI models tend to produce variations of existing patterns rather than completely original compositions [9]. This constraint stems from their reliance on training data, which inherently bounds their creative output.
In this survey, first, we explain the fundamental terms common in music generation, which are also applied in AI-generated music. Then, we will explore the current state of AI music generation tools and models, evaluating their functionality and discussing their limitations. Finally, by analyzing the latest tools and techniques, we aim to provide a comprehensive understanding of the potential of AI-based music composition and the challenges that must be addressed to improve their performance.
This survey aims to provide an overview of AI music generation tools and models, their capabilities, and their limitations. We start by explaining concepts to readers who are not familiar with music composition. Then, we describe our methods. In particular, we start by explaining our data collection approach. Then, we list traditional approaches that generate music without neural networks. Next, we examine the common AI-based music generation tools available today. These tools are open-source and have been used by several researchers and developers to create AI-generated music. However, not all of the models we reviewed were open-source; in such cases, we relied on official demos or explanations for comparison. The scope of this work is limited to AI music generation tools and models that employ machine learning algorithms to create music. We will not cover the broader applications of AI in music, such as music classification, recommendation systems, and music analysis.
## 2 Music Composition Concepts
In this section, we will explore fundamental concepts that contribute to the structure and organization of a musical composition. Understanding their interplay is crucial for developing AI-generated music tools.
**Tone** represents a sound with a definite pitch, characterized by its frequency, amplitude, and timbre [80]. It serves as the fundamental unit in music composition, allowing the creation of melodies, chords, and other musical structures.
**Pitch (Key)** signifies the perceived frequency of a sound, determining its position on the high-low spectrum [77]. A musical piece is typically composed in a specific pitch, establishing the tonal center and governing the relationships between the notes [51].
**Timbre** often referred to as the tonal color or quality of a sound, is the characteristic that distinguishes different sound sources even when they have the same pitch and volume[75, 84].
**Harmony** refers to the simultaneous combination of different pitches or tones that generate a pleasing auditory sensation for the listener [65].
**Chords** are groups of notes played concurrently; they form the foundation of harmony in music.
**Tempo** indicates the speed at which a musical composition is performed, typically measured in beats per minute (BPM) [55]. Tempo can significantly influence a piece's mood and emotional impact, with faster tempos often associated with excitement or energy and slower tempos linked to calmness or sadness [89]. AI-generated music tools can strategically adjust tempo to evoke specific emotions in listeners, tailoring the generated compositions to fit desired moods or feelings.
**Volume** denotes the perceived loudness of a sound, which is closely related to its amplitude or intensity. It is a scalar quantity that represents the magnitude of the acoustic energy being transmitted, typically measured in decibels (dB)[60].
**Style** encompasses the distinctive characteristics and techniques utilized by a composer or performer, thereby shaping the unique identity of their musical creations [59]. When style is applied to AI-generated music tools, analyzing and learning from existing musician-created music enables the emulation of styles from different composers or genres, consequently generating new compositions that reflect the unique artistic traits of the artist or historical era.
**Chorus** refers to a recurrent section within a song, often featuring a memorable melody and lyrics that convey the piece's central theme [16].
**Polyphonic music** refers to music that consists of multiple independent melody lines played or sung simultaneously. These melody lines interact with each other to create harmonies, counterpoints, and textures that are richer and more complex than monophonic music, which consists of a single melody line.
**MIDI** (Musical Instrument Digital Interface) is a standard protocol for communication between electronic musical instruments, computers, and other digital devices. MIDI enables the exchange of musical information, such as notes, velocities, and control messages, between different devices and software applications. It allows musicians and producers to control and synchronize different instruments and devices, and to record and edit musical performances with precision and flexibility.
**Key Velocity** also known as key strike velocity or keystroke velocity, is a measurement of how forcefully a key is pressed on a MIDI keyboard or other MIDI controller. This value is usually expressed as a number between 0 and 127, with 0 indicating that the key was not pressed at all, and 127 indicating that the key was pressed with maximum force.
**ABC or abc notation[1]** is a shorthand notation system for writing music using ASCII characters. Developed in the 1970s and 1980s, it is commonly used in the Celtic and folk music traditions to share and distribute traditional music. It uses letters and symbols to represent musical notes, rhythms, and other elements, making it an easy-to-learn and widely used method for sharing music in digital formats such as over the Internet.
**Pianoroll** is an interface in Digital Audio Workstations (DAWs) [53] enabling manipulation of Musical Instrument Digital Interface (MIDI) data. It utilizes a grid where the X-axis denotes time and the Y-axis represents pitch. The duration and intensity (velocity) of notes are adjustable, making the pianoroll integral to structuring musical compositions and vital for developing AI-generated music tools.
**Chromagram** is a visualization that illustrates the intensity of various pitches in a musical piece over time. As demonstrated in Figure 1, each pitch class (C to B) corresponds to a unique row on the y-axis and the x-axis represents time. The color at each point in this 2D space reveals the energy level of a pitch class at the given time, with brighter colors indicating higher energies. The chromagram in Figure 1 depicts the melody of "Twinkle Twinkle Little Star"; a short code sketch for computing one appears after this list of concepts.
**Accompaniment** in music refers to the supportive, often harmonic, elements that provide a backdrop for the main melody or theme of a song.
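To ground the chromagram concept in code, the sketch below (assuming the librosa and matplotlib packages are installed) computes and plots a chromagram like the one in Figure 1; the file `twinkle.wav` is a hypothetical recording of the melody.

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load audio and compute the 12-row chroma matrix (pitch class x time frame).
y, sr = librosa.load("twinkle.wav")  # hypothetical recording
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

# Brighter colors indicate higher energy for a pitch class at a given time.
fig, ax = plt.subplots()
img = librosa.display.specshow(chroma, y_axis="chroma", x_axis="time", ax=ax)
fig.colorbar(img, ax=ax)
plt.show()
```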
### Interplay of concepts in AI-generated Music
AI-generated music tools can produce compositions with coherent and aesthetically pleasing progressions by incorporating knowledge of harmony and chords. Additionally, identifying patterns in choruses of popular songs allows AI-generated music tools to create new songs with catchy and memorable melodic hooks that resonate with listeners.
In conclusion, understanding the interplay of these fundamental musical concepts is vital for developing sophisticated AI-generated music tools capable of producing human-like compositions which are emotionally expressive and contextually appropriate. By effectively modeling and utilizing these concepts, an AI model can create compositions that potentially enrich the musical landscape with innovative structures and motifs, demonstrating a promising avenue for the future of music generation and technology. Furthermore, this knowledge bridges the gap between traditional music creation and AI-generated music, facilitating interdisciplinary collaboration and driving advancements in music technology.
## 3 Methods
### Data Collection
To create a comprehensive list of AI music generation tools and models, we employed a keyword search method across several platforms, along with the assistance of ChatGPT v3.5, ChatGPT v4, and Bard 1 to refine the keyword search list and web resources. To identify the list of keywords, three authors of the paper first compiled lists of keywords separately and then merged their results. Next, we used the listed Large Language Models (LLMs) to refine the list of keywords and the list of web resources to search for these keywords.
Figure 1: Chromagram of ”Twinkle Twinkle Little Star”, illustrating pitch class energy across time.
The list of web resources: _Google Search, Google News, Bing News, Google Scholar, Twitter, GitHub, YouTube, Reddit 2, Hacker News 3_, and _Huggingface_ 4
Footnote 2: [https://reddit.com](https://reddit.com)
Footnote 3: [https://news.ycombinator.com](https://news.ycombinator.com)
Footnote 4: [https://huggingface.co](https://huggingface.co)
The list of keywords: _AI music, AI music generation, Diffusion music generation, Neural Networks Music Generation, Machine Learning Music, Music Generation Models, Music Generation Algorithms, Music AI, Music Technology, Computer-generated Music, Deep Learning Music._
The prompt we have used on our LLM platform is as follows: _I am searching for music generated AI in the following platforms; [web resources] by using the following keywords; [keywords]. Do you have any platform or keyword recommendation that I missed?_
### Taxonomy of Music Generation Tools
We provide a visual representation of the progression of these music generation models from their inception to the present day in Figure 2. The timeline traces the development from the early non-neural-network methods to the latest AI-enabled, parameter-free models, clearly illustrating the significant advancements in music generation technology over the years.
Traditionally, AI music generation tools use a variety of methods, including Markov chains [33], rule-based models [94, 90], and evolutionary algorithms [107, 29, 102]. These
Figure 2: Timeline of Music Generation Models
methods are typically parameter-based, necessitating human input of relevant parameters or configurations to guide the music-generating process.
Later models have relied on neural networks. Since the quality of generated music via the neural network is significantly better than traditional methods, we separate their explanation in this survey. We classify neural network models into two categories: parameter-based and parameter-free models. Parameter-based models are those that require specific input parameters, such as the desired 'tempo' or 'key', to generate music. On the other hand, parameter-free models do not require any specific input parameters, and they are further subdivided into two categories: prompt-based and visual-based models. Prompt-based models allow users to input descriptive texts as prompts to generate music, while visual-based models use visual inputs such as images or videos.
Furthermore, Figure 3 presents a hierarchical taxonomy of the music generation models, which further elucidates the categorizations and subcategories of these models. This tree structure allows a more granular understanding of the differences and relationships between the various models, as well as their evolution over time.
#### 3.2.1 Non-Neural Network Approaches
Based on our studies and the assistance of large language models (Bard 6, ChatGPT 3.5 7, Claude 8), we identified three non-neural-network approaches to music generation: Markov chains, rule-based models, and evolutionary algorithms. Table 1 illustrates these methods, their key functionalities, and their underlying models. The Classification column in our tables, as discussed in Section Taxonomy of Music Generation Tools, categorizes the music generation tools based on the type of input they require to generate music.
Footnote 6: [https://bard.google.com](https://bard.google.com)
Footnote 7: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
Footnote 8: [https://www.anthropic.com/product](https://www.anthropic.com/product)
Figure 3: A Hierarchical Taxonomy of Music Generation Models
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Tool/Method** & **Main Function** & **Non-Neural Network Model** & **Classification** \\ \hline Markov Melody Generator [39] & Analyzes patterns in existing music to create new compositions & Markov Chains[85, 6, 71] & Parameter-based \\ \hline Melisma Stochastic Melody Generator [88] & Generates melody sequences using stochastic processes & Markov Chains[88] & Parameter-based \\ \hline MuseScore [83] & Allows composition based on user-defined patterns & Rule-Based Systems[83] & Parameter-based \\ \hline APOPCALEAPS [90] & Generates pop music based on chance rules & Rule-Based Systems[94, 90] & Parameter-based \\ \hline GenJam [7] & Uses selection, mutation, and crossover of musical sequences to generate new music & Genetic Algorithm[7] & Parameter-based \\ \hline Procedural Piano Composition [17] & Evolves random music pieces using mood templates until their mood aligns with the template & Genetic Algorithm[17] & Parameter-based \\ \hline Marques’ Music Generator [58] & Uses melodic and musical theory concepts as a fitness function in a genetic algorithm & Genetic Algorithm[58] & Parameter-based \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of non-neural network music generation tools
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Tool Name** & **Main Function** & **Access Interface** & **Neural Network Model** & **Classification** \\ \hline Magenta[5] & Melody, drum, piano, chords generation using deep learning & Plug-in, command-line, Python script & RNN-LSTM[61, 86], VAE[48], GANSynth[27], WaveNet[97], Transformer networks[100] & Parameter-based \\ \hline Jukebox[23, 62] & Music generation with vocal synthesis & Python & VQ-VAE[98] & Parameter-based \\ \hline Music Generation with Sentiment[30] & Music generation with sentiment analysis & Python & mLSTM[50] & Parameter-based \\ \hline MuseNet[64] & Music generation with style transfer capabilities & Web-based & Transformer[100] (decoder-only) & Parameter-based \\ \hline Music Transformer[41] & Music generation with long-term structure & Python & Transformer[100] & Parameter-based \\ \hline SDMuse[104] & Music editing and generation framework & Not released & SDE[93], Gaussian Diffusion Model\({}^{5}\) & Parameter-based \\ \hline Riffusion[31] & Audio clip generation from text and images & Web user interface & Stable Diffusion v1.5[78] & Prompt-based \\ \hline Moúsai[82] & Generative music from textual prompts & Python & Diffusion Magnitude Autoencoder (DMAE)[66] & Prompt-based \\ \hline Noise2Music[44] & High-quality music audio generation from text prompts & Not released & Diffusion Model[72, 91] & Prompt-based \\ \hline MusicLM[3] & High-fidelity music generation from text prompts & Not released & SoundStream, w2v-BERT, and MuLan & Prompt-based \\ \hline CMT[24] & Video background music generation from video & Python & Transformer[100] & Visual-based \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of neural network-based music generation tools, their main function, access interface, underlying model, and classification.
Markov chains are mathematical models that are used to analyze and predict the behavior of systems that exhibit a property called the Markov property [36]. This property states that the future state of a system depends only on its current state, and not on its past history. By predicting the probability of transitions between different musical notes or events, they have been used in melody generation to create new melodies that flow smoothly and naturally [6]. Later, Markov chains have also been used to create customized music based on user manual input of moods [71]. Recently, Shapiro et al. [85] employ Markov chains for music generation to analyze patterns in existing music and create new compositions.
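To make the mechanism concrete, the sketch below implements a minimal first-order Markov melody generator in Python. It illustrates the general technique rather than the code of any specific tool in Table 1, and the toy corpus of MIDI pitches is invented for the example.

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length=16):
    """Sample a new melody by walking the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        followers = counts[melody[-1]]
        if not followers:
            break  # no known continuation from this pitch
        pitches = list(followers)
        weights = [followers[p] for p in pitches]
        melody.append(random.choices(pitches, weights=weights)[0])
    return melody

# Toy corpus of MIDI pitches (fragments in C major).
corpus = [[60, 62, 64, 62, 60], [64, 65, 67, 65, 64, 62, 60]]
table = learn_transitions(corpus)
print(generate(table, start=60))
```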
Another group of works focuses on Rule-based music generation, which employs predefined rules to create musical data that follows specific patterns or styles. An instance of this approach is provided by Spangler [94], who introduced a system for real-time accompaniment. This system extracts and employs a set of harmonic rules from musical examples to generate new harmonies in response to an input melody, demonstrating the application of deterministic rule sets in music generation. On a different note, Sneyers and De Schreye [90] developed APOPCALEAPS, a system for generating and learning pop music that leverages a unique form of probabilistic rules. These rules, learned automatically from examples, allow for a range of potential musical outcomes, providing an element of randomness within defined parameters. This chance-based methodology has been shown to foster versatility and creativity in music generation.
A third group of works focused on evolutionary algorithms, specifically genetic algorithms, in the sphere of music generation. These algorithms engage in the selective identification of the best musical sequences, refining them through mutation and crossover, thus fostering the genesis of novel compositions. In 1994, Biles introduced GenJam (Genetic Jammer) [7], an interactive genetic algorithm that evolves jazz solo improvisations by evolving populations of music measures, using real-time feedback from a human mentor as a fitness function. Later, in 2000, Marques et al. [58] applied genetic and evolutionary algorithms to music generation, wherein melodic and music theory concepts were used as a fitness function, enabling efficient navigation through large search spaces to find creative solutions. More recently, in 2021, Rocha de Azevedo Santos et al. [17] employed a genetic algorithm to create a piano music composition system using mood templates. This system evolves random music pieces until their mood aligns with the template, using MIDI statistical features to calculate the music-template distance. Despite some constraints, such as the computer-generated feel due to the lack of rhythm regularity and synchronicity, these examples underscore the potential of evolutionary algorithms in the autonomous generation of music and their role in emulating modern composers' creative strategies.
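The shared skeleton of these systems, selection by a fitness function followed by crossover and mutation, fits in a short sketch. The fitness used here (rewarding pitches in C major) is a deliberately simple stand-in for the mentor feedback, music-theory, or mood-template criteria described above.

```python
import random

SCALE = {0, 2, 4, 5, 7, 9, 11}  # C-major pitch classes

def fitness(melody):
    """Toy fitness: count notes whose pitch class lies in C major."""
    return sum(1 for p in melody if p % 12 in SCALE)

def mutate(melody, rate=0.1):
    """Randomly nudge some pitches up or down."""
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in melody]

def crossover(a, b):
    """Single-point crossover of two melodies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Evolve a population of 8-note melodies over the MIDI pitch range [48, 84].
population = [[random.randint(48, 84) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(10)]
print(max(population, key=fitness))
```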
#### 3.2.2 Neural Network-based Music Generation
In the following section, we undertake a systematic examination of music generation models driven by neural networks. The discourse is partitioned into three distinct subsections, each focused on a particular type of model: parameter-based models, prompt-based models, and visual-based models. In each case, we provide a thorough examination of the model architecture, delineating its key features and capabilities. Additionally, we offer a balanced assessment of the benefits and potential limitations of each model. Our objective is to provide a holistic view of the landscape of neural network-based music generation models, elucidating the various mechanisms by which they operate and their
respective merits and drawbacks.
Table 2 provides a summary of various music generation tools based on their main function, interface access, neural network models, and classification. Note that some models were not accessible due to restrictions implemented by the owner corporation.
## 4 Parameter-based Music Generation Tools
Parameter-based Music Generation is a specific category of models requiring certain input parameters to function effectively. The parameters in question could range from musical attributes like 'tempo' or 'key' to more abstract inputs such as 'mood' or 'temperature'.
Unlike their Parameter-free counterparts, which function autonomously without the need for specific inputs, Parameter-based models offer a higher degree of control to the users, allowing them to steer the generated music in a direction that aligns with their preference. This is achieved by tailoring the input parameters which guide the music generation process.
### Magenta
#### 4.1.1 Overview
Magenta[5], an open-source research initiative, is an expansive framework exploring the incorporation of machine learning in the artistic and musical creative processes. The authors describe their focus as exploring the possibility of developing intelligent tools and interfaces that facilitate the integration of music generation models into the creative process of artists and musicians. Their goal is not to replace the creative process but to enhance it.
Magenta's primary repositories, including its popular Python TensorFlow library, can be found on GitHub9, offering an array of neural network models. These encompass multiple Sequential (RNN/LSTM[87, 86, 61]) models tailored for generating drum and melody sequences, the MusicVAE[76], and GANSynth[27] which utilizes generative adversarial networks[35] for audio synthesis.
Footnote 9: [https://github.com/magenta](https://github.com/magenta)
In this paper, we present an examination of several music generation models created by Magenta, specifically, Music VAE, NSynth, Melody RNN, and Drums RNN. These models were selected due to their recognized influence within the field and their relevance to contemporary trends in music generation. This estimation of popularity was made through an indirect assessment using large language models, such as ChatGPT, to ascertain the frequency and context of their mentions within relevant literature and digital discourse.
#### 4.1.2 Architecture
Magenta contains several neural network models for music generation, which can be classified into three distinct model types: sequential models [87] such as recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) networks, variational autoencoders (VAEs) [76], and the neural synthesizer (NSynth) [28]. We will explain each model briefly in this section.
Sequence-based music generation models, such as Melody RNN [20], Improv RNN [19], and Polyphony RNN [21], are trained to learn the distribution of musical patterns and structures in a given dataset. These models are capable of generating new music by predicting the next note in a sequence of musical notes given the previous ones. The learned distribution of musical patterns and structures from training data enables the models to generate music in a variety of styles and genres.
VAEs are probabilistic generative models that learn the probability distribution of the input dataset [48]. MusicVAE [76] is a hierarchical recurrent VAE for music generation, capable of generating new music by sampling from the learned distribution. The model is trained on a dataset of approximately 1.5 million MIDI files collected by the authors [76], and it learns a low-dimensional representation of the input data, which can be used to generate new music with various styles and characteristics. The reconstruction quality was also evaluated on the publicly available Lakh MIDI Dataset [68].
NSynth [28] is a neural autoregressive model for music generation [26] that uses a WaveNet-based [97] autoencoder to generate high-quality music with complex and diverse sound characteristics. Autoregressive models learn the probability distribution of the next note in a sequence given the previous ones. However, they require a large amount of computational resources and a large dataset of musical examples for training.
#### 4.1.3 Features and capabilities
The core foundation of Magenta revolves around the concept of music note sequences, which are an abstract representation of a series of notes with varying pitches, instruments, and strike velocities, similar to MIDI. As demonstrated on the official website [4], the primary usage of Magenta involves generating MIDI-style music with a single-track melody.
NoteSequence is a data structure that captures various aspects of music, such as timing, pitch, and instrument information. The NoteSequence library is an essential component of the Magenta project, and it focuses on the serializable NoteSequence representation and provides a variety of utilities for working with musical data. The main functionalities of the library include creating note sequences from different formats (e.g., MIDI, abc[1], MusicXML[74]), manipulating note sequences (e.g., slicing, quantizing), extracting components from note sequences (e.g., melodies, drum tracks, chords), exporting note sequences to different formats (e.g., MIDI or audio) and converting note sequences to representations useful for model training (e.g., one-hot tensors).
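As a minimal sketch of this workflow (assuming the open-source note-seq package, installable via `pip install note-seq`), the snippet below builds a NoteSequence by hand and exports it to a standard MIDI file:

```python
from note_seq.protobuf import music_pb2
import note_seq

seq = music_pb2.NoteSequence()
# Opening of "Twinkle Twinkle Little Star" as MIDI pitches.
for i, pitch in enumerate([60, 60, 67, 67, 69, 69, 67]):
    seq.notes.add(pitch=pitch, velocity=80,
                  start_time=0.5 * i, end_time=0.5 * (i + 1))
seq.tempos.add(qpm=120)
seq.total_time = 3.5

# Export the abstract representation to a MIDI file on disk.
note_seq.sequence_proto_to_midi_file(seq, "twinkle.mid")
```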
A summary of the functionalities of Magenta models is presented in Table 3.
In the following, we explain Magenta's different neural network models to achieve various tasks and their respective functions.
Nsynth: Neural Audio Synthesis (NSynth) model uses WaveNet-based autoencoder to generate audio[28]. Unlike traditional synthesizers that generate audio from hand-designed components, such as oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples. By learning directly from data, NSynth provides musicians with intuitive control over timbre and dynamics, allowing them to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.
One of the key features of NSynth is its ability to factorize the generation of music into notes and other musical qualities, such as timbre. It assumes that the generation
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Model** & **Description** \\ \hline Arbitrary Image Stylization & A machine learning system for performing fast artistic style transfer that may work on arbitrary painting styles. \\ \hline Coconet & Trains a CNN to complete partial musical scores. \\ \hline Drums RNN & Applies language modeling to drum track generation using a LSTM. \\ \hline GANSynth & Uses a GAN-based model to generate audio[28]. \\ \hline Image Stylization & A ”Multistyle Pastiche Generator” that generates artistic representations of photographs. Described in ”A Learned Representation For Artistic Style”. \\ \hline Improv RNN & Generates melodies conditioned on an underlying chord progression using an LSTM. \\ \hline Melody RNN & Applies language modeling to melody generation using an LSTM. \\ \hline Music Transformer & Generates musical performances, either unconditioned or conditioned on a musical score. \\ \hline Music VAE & A hierarchical recurrent variational autoencoder for music. \\ \hline NSynth & Neural Audio Synthesis with WaveNet Autoencoders”. \\ \hline Onsets and Frames & Automatic piano music transcription model with a dual-objective approach. \\ \hline Performance RNN & Applies language modeling to polyphonic music using a combination of note on/off, timeshift, and velocity change events. \\ \hline Piano Genie & Learns a low-dimensional discrete representation of piano music using an encoder-decoder RNN. \\ \hline Pianoroll RNN-NADE & Applies language modeling to polyphonic music generation using an LSTM combined with a NADE. \\ \hline Polyphony RNN & Applies language modeling to polyphonic music generation using an LSTM. \\ \hline RL Tuner & Enhances an LSTM trained for monophonic melody generation using reinforcement learning (augmenting deep Q-learning with a cross-entropy reward). \\ \hline Score2Perf & Generates musical performances, either unconditioned or conditioned on a musical score. \\ \hline \end{tabular}
\end{table}
Table 3: List of the Magenta Models in Model Directory[22]
of music can be modeled by factoring the quality of the notes and the other musical qualities. To facilitate this, the developers built the NSynth dataset 10, which is a large collection of annotated musical notes sampled from individual instruments across a range of pitches and velocities. Alongside the dataset, the developers also released the WaveNet-style autoencoder model, which learns codes that meaningfully represent the space of instrument sounds.
Footnote 10: [https://magenta.tensorflow.org/datasets/nsynth](https://magenta.tensorflow.org/datasets/nsynth)
Melody-rnn: Melody RNN has four configurations: basic, mono, lookback, and attention. The basic configuration uses one-hot encoding to represent extracted melodies as input to the LSTM. For training, all examples are transposed to the MIDI pitch range [48, 84], and outputs are also in this range. The mono configuration is similar to the basic configuration but can use the full 128 MIDI pitches. The lookback configuration introduces custom inputs and labels that allow the model to recognize patterns that occur across one and two bars and helps the model recognize patterns related to an event's position within the measure. Finally, the attention configuration uses attention mechanisms to access past information without having to store that information in the RNN cell's state. This allows the model to capture longer-term dependencies and generate melodies with longer arching themes.
To optimize the performance of the Melody RNN model, several hyperparameters can be adjusted. These hyperparameters include the batch size, which determines the number of sequences used in each training iteration, as well as the number of time steps in the generated sequence, learning rate, and the number of hidden units in each LSTM layer. Furthermore, the dropout keep probability is used to determine the probability that a given hidden unit will be retained during training, and the attention length specifies the length of the attention mechanism. Additionally, the number of training steps can be specified to determine when to stop the training loop, while the hyperparameter eval ratio is used to determine the fraction of examples reserved for evaluation during training. By tuning these hyperparameters, users can customize the model's performance to generate melodies that better suit their needs.
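To illustrate the basic configuration's input representation, the sketch below one-hot encodes a melody over the MIDI pitch range [48, 84]. It is a simplification of Magenta's actual encoder, which also reserves events such as note-off and no-event:

```python
import numpy as np

MIN_PITCH, MAX_PITCH = 48, 84
NUM_CLASSES = MAX_PITCH - MIN_PITCH + 1  # 37 pitch classes in this range

def one_hot_melody(pitches):
    """Encode a list of MIDI pitches as a (time, classes) one-hot matrix."""
    encoded = np.zeros((len(pitches), NUM_CLASSES), dtype=np.float32)
    for t, pitch in enumerate(pitches):
        encoded[t, pitch - MIN_PITCH] = 1.0
    return encoded

print(one_hot_melody([60, 62, 64]).shape)  # (3, 37)
```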
Music VAE: MusicVAE, another Magenta model based on the Variational Autoencoder (VAE) [48], provides different modes of interactive musical creation, including random sampling from the prior distribution, interpolation between existing sequences, and manipulation of existing sequences via attribute vectors or a latent constraint model. Representations of melodies/bass-lines and drums are based on those used by MelodyRNN and DrumsRNN (described in the following sections).
MusicVAE uses pre-trained checkpoints for different configurations, which are used to generate outputs from the command line.
GrooVAE is a variant of MusicVAE for generating and controlling expressive drum performances. It can be trained with Groove MIDI Dataset [34] of drum performances.
Drums RNN: This model generates drum tracks. A drum track is modeled as a single sequence of events by mapping all of the different MIDI drums onto a smaller number of drum classes and representing each event as a single value. This value represents the set of drum classes that are struck. Unlike melodies, drum tracks are polyphonic, and multiple drums can be struck simultaneously.
The model provides two setups, one_drum and drum_kit, where the former encodes all drums to a single class, while the latter incorporates a nine-piece drum kit, with the different drum types encoded as a 512-length one-hot vector, and binary counters augment the model's input. Drums RNN can operate either with a pre-trained model, where users input the necessary parameters and potentially specify primer drums, or train a fresh model by converting MIDI files into NoteSequences, which are transformed into SequenceExamples for training and evaluation.
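The 512-length encoding follows directly from the nine-piece kit: each time step is a subset of nine drum classes struck together, and there are 2^9 = 512 such subsets. The bitmask sketch below illustrates the idea; the class names and bit order are illustrative assumptions, not Magenta's exact mapping.

```python
DRUM_CLASSES = ["kick", "snare", "closed_hh", "open_hh", "low_tom",
                "mid_tom", "high_tom", "crash", "ride"]

def drum_event_to_class(struck):
    """Map a set of simultaneously struck drums to an index in [0, 512)."""
    index = 0
    for bit, name in enumerate(DRUM_CLASSES):
        if name in struck:
            index |= 1 << bit
    return index

print(drum_event_to_class({"kick", "closed_hh"}))  # 1 + 4 = 5
```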
#### 4.1.4 Advantages and Limitations
One main advantage of Magenta is that it is an open-source research project that aims to explore the role of machine learning as a tool in the creative process of art and music. It has developed a vast range of models and libraries since 2016 that provide various functions for creating, manipulating, and generating music and art.
Magenta provides a wide range of resources and libraries (listed in Table 3), which allows for flexibility in the input and manipulation of music and art data. In addition, each model has its unique approach and can generate different types of music. This provides a wealth of possibilities and functionalities for users to explore and utilize 11. Furthermore, Magenta supports multiple formats, such as MIDI and MusicXML, which allows for interoperability with other music and art software.
However, Magenta also has some limitations. It is a complex project that requires a certain level of technical knowledge and expertise to use and understand. Some models and libraries may be geared toward specific use cases, limiting their overall versatility. Musicians may face challenges in learning and selecting the right tools due to the extensive range of models and libraries available. It should be noted that Magenta's main output is in the form of MIDI-style music sequences that may only be readily usable by individuals with prior knowledge of music production. Magenta cannot always produce the desired result, and the generated music may require further manual fine-tuning and adjustments afterward.
Footnote 11: As of April 2023, the Magenta Python library has 3,700+ forks on GitHub, while Magenta.js has 300+ forks, demonstrating widespread adoption.
### JukeBox
#### 4.2.1 Overview
Jukebox [23, 62] is another neural-network-based model for music generation. It includes vocal elements in a diverse range of musical genres and styles, and its output is raw audio. The model utilizes a VQ-VAE and Transformers to produce original compositions. Notably, Jukebox is capable of generating singing, which was a unique feature compared to other music generation models at the time of its release.
#### 4.2.2 Architecture
Jukebox uses hierarchical VQ-VAE [23] architecture to compress music into discrete codes. This method is similar to the VQ-VAE-2 [73] model used for image generation, but it has been modified to suit the needs of music generation better. The model uses random restarts to prevent codebook collapse, which is a common issue for VQ-VAEs. It separates decoders to maximize the use of bottlenecked top levels in hierarchical VQ-VAE, and a spectral loss to allow for the reconstruction of higher frequencies. The model has three levels, each compressing the audio by a different factor and reducing the detail in the
audio. However, it is still able to retain important information about the pitch, timbre, and volume of the audio.
Following this compression, a set of prior models, including a top-level prior and two upsampling priors, are trained to learn the distribution of music codes in the compressed space. These prior models use a simplified variant of Sparse Transformers, a form of self-attention, for efficient training. Serving as probabilistic representations of the underlying structure in the compressed music space, the prior models enable Jukebox to generate new music samples by capturing global patterns and finer-grained details through a hierarchical approach.
To train the model, a dataset of 1.2 million songs was collected, along with corresponding lyrics and metadata. Additionally, the model can be conditioned on the artist and genre, which allows it to generate music in a specific style. Artists and genres can be clustered together using t-SNE [99], which reflects the similarities and reveals unexpected associations between them.
#### 4.2.3 Features and Capabilities
Jukebox features two types of models; one with lyrics and one without lyrics. The models are typically named with a prefix that indicates the number of model parameters, such as "5b" or "5b_lyrics". This naming convention highlights the upgrade from the previous "1b" version, with the numerical value representing the model size in billions of parameters.
It has several hyperparameters, and we list a few of the important ones. One of the hyperparameters is _speed_upsampling_, which allows for faster upsampling but may result in slightly "choppy" samples. Another hyperparameter is the _mode_, which can be either "primed" or "ancestral". The primed mode continues an existing song, while the ancestral mode generates a song from scratch. If the primed mode is selected, an audio file is required to specify the song Jukebox will continue.
Jukebox also provides the ability to select the artist and genre of the music to be generated. The available options for artists and genres can be found in the Jukebox GitHub repository 12. The 5b_lyrics and 5b models use the v2 genre lists, while the 1b_lyrics model uses the v3 genre lists. Up to five v2 genres can be combined, but combining v3 genres is not possible.
Footnote 12: [https://github.com/openai/jukebox](https://github.com/openai/jukebox)
Finally, Jukebox provides a hyperparameter called _sampling_temperature_, which determines the creativity and energy of the generated music. The higher the temperature, the more chaotic and intense the result. Keeping the temperature between 0.96 and 0.999 is recommended, but users can experiment with different values to achieve the desired results.
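Mechanically, a sampling temperature divides the model's output logits before converting them to probabilities, so lower values sharpen the distribution while values near 1.0 keep it varied and energetic. A generic sketch of the mechanism (not Jukebox's internal sampler):

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.98):
    """Sample a token index from temperature-scaled logits."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

token = sample_with_temperature([2.0, 1.0, 0.5], temperature=0.98)
```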
#### 4.2.4 Advantages and Limitations
The Jukebox model has several advantages that make it a powerful tool for music generation. One major advantage is its use of the VQ-VAE technique, which allows for the efficient compression of music into discrete codes. This allows for the generation of novel songs while retaining important information about the pitch, timbre, and volume.
The Jukebox model also has the ability to be conditioned on artist and genre, which allows it to generate music in a specific style. This allows for more control over the
generated music and can be useful for applications such as creating custom soundtracks or personalized playlists.
Another advantage of the Jukebox model is its use of a dataset of 1.2 million songs for training, paired with corresponding lyrics and metadata, which allows it to learn a wide variety of musical styles and patterns.
However, there are also some limitations to the Jukebox model. One limitation is that the downsampling process used to compress the audio can result in a loss of detail in the generated music. Additionally, the model's ability to generate music in a specific style is limited by the diversity of the dataset that it was trained on. If a specific style of music is not well represented in the dataset, the model may struggle to generate music in that style.
While Jukebox shares the common limitation of data-driven music generation models in producing variations of existing patterns, its unique challenge is the computational complexity involved in its hierarchical approach. The generation process can require a significant amount of resources on commonly accessible machines. This can be a barrier for developers and researchers who do not have access to large corporation GPU clusters.
### MuseNet
#### 4.3.1 Overview
MuseNet[64] is a deep neural network designed to create 4-minute musical compositions with up to 10 different instruments, with the potential to blend various styles.
#### 4.3.2 Architecture
MuseNet[64] utilizes a transformer model [92, 100], similar to GPT-2, predicting subsequent tokens in a sequence, applicable to both audio and text. Using the optimized kernels of Sparse Transformer [11], a 72-layer network with 24 attention heads is constructed, focusing on a context comprising 4096 tokens.
The model's large context window may contribute to its ability to capture and maintain long-term structural elements in music, crucial in generating coherent musical pieces[64]. Sequential data is used for training, with the aim to predict the subsequent note(s) given a set of notes [64]. The dataset used for training was sourced from various collections including ClassicalArchives [54], BitMidi [2], and the MAESTRO dataset [37].
Various encoding strategies were used to convert MIDI files into a model-friendly format. An encoding scheme combining pitch, volume, and instrument information into a single token was finally adopted, balancing expressivity and conciseness[64].
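One way to realize such a combined encoding is to pack the three fields into a single integer token id, as sketched below. The field sizes and layout here are assumptions for illustration; MuseNet's exact vocabulary has not been published as code.

```python
NUM_PITCHES = 128      # MIDI pitch range
NUM_VOLUME_BINS = 4    # assumed coarse quantization of volume

def encode(instrument, pitch, volume_bin):
    """Pack (instrument, pitch, volume bin) into one token id."""
    return (instrument * NUM_PITCHES + pitch) * NUM_VOLUME_BINS + volume_bin

def decode(token):
    """Recover the three fields from a token id."""
    volume_bin = token % NUM_VOLUME_BINS
    instrument, pitch = divmod(token // NUM_VOLUME_BINS, NUM_PITCHES)
    return instrument, pitch, volume_bin

token = encode(instrument=0, pitch=60, volume_bin=3)
assert decode(token) == (0, 60, 3)
```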
Further, the architecture incorporates several embeddings for structural context, including positional embeddings, time-tracking embeddings, and chord-specific embeddings[64]. During training, data augmentation techniques such as transposition, volume and timing augmentation, and mixup in the token embedding space were used [106]. Additionally, an "inner critic" mechanism is employed, training the model to discern between original dataset samples and its own past generations [64].
#### 4.3.3 Features and Capabilities
MuseNet offers users the opportunity to explore and interact with its capabilities through two modes: "simple" and "advanced". The simple mode provides users with a pre-generated set of uncurated samples, which allows them to experiment with different composers, styles, and starting points. This feature enables users to explore the diverse range of musical styles the model can generate. On the other hand, the advanced mode offers a more interactive experience where users can directly interact with the model to create entirely new pieces. Although this mode may require longer completion times, it offers the ability to generate a unique and original piece. By offering these two modes, MuseNet allows users to experiment with its capabilities and makes it accessible to both novice and experienced users.
To create more controlled generations, composer and instrumentation tokens were introduced during the training phase. These tokens provide contextual cues to the model about the expected style and instrumentation of the music, enabling users to guide the model's output in a desired direction [64].
#### 4.3.4 Advantages and Limitations
MuseNet exhibits a remarkable ability to retain the long-term structure of a musical piece, thereby effectively imitating the style or musician with precision. This proficiency is achieved through its architecture, which has been meticulously engineered to generate musically coherent and aesthetically pleasing compositions when parameters congruent with its training and encoding scheme are utilized. These parameters comprise appropriate prompts like composer and instrumentation tokens and a consolidated encoding of musical features such as pitch, volume, and instrument information into a single token. The precise amalgamation of these components endows the model with the capability to produce expressive yet concise musical sequences, effectively simulating natural compositions.
However, there are limitations to be considered when using MuseNet. Although the user can suggest specific instruments, MuseNet generates each note by calculating the probabilities across all possible notes and instruments. Therefore, the model may choose instruments different from what the user suggested. Moreover, MuseNet struggles with odd pairings of styles and instruments, such as Chopin with bass and drums. Generations will therefore sound more natural if the instruments closest to the composer or band's usual style are selected. Lastly, MuseNet does not guarantee that the music generated is free from external copyright claims.
### Music Transformer
#### 4.4.1 Overview
Music Transformer is another neural network model designed to generate long sequences of music with long-term structure[41]. It uses the transformer architecture and is capable of scaling to musical sequences on the order of minutes.
#### 4.4.2 Architecture
The Music Transformer model uses discrete tokens to represent music, with the specific vocabulary determined by the training dataset. The Transformer decoder is a generative
model, relying on self-attention mechanisms with learned or sinusoidal position information. Each layer consists of a self-attention sub-layer followed by a feedforward sub-layer. The attention layer uses the scaled dot-product attention mechanism, and the feedforward sub-layer performs two point-wise dense layers. The model employs relative position representations to allow attention to be informed by the distance between positions in a sequence. The Music Transformer reduces the intermediate memory requirement of relative attention through a "skewing" procedure[41]. Also, it uses relative local attention for very long sequences by chunking the input sequence into non-overlapping blocks.
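The skewing procedure itself is brief: pad a dummy column onto the matrix of query-relative-embedding products, reshape, and drop one row, so that entries line up by relative distance without materializing an extra length dimension. A NumPy sketch following the description in the paper [41]:

```python
import numpy as np

def skew(qe):
    """qe[i, r] holds Q_i . E_r for relative distances r-(L-1), ..., 0."""
    L = qe.shape[0]
    padded = np.pad(qe, ((0, 0), (1, 0)))  # pad a dummy column on the left
    reshaped = padded.reshape(L + 1, L)    # reshape (L, L+1) -> (L+1, L)
    return reshaped[1:]                    # drop the first row -> S_rel

qe = np.arange(16, dtype=float).reshape(4, 4)
s_rel = skew(qe)  # s_rel[i, j] now aligns with relative distance j - i
```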
#### 4.4.3 Features and Capabilities
Music Transformer is available as both TensorFlow 13 and PyTorch 14 versions with easy-to-use APIs. It can generate music with long-term structure and scales to sequences of music on the order of minutes. In addition, it has a reduced memory footprint, allowing it to generate longer sequences of music. The model is trained on sequential data and can generate natural compositions when the correct hyper-parameters are selected.
Footnote 13: [https://github.com/jason9693/MusicTransformer-tensorflow2.0](https://github.com/jason9693/MusicTransformer-tensorflow2.0)
Footnote 14: [https://github.com/jason9693/musictransformer-pytorch](https://github.com/jason9693/musictransformer-pytorch)
#### 4.4.4 Advantages and limitations
Music Transformer has the advantage of generating long music sequences with long-term structure and has been modified to reduce memory requirements. However, the melody produced by the tool can be monotonous if the model underfits or training data is insufficient. This could limit the tool's usefulness for complex music generation tasks and may require additional optimization or training with larger and more diverse datasets to improve the quality of the generated music.
### SDMuse
#### 4.5.1 Overview
SDMuse, proposed in 2022, is a unified stochastic differential music editing and generation framework that can generate and modify existing musical pieces at a fine granularity [104, 105].
#### 4.5.2 Architecture
The framework follows a two-stage pipeline with a hybrid representation of pianoroll and MIDI-event. In the first stage, SDMuse generates and edits pianoroll through an iterative denoising process based on a stochastic differential equation (SDE) using a diffusion model generative prior[93]. In the second stage, the framework refines the generated pianoroll and predicts MIDI-event tokens auto-regressively.
#### 4.5.3 Features and Capabilities
SDMuse can compose a whole musical piece from scratch and modify existing musical pieces in various ways, such as combinations, continuation, inpainting, and style transfer. The framework can also extract fine-grained control signals from the musical piece itself
without requiring extra data annotation, making it more efficient.
The proposed framework takes advantage of the two most common music representations, pianoroll and MIDI-event. Pianoroll-based methods use pianorolls to represent music scores, while MIDI-event-based methods convert musical pieces to a MIDI-event token sequence. The hybrid representation used in SDMuse is more appropriate for extracting and controlling perceptive information like structure while generating and modeling precise music performance details, such as velocity and fine-grained onset position.
SDMuse involves several fine-grained control signals, including note density, pitch distribution, and chord progression sequence during the training process of the diffusion model to enable unconditional and conditional music generation/editing at the same time. These control signals can be extracted from the musical piece itself, making it more efficient.
#### 4.5.4 Advantages and Limitations
Although we could not experiment with SDMuse practically due to the lack of a demonstrative version available for use, the model's capabilities in generating and editing multi-track music have been demonstrated in the academic paper. Specifically, the model can perform fine-grained stroke-based generation and editing, inpainting, outpainting, combination, and style transfer. Its effectiveness has been evaluated in experiments on various music editing and generation tasks using the ailabs1k7 pop music dataset [40]. Despite the lack of first-hand experimentation, the reported results and methodology provide insight into the potential of SDMuse as a music generation and editing tool.
### Music Generation with Sentiment based on mLSTM
#### 4.6.1 Overview
The Music Generation with Sentiment [30] is a model that can compose music with a specified sentiment. It is based on mLSTM, which was initially designed for generating Amazon product reviews based on positive or negative emotions [67]. In this model, a music piece is represented as a sequence of words and punctuation marks from a vocabulary that represents events retrieved from the MIDI file. The model can also be used for sentiment analysis of symbolic music, enabling users to categorize music based on emotional content.
#### 4.6.2 Architecture
The model is trained on a dataset called VGMIDI, which is composed of 823 pieces extracted from video game soundtracks in MIDI format. The pieces are piano arrangements of the soundtracks and vary in length from 26 seconds to 3 minutes. Among these pieces, 95 are annotated according to a 2-dimensional model that represents emotion using a valence-arousal pair. Valence indicates positive versus negative emotion, and arousal indicates emotional intensity [81]. The valence-arousal model is also one of the most common dimensional models used to label emotion in music.
It uses mLSTM and logistic regression to compose music with a sentiment. The sentiment
in music is determined by various characteristics such as melody, harmony, tempo, and timbre. Additionally, the same labeled data can be used to explore affective algorithmic music composition in both classification (multiclass and/or binary) and regression problems.
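The defining feature of the mLSTM [67] is an input-dependent multiplicative transition: the previous hidden state is modulated by the current input before feeding the usual LSTM gates, allowing the transition dynamics to vary per token. In the sentiment setting, a logistic-regression head on the hidden state classifies or steers the emotional content. A minimal single-step sketch follows; the notation is ours, with parameters gathered in a dict `p`.

```python
import torch

def mlstm_step(x, h, c, p):
    """One step of a multiplicative LSTM (Krause et al., 2016)."""
    m = (p["Wmx"] @ x) * (p["Wmh"] @ h)              # multiplicative state
    i = torch.sigmoid(p["Wix"] @ x + p["Wim"] @ m)   # input gate
    f = torch.sigmoid(p["Wfx"] @ x + p["Wfm"] @ m)   # forget gate
    o = torch.sigmoid(p["Wox"] @ x + p["Wom"] @ m)   # output gate
    c_hat = torch.tanh(p["Wcx"] @ x + p["Wcm"] @ m)  # candidate cell state
    c = f * c + i * c_hat
    h = o * torch.tanh(c)
    return h, c
```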
#### 4.6.3 Features and Capabilities
The model generates music that aligns with a specific mood or emotion, providing users with a convenient tool for generating music with a desired sentiment. Moreover, the model can be utilized for sentiment analysis of symbolic music, facilitating users to examine and classify music based on emotional content.
#### 4.6.4 Advantages and Limitations
According to the results in the paper, the generative mLSTM model combined with logistic regression demonstrated substantial classification accuracy (89.83%); the model achieved 84% accuracy when generating positive music pieces and 67% when generating negative ones. However, the negative pieces were found to be more ambiguous, indicating room for further improvement. The developers acknowledge this as a limitation of the model and plan to improve its ability to generate less ambiguous negative pieces in future work. Additionally, the authors propose to extend the model to generate music with specific emotions (e.g., happy, sad, suspenseful) and real-valued valence-arousal pairs, and to compose soundtracks in real time for oral storytelling experiences.
## 5 Prompt-Based Music Generation Tools
Prompt-based music generation converts textual inputs into music. These tools utilize machine learning and language processing algorithms to interpret text prompts, transforming the embedded semantics and sentiments into harmonious musical compositions.
### Riffusion
#### 5.1.1 Overview
Riffusion[31] is a fine-tuned diffusion model that generates audio clips from text-based prompts and images of spectrograms. The model architecture is based on the Stable Diffusion model, an open-source AI model that generates images from text.
#### 5.1.2 Architecture
The architecture of Riffusion is primarily built upon the Stable Diffusion v1.5 model15, a deep learning architecture that excels in generative tasks. In essence, this model uses a variant of denoising autoencoders combined with a diffusion process to model the data distribution. This architecture has been fine-tuned for the unique task of generating audio from spectrogram images.
Footnote 15: [https://huggingface.co/runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
The model operates on spectrograms, which represent sound in a visual form. These spectrograms are processed by a neural network architecture designed to understand and generate complex images. Given the nature of spectrograms, with their intricate structures representing different audio frequencies over time, a network that can handle such data complexity is crucial. Hence, the architecture of Stable Diffusion, which has been proven to generate intricate image structures, is an apt choice.
The model architecture is equipped with Image-to-Image transformation capability, a critical feature that allows the model to condition its output not only based on a text prompt but also on other images. This feature effectively enables the architecture to adapt to the structural nuances of different input images, driving its ability to generate a variety of audio outputs. In addition, the architecture also includes Looping and Interpolation mechanisms, which require the model to navigate smoothly through the latent space, thus preserving the coherence in audio output.
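Since the diffusion model outputs a spectrogram image rather than a waveform, a final inversion step is needed to reconstruct audio. A hedged sketch of such a step using torchaudio is shown below; the STFT/mel parameters are illustrative assumptions rather than Riffusion's exact settings, and phase is recovered with the Griffin-Lim algorithm.

```python
import torchaudio

# Illustrative parameters; Riffusion's actual settings may differ.
SR, N_FFT, HOP, N_MELS = 44100, 2048, 512, 512

inverse_mel = torchaudio.transforms.InverseMelScale(
    n_stft=N_FFT // 2 + 1, n_mels=N_MELS, sample_rate=SR)
griffin_lim = torchaudio.transforms.GriffinLim(
    n_fft=N_FFT, hop_length=HOP, n_iter=32)  # iterative phase recovery

def spectrogram_to_audio(mel_spec):
    """mel_spec: (n_mels, time) magnitude mel spectrogram decoded
    from the generated image."""
    linear_spec = inverse_mel(mel_spec)  # approximate linear-frequency magnitudes
    return griffin_lim(linear_spec)      # waveform tensor
```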
#### 5.1.3 Features and Capabilities
Riffusion provides a web-based user interface that enables users to select a prompt or input customized text containing keywords, such as "jazz" or "guitar," and generates beats based on these keywords. The platform is capable of generating an infinite number of prompt variations based on different seeds. Additionally, Riffusion can generate both short music and loopable jams, and model checkpoints are provided for user convenience. The Riffusion web-based user interface 16 provides an accessible and user-friendly approach for generating beats. Users can adjust parameters such as seed image and denoising level to vary the music they generate and save the music as short riffs.
Footnote 16: [https://www.riffusion.com/](https://www.riffusion.com/)
#### 5.1.4 Advantages and Limitations
Riffusion, as a music generating tool based on diffusion models, boasts several advantages. Firstly, it has a user-friendly interface that allows users to easily generate music from text input without the need for coding or complicated installation procedures. Additionally, Riffusion produces high-quality music with minimal noise, making it an excellent option for those seeking high-quality music output.
However, the inherent architecture of the Riffusion platform results in certain limitations in user control over the musical output. Specifically, the music generated is primarily based on the provided text input and seed images, which guide the stable diffusion model in generating the output audio. This reliance on text prompts and seed images means that the user's ability to control the specifics of the musical output depends on the quality and relevance of these inputs. Furthermore, the number of seed images available for generating music is limited, which may result in a lack of variety in the music created. Another limitation of Riffusion is that users can only download short tracks, not longer compositions. Additionally, the looping feature of the short jams may result in repetitiveness.
While Riffusion's lack of flexibility may not suit users who require a high degree of customization, it is essential to note that this design choice was made to create an accessible and straightforward music generation tool that can quickly create music based on user input, without requiring extensive musical knowledge or technical skills.
### Noise2Music
#### 5.2.1 Overview
Noise2Music explores the utilization of diffusion models in generating high-quality music audio from text prompts[44]. Similar to MusicLM, the model is also based on MuLan[43], which is a joint embedding model that connects music to unconstrained natural language music descriptions.
#### 5.2.2 Architecture
The end-to-end architecture utilizes a series of diffusion models in succession to generate the final music clip. Training data is produced by pseudo-labeling: a large unlabeled set of music audio is paired with captions generated by two deep models. A large language model generates a vast collection of generic music-descriptive sentences as caption candidates, while a pre-trained music-text joint embedding model assigns captions to each music clip via zero-shot classification [63]. This pseudo-labeled training set enables the model to handle complex, fine-grained semantics beyond simple music-label conditioning, utilizing the semantic content of the captions to generate complex and detailed music.
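Conceptually, the pseudo-labeling step amounts to zero-shot retrieval in the joint embedding space: each unlabeled clip is assigned the caption candidates whose text embeddings lie closest to its audio embedding. The sketch below illustrates this idea; the embedding inputs and function names are placeholders, not the paper's code.

```python
import torch
import torch.nn.functional as F

def pseudo_label(audio_emb, caption_embs, captions, k=3):
    """audio_emb: (d,) embedding of one music clip;
    caption_embs: (N, d) embeddings of N candidate captions.
    Returns the k captions most similar to the clip."""
    sims = F.cosine_similarity(audio_emb.unsqueeze(0), caption_embs, dim=-1)
    top = torch.topk(sims, k).indices
    return [captions[i] for i in top]
```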
#### 5.2.3 Features and Capabilities
Noise2Music is a music generation model that demonstrates generative ability grounded on semantically rich text prompts. It can generate music based on a range of key musical attributes, including genre, instrument, tempo, mood, vocal traits, and era. Additionally, it can generate music based on creative prompts.
#### 5.2.4 Advantages and Limitations
Noise2Music has the advantage of producing high-quality music from rich text prompts, offering creative possibilities for artists and content creators. However, there are limitations to the model, such as potential biases learned from the training sets, which can manifest in subtle and unpredictable ways. Furthermore, misappropriation is a risk when generated content closely matches examples from the training data. Therefore, responsible model development practices, duplication checks, and efforts to identify and address potential safety issues are necessary for improving these generative models. The model was not released due to these limitations and risks.
### Mousai
#### 5.3.1 Overview
Mousai [82] is a text-to-music generation tool based on a latent diffusion model [79].
#### 5.3.2 Architecture
Mousai employs a two-stage cascading diffusion methodology to generate high-quality stereo music from textual descriptions. The system generates music at 48kHz, lasting
multiple minutes.
The first stage uses a Diffusion Magnitude Autoencoder (DMAE) [66] to compress the audio waveform by a factor of 64. This significant reduction in data size does not compromise the high fidelity of the audio reproduction. The DMAE conditions the diffusion process on a compressed latent vector of the input, thereby creating a powerful generative decoder.
The second stage of Mousai generates a novel latent space using the diffusion model, conditioned on text embeddings. These embeddings are produced from a given description using a frozen transformer language model. Mousai's two stages employ an efficient one-dimensional U-Net architecture with different configurations. The latent text-to-audio diffusion stage applies diffusion to the compressed space created in the first stage and uses cross-attention blocks to provide the conditioning text embedding.
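The cross-attention conditioning can be pictured as latent audio tokens querying the frozen language model's text embeddings. The following is a minimal, self-contained sketch of such a block; the dimensions and residual placement are our assumptions, not the authors' exact configuration.

```python
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Latent audio tokens attend to text embeddings from a frozen LM."""
    def __init__(self, dim, text_dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            dim, heads, kdim=text_dim, vdim=text_dim, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, z, text_emb):
        # z: (batch, L, dim) latent tokens; text_emb: (batch, T, text_dim)
        out, _ = self.attn(self.norm(z), text_emb, text_emb)
        return z + out  # residual connection
```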
Mousai was trained on a diverse dataset of 2,500 hours of stereo music, sampled at 48kHz. Metadata such as title, author, album, genre, and year of release were used as the textual description. To enhance the robustness of the conditioning, each element of the metadata was dropped with a probability of 0.1. Training could be completed within a week using an A100 GPU with a batch size of 32. During inference, a novel audio clip of about 88 seconds can be synthesized in less than 88 seconds on a consumer GPU, i.e., faster than real time.
#### 5.3.3 Features and Capabilities
The standout feature of Mousai is the ability to generate multiple minutes of high-quality stereo music at a 48kHz resolution purely from textual descriptions. This capability serves to bridge the gap between textual ideas and musical creation, providing a rich platform for translating imaginative concepts into audibly engaging music.
The input to the model can include not only descriptive text but also musical features; the model handles the temporal dimension, long-term structure, overlapping sound layers, and subtle nuances. This holistic approach allows for a comprehensive understanding of the musical intention, resulting in high-fidelity musical output.
Training on an extensive and diverse dataset of 2,500 hours of stereo music samples equips Mousai with a broad 'vocabulary' of musical elements. This, coupled with the capacity to create a diverse range of music, elevates the versatility and adaptability of the model.
Open source samples of music generated using Mousai are provided as a testament to its capabilities 17.
Footnote 17: URL: [https://anonymous0.notion.site/anonymous0/Mo-sai-Text-to-Audio-with-Long-Context-Latent-Diffusion-b43dbc71caf94b5898f9e8de714ab5dc](https://anonymous0.notion.site/anonymous0/Mo-sai-Text-to-Audio-with-Long-Context-Latent-Diffusion-b43dbc71caf94b5898f9e8de714ab5dc)
#### 5.3.4 Advantages and Limitations
Mousai's key advantage lies in its capacity to generate long high-quality stereo music from textual descriptions. This ability to translate linguistic semantics into musical language opens up vast possibilities for musical creativity and innovation.
However, the model's complexity and the demand for substantial computational resources might present challenges for users with limited processing power. Furthermore, the interpretability of Mousai could be a hurdle, given the complexity and the non-intuitive nature of the diffusion processes it uses to generate music.
The model's proficiency in generating specific music genres might also be limited due to the distribution of its training data. While it has been exposed to a wide range of genres such as electronic, pop, metal, and hip hop, it has less experience with others like blues and classical. As a result, the quality of generated music in these less exposed genres might not be as robust.
### MusicLM
#### 5.4.1 Overview
MusicLM is a recently developed music generation model [3] capable of generating high-fidelity music from rich text descriptions. The model has gained attention due to its ability to produce music that can last for minutes and its high audio quality. Although MusicLM has not been officially released as an open-source project, the team has provided demo audio and the MusicCaps dataset, which contains 5.5k music-text pairs. In addition, open-source implementations of the MusicLM architecture have been proposed, such as [8].
#### 5.4.2 Architecture
The MusicLM model by Agostinelli et al. [3] is composed of three pre-trained models, SoundStream[103], w2vBERT[13], and MuLan[42]. These models are used to extract audio representations for use in text-conditioned music generation. In particular, SoundStream is used to extract self-supervised audio representations from monophonic audio data. These acoustic tokens are used for high-fidelity synthesis. w2vBERT is used to extract semantic tokens from the audio data, which facilitates long-term coherent generation. MuLan is used to represent the conditioning for the music generation. During training, MusicLM uses the MuLan music embedding, while at inference time, the MuLan text embedding is used.
The discrete audio representations from SoundStream and the discrete text representations from MuLan are used in a hierarchical sequence-to-sequence modeling task for text-conditioned music generation. The first stage learns the mapping from the MuLan audio tokens to the semantic tokens. These semantic tokens, derived from pre-trained audio data models, facilitate the representation and modeling of extensive musical structures. The second stage predicts the acoustic tokens conditioned on both MuLan audio and semantic tokens. Decoder-only Transformers are used for modeling both stages, and the acoustic modeling stage is further split into coarse and fine modeling stages to avoid long token sequences.
During training, SoundStream and w2vBERT are trained on the Free Music Archive (FMA) dataset18. In contrast, the tokenizers and autoregressive models for the semantic and acoustic modeling stages are trained on a large dataset containing five million audio clips. The stages are trained with multiple passes over the training data, using 30 and 10-second random crops of the target audio for the semantic and acoustic stages, respectively. During inference, the MuLan text embedding is used as the conditioning signal, and temperature sampling is used for the autoregressive sampling in all stages. The temperature values are chosen based on subjective inspection to provide a good trade-off between the diversity and temporal consistency of the generated music.
Footnote 18: [https://github.com/mdeff/fma](https://github.com/mdeff/fma)
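Putting the stages together, inference proceeds as a cascade of autoregressive samplers. The pseudocode below is our schematic reading of this pipeline; all object names (`mulan`, `semantic_lm`, etc.) are hypothetical stand-ins for the paper's components.

```python
def generate_music(text, mulan, semantic_lm, coarse_lm, fine_lm, soundstream):
    # Text is mapped to MuLan tokens used as the conditioning signal.
    mulan_tokens = mulan.embed_text(text)
    # Stage 1: semantic tokens capture long-term musical structure.
    semantic = semantic_lm.sample(prefix=mulan_tokens)
    # Stage 2a: coarse acoustic tokens, conditioned on MuLan + semantic tokens.
    coarse = coarse_lm.sample(prefix=(mulan_tokens, semantic))
    # Stage 2b: fine acoustic tokens add residual detail to the coarse ones.
    fine = fine_lm.sample(prefix=coarse)
    # The SoundStream decoder turns acoustic tokens back into a waveform.
    return soundstream.decode(coarse, fine)
```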
#### 5.4.3 Features and Capabilities
The MusicLM website showcases the ability of an AI-powered music generation model to produce audio based on concise textual descriptions, including information on music genre, instrument, tempo, and emotion. The model has demonstrated its potential in generating both short (30 seconds) and long (5 minutes) music pieces. Additionally, the model offers a story mode feature, where it generates music based on a sequence of text prompts that convey changes in mood or plot. This feature demonstrates the model's capability to seamlessly transition between different levels of mood in its generated music. Furthermore, the model exhibits flexibility in its conditioning, allowing for the generation of music from a provided melody consistent with the provided text description, as well as from paintings with accompanying captions. The model also offers the ability to generate music for different levels of musician experience.
#### 5.4.4 Advantages and Limitations
The MusicLM model has garnered significant attention due to its ability to produce music that lasts for minutes with high audio quality. One of its main advantages is that it can generate complex music with a structure and coherence similar to that of music created by human composers. This means that the model has the potential to be used for a variety of applications, from generating background music for videos and games to assisting music composers in the creative process.
Another advantage of MusicLM is its ability to generate long pieces of music, which is not always possible with other music generation models. This is particularly useful for applications such as creating background music that requires a continuous flow of music for an extended period. Additionally, the model can produce high-quality audio, which is crucial for ensuring that the generated music meets the desired standards.
However, a significant limitation of MusicLM is that the official model and checkpoints have not been released as an open-source project due to royalty risks.
### MusicGen
#### 5.5.1 Overview
MusicGen [14], a component of the Audiocraft [15] library, is a recent music generation model built on the Transformer architecture. This model generates music samples based on textual descriptions or melodic features, thereby establishing a unified framework for language and music comprehension. Its potential applications encompass various areas, including music composition and soundtrack creation.
#### 5.5.2 Architecture
MusicGen is a single-stage auto-regressive Transformer model [14], trained over a 32kHz EnCodec tokenizer with four codebooks sampled at 50 Hz. It differs from methods such as MusicLM by not requiring a self-supervised semantic representation, and generating all four codebooks simultaneously. By incorporating a small delay between the codebooks, parallel prediction of the codebooks is possible, reducing the auto-regressive steps to 50 per second of audio.
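The delay pattern can be illustrated in a few lines: codebook k is shifted right by k steps, so at each generation step the model predicts one token per codebook while each codebook still conditions on earlier steps of the ones above it. This is an illustrative sketch, not the reference implementation.

```python
import torch

def delay_interleave(codes, pad_id=0):
    """codes: (K, T) integer tokens from K codebooks.
    Returns (K, T + K - 1) tokens with codebook k delayed by k steps."""
    K, T = codes.shape
    out = torch.full((K, T + K - 1), pad_id, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]
    return out

# With 4 codebooks at 50 Hz, this yields roughly 50 autoregressive steps
# per second of audio, instead of 200 with a fully flattened pattern.
```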
Text conditioning is performed by computing a conditioning tensor from a textual description matching the input audio. Different text representation methods are utilized, including pretrained text encoders such as T5 [70], the instruction-tuned language model FLAN-T5 [12], and the joint text-audio representation CLAP [101].
Its melody conditioning adopts an iterative refinement approach, controlling the melodic structure by conditioning on the input's chromagram and text description. An information bottleneck, introduced by selecting the dominant time-frequency bin at each timestep, helps prevent overfitting and eliminates the need for supervised data.
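The chromagram bottleneck is easy to reproduce: compute a chromagram and keep only the dominant pitch-class bin per frame. The sketch below uses librosa as an assumed tool, and the parameter values are illustrative.

```python
import numpy as np
import librosa

def melody_condition(wav, sr=32000):
    """Return a one-hot (12, T) matrix keeping only the dominant
    chroma bin at each frame, acting as an information bottleneck."""
    chroma = librosa.feature.chroma_stft(y=wav, sr=sr)  # (12, T)
    dominant = chroma.argmax(axis=0)
    onehot = np.zeros_like(chroma)
    onehot[dominant, np.arange(chroma.shape[1])] = 1.0
    return onehot
```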
#### 5.5.3 Functionality and Capabilities
MusicGen enables users to generate music samples based on textual descriptions or melodic features. It offers control over the generation process, catering to different musical preferences and requirements. The model supports various music styles and can handle different levels of abstraction in textual descriptions while allowing for melody adjustments during the generation process.
MusicGen includes four pre-trained models: small (300M), medium (1.5B), melody (1.5B, with text-to-music and text+melody-to-music capabilities), and large (3.3B, text-to-music only). This range of models enables users to choose the model that best suits their specific use cases and resource constraints.
MusicGen was selected and seamlessly integrated as the music generation component of AudioCraft[15], an AI audio toolkit. Interfaces to interact with MusicGen are available through a Jupyter notebook 19, a local Gradio demo 20, and a HuggingFace Space 21. To operate the model locally, a GPU is required, with 16GB of memory recommended, but smaller GPUs can still generate shorter sequences or operate with the small model.
Footnote 19: [https://github.com/facebookresearch/audiocraft/blob/main/demo.ipynb](https://github.com/facebookresearch/audiocraft/blob/main/demo.ipynb)
Footnote 20: [https://colab.research.google.com/drive/1fxGqfe96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing](https://colab.research.google.com/drive/1fxGqfe96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing)
Footnote 21: [https://huggingface.co/spaces/facebook/MusicGen](https://huggingface.co/spaces/facebook/MusicGen)
#### 5.5.4 Advantages and Limitations
MusicGen stands out as a state-of-the-art single-stage controllable music generation model, with the ability to be conditioned on both text and melody. It uses simple codebook interleaving strategies to generate high-quality music, which also results in a reduction in the number of autoregressive time steps compared to the more traditional flattening approach. Furthermore, MusicGen's performance has been thoroughly analyzed across different model sizes, conditioning methods, and text pre-processing techniques, demonstrating its versatility [14].
In the domain of text-to-music generation, MusicGen has shown superiority over other models such as Mousai[82], Riffusion[31], MusicLM[3], and Noise2Music[44]. Its high-quality audio output exhibits better adherence to the provided text description. When it comes to melody generation, MusicGen effectively generates music conditioned on a given melody, even when chroma is dropped at inference time, a robustness attribute that further distinguishes it.
MusicGen has received high marks on both objective and subjective evaluation metrics, including the Frechet Audio Distance (FAD)[47], Kullback-Leibler Divergence (KL), and the CLAP score[101, 45]. These quantitative findings are bolstered by qualitative assessments from human listeners who rate the audio quality and relevance to the text input highly.
However, MusicGen does have some limitations. Its generation method does not offer fine-grained control over how closely the generated music adheres to the conditioning; this largely depends on the conditioning framework. Additionally, while data augmentation for text conditioning is straightforward, audio conditioning requires further investigation concerning augmentation methods, types, and the amount of guidance necessary.
Ethically, MusicGen presents challenges similar to many large-scale generative models. The training dataset has a significant proportion of Western-style music, which may result in a potential lack of diversity in the generated music. However, through simplifications of its design, such as using a single-stage language model and a reduced number of auto-regressive steps, the authors hope to expand its application to new datasets.
## 6 Visual-based Music Generation Tools
Visual-based Music Generation tools signify a novel trend in the field, utilizing visual inputs like images or videos to generate musical pieces. These models transcend traditional music generation boundaries, creating an auditory reflection of visual stimuli. As part of the larger parameter-free models category, they generate music autonomously from the given visual prompts, offering unique possibilities in the music generation landscape.
### Controllable Music Transformer
#### 6.1.1 Overview
Controllable Music Transformer (CMT)[24] is a transformer-based approach that generates background music matching a given video in terms of rhythm and mood. It proposes a novel rhythmic relation between video and music, connecting timing, motion speed, and motion saliency from the video with beat, simu-note density, and simu-note strength from the music, respectively.
#### 6.1.2 Architecture
CMT uses a transformer-based approach with 12 self-attention layers and eight attention heads to analyze the rhythm of the video and the music and establish connections between them. The model is trained on the Lakh Pianoroll Dataset [25, 69], which contains over 174,000 multi-track pianorolls used to represent MIDI music. The CMT model combines rhythmic features extracted from both the video and the MIDI music, which are represented as compound words, and embeds them with beat-timing encoding to create a sequence of music tokens. The transformer model then predicts the attributes of each token, such as pitch, duration, and instrument type, to generate the final music. Finally, in the inference stage, CMT replaces the rhythmic features with those from the video to create music that matches the video's rhythm.
#### 6.1.3 Features and Capabilities
CMT can generate background music that matches the given video in terms of rhythm and mood. It allows local control of rhythmic features and global control of music genres and instruments. Once training is finished, the CMT model has learned the meaning of the strength and density attributes, so only these two attributes need to be replaced at inference time to make the music more harmonious with the given video. CMT introduces a hyper-parameter to control the compatibility between music and video, which the authors report can make the generated music even more compatible with the video than human-made music. Furthermore, CMT introduces beat-timing encoding to leverage time or length information from the video, and genre/instrument type selection to choose different initial tokens in the inference stage.
#### 6.1.4 Advantages and Limitations

The proposed CMT model has demonstrated satisfactory performance[24] in generating background music that closely aligns with the rhythm and mood of the given video. It accomplishes this by controlling the density and strength attributes of the music, which contributes to creating a more harmonious music-video combination. Additionally, CMT introduces a hyper-parameter that allows users to control the degree of compatibility between the music and video. However, it is essential to note that generating background music for videos longer than two minutes can be computationally expensive, which may limit its applicability to certain use cases. Furthermore, while CMT produces music that is highly compatible with the video, the melodiousness of the generated music still falls short of the music in the training set.
### V-MusProd: Video Music Production AI Model
#### 6.2.1 Overview
V-MusProd [109] is a model tailored to generate a fitting musical composition for a given video, leveraging a unique decoupling method for chords, melody, and accompaniment. This approach extracts and translates semantic, color, and motion features from the video to guide the music generation process, ensuring a congruous relationship between the video's visual content and the AI-generated music.
#### 6.2.2 Architecture
V-MusProd is composed of a video controller and a music generator. The video controller extracts visual and rhythmic features, which then serve as the contextual input for the music generator. The music generator follows a decoupling process, with three independently trained stages: Chord, Melody, and Accompaniment. The final music piece is composed by merging the melody and accompaniment tracks generated at inference time.

The video controller employs meaningful feature extraction techniques to simplify the learning process. Semantic, color, and motion features are extracted separately to guide the music generation model. Semantic features are extracted using a pretrained CLIP2Video model that encodes raw video frames into semantic feature tokens. Color features, representing the color distribution in a non-linear manifold, serve as control signals for chord generation. Motion features, determined by computing RGB differences, are used to set the music tempo.
Semantic and color features are processed through separate transformer encoders and then concatenated along the length dimension. A learnable embedding is added to distinguish between semantic and color feature tokens, which are then fed into a transformer encoder for inter-modality and temporal fusion. The fused output serves as the keys and values of the cross-attention in the Chord Transformer.
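A minimal sketch of this fusion step is given below; the layer sizes and the use of a standard `nn.TransformerEncoder` are our assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    """Fuse semantic and color tokens with a learnable type embedding;
    the fused sequence serves as keys/values for the Chord Transformer."""
    def __init__(self, dim, heads=8, layers=2):
        super().__init__()
        self.type_emb = nn.Embedding(2, dim)  # 0: semantic, 1: color
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)

    def forward(self, sem_tokens, col_tokens):
        # sem_tokens: (B, Ls, dim); col_tokens: (B, Lc, dim)
        sem = sem_tokens + self.type_emb.weight[0]
        col = col_tokens + self.type_emb.weight[1]
        fused = torch.cat([sem, col], dim=1)  # concatenate along length
        return self.encoder(fused)
```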
The music generator, consisting of a Chord Transformer, a Melody Transformer, and an Accompaniment Transformer, is designed to generate symbolic music conditioned on the extracted video feature.
The Chord Transformer adopts a transformer decoder architecture to learn the long-term dependency of input video feature sequences. The Melody Transformer, following an encoder-decoder transformer architecture, generates a note sequence as the output melody based on the input chord sequence. The Accompaniment Transformer, similarly utilizing an encoder-decoder transformer, generates the accompaniment sequence based on both the chords and the melody. The final music piece is formed by merging the generated accompaniment with the melody.
#### 6.2.3 Features and Capabilities
V-MusProd represents a significant step forward in video-conditional music generation. It combines the visual cues from video content and music generation in an unprecedented way, giving the created music a rich and contextually grounded feel.
V-MusProd's distinct attribute is the partitioning of musical elements into chords, melody, and accompaniment. This approach allows for in-depth manipulation and control over each individual component, enhancing the flexibility and precision of the music generation process.
V-MusProd also leverages a diverse range of features from the video content itself. It employs semantic, color, and motion attributes to develop a strong association between the video content and the generated music. This ensures that the produced music is not just a standalone output but rather an audible representation of the video input.
Furthermore, the model's architecture is designed to adapt to an unconditional setting, extending its capabilities beyond just video-conditional music generation. This highlights the model's versatility and potential for broader applications.
#### 6.2.4 Advantages and Limitations
V-MusProd showcases substantial improvements over previous video music generation models[32, 49, 95, 96, 108]. These earlier models either primarily rely on video rhythmic features or are limited to specific video types. However, V-MusProd demonstrates broader applicability. It can generate music directly from video inputs without the need for paired video-music data or additional annotations, marking a significant advancement in the field. This capability not only simplifies the generation process but also stands as a testament to the model's innovative and robust design.
Objective evaluations highlight the enhanced performance of V-MusProd over CMT [24] in terms of video-music correspondence and music quality. Subjective evaluations, encompassing a user study involving music composition experts and non-experts, also substantiate this finding, with V-MusProd outperforming CMT in nearly all metrics.
However, V-MusProd has some limitations. Currently, the dataset and the music generated by V-MusProd contain only piano tracks, indicating a lack of exploration of other instruments such as drums, guitars, and strings. Furthermore, the model does not explicitly account for high-level emotional features or repetitive structures of music phrases, a potential area for further improvement. The three stages of V-MusProd's method are trained separately instead of end-to-end, a factor that might impact the model's overall performance and efficiency. Despite these limitations, V-MusProd serves as a promising baseline for future research in the field of video-conditional music generation.
### Foley Music
#### 6.3.1 Overview
Foley Music[32] is a tool that generates music matching the visual content of a video clip. It uses a deep neural network to learn the relationship between body movements in the video and the music it generates.
#### 6.3.2 Architecture
In this work, the authors propose generating music from videos by identifying two crucial intermediate representations: body keypoints and MIDI events. The generation process is formulated as a motion-to-MIDI translation problem, and a Graph-Transformer framework is presented to accurately predict MIDI event sequences that correspond to the body movements. The proposed system comprises three main components: a visual encoder, a MIDI decoder, and an audio synthesizer. First, the visual encoder takes body keypoint coordinates extracted from the video frames and processes them with a Graph Convolutional Network (GCN) to capture the body dynamics and produce a latent representation over time. The MIDI decoder then takes this video sequence representation and generates a sequence of MIDI events. Lastly, the generated MIDI event sequence is converted to the corresponding waveform with an off-the-shelf music synthesizer tool.
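Schematically, inference is a three-step pipeline from pose to sound. The pseudocode below is our illustrative reading of that pipeline; all component names are placeholders, not the paper's code.

```python
def video_to_music(frames, pose_estimator, gcn_encoder, midi_decoder, synthesizer):
    # 1. Extract 2D body keypoints for every frame.
    keypoints = [pose_estimator(f) for f in frames]
    # 2. A graph convolutional network encodes body dynamics over time.
    motion_features = gcn_encoder(keypoints)
    # 3. The decoder translates motion features into MIDI events, which an
    #    off-the-shelf synthesizer renders as a waveform.
    midi_events = midi_decoder.generate(motion_features)
    return synthesizer.render(midi_events)
```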
#### 6.3.3 Features and Capabilities
The Foley Music system is built on PyTorch, a popular deep learning framework, and is capable of generating music clips for a variety of instruments, including accordion, bass, bassoon, cello, guitar, piano, tuba, ukulele, and violin. To prepare the audio data for music generation, MIDI events are extracted from the audio recordings. The system's performance is demonstrated by training on 6-second video clips from a large-scale music performance video dataset, using standard optimization and regularization techniques to ensure high-quality results. As mentioned, a software synthesizer is required to obtain the final generated music waveforms.
#### 6.3.4 Advantages and Limitations
Experimental results reveal that the model is capable of working effectively on various types of music performance videos. The results indicate that the framework successfully establishes correlations between visual and music signals using body keypoints and MIDI representations.
Notably, the MIDI representations used in the framework are transparent and fully interpretable, providing flexibility in music editing: the MIDI events can be conveniently modified to generate music in diverse styles. However, this requires additional music synthesizers to render the music, as the framework does not include a neural synthesizer.
## 7 Commercial Music Generation Tools
The commercial market offers a variety of music generation tools that cater to users who want to create music easily and quickly but may lack prior musical knowledge or coding skills. These tools are typically equipped with web-based user interfaces allowing users to manipulate parameters such as emotion, tempo, and length to generate music according to their preferences. They can be broadly categorized based on their user interface and interaction model into parameter-based, prompt-based, and style-driven tools.
Parameter-based models include platforms that allow users to select various parameters such as mood, genre, or activity as the guiding elements for music generation. Some platforms in this category offer a more advanced layer of customization, enabling users to adjust the structure and components of the generated music, such as Mubert22, Boomy23, Ecrett Music24, and Soundraw.io25.
Footnote 22: [https://mubert.com](https://mubert.com)
Footnote 23: [https://boomy.com](https://boomy.com)
Footnote 24: [https://ecrettmusic.com](https://ecrettmusic.com)
Footnote 25: [https://soundraw.io](https://soundraw.io)
Prompt-based models, such as SongR26, offer a unique interaction where users can input their lyrics or melodic ideas to guide the music generation process.
Footnote 26: [https://app.songr.ai](https://app.songr.ai)
Style-driven models adapt the composition process based on user-provided influences, which can include an existing music file or a specific emotional impact. Additionally, these models may also offer preset styles to guide the composition, such as Aiva.ai27.
Footnote 27: [https://www.aiva.ai](https://www.aiva.ai)
Despite their ease of use and accessibility, these tools often do not provide detailed information about their underlying model architecture, leaving users in the dark about the methodologies or algorithms behind the music they generate. This lack of transparency, coupled with potentially limited customization options, could mean these tools fall short for users with more specialized or complex musical needs.
Nonetheless, commercial music generation tools hold value for their ability to quickly and conveniently generate music. The absence of in-depth model descriptions, however, renders a comprehensive evaluation of these tools' performance or effectiveness challenging. While this paper will not delve into the specifics of commercial music models, their contribution to making music generation more accessible and convenient for users is recognized.
## 8 Conclusion
In this survey we described models that generate music from parameters, prompts, and video clips. Our analysis highlights each tool's unique advantages and limitations, such as flexibility, sophistication, and quality of the generated music. For example, some tools provide flexibility by producing MIDI files but require additional software synthesizers to render the music. Other tools (especially the prompt-based diffusion models) generate sophisticated music but offer little flexibility in generating music with different instruments. A remaining challenge is the ability to generate longer pieces with a coherent musical pattern. We also acknowledge that this review is not exhaustive due to the rapidly evolving nature of this field. Nonetheless, the range of AI music generation tools available shows promise in revolutionizing the music industry, enhancing creativity, and broadening the range of musical expression. We anticipate that the emergence of more sophisticated models will overcome the current limitations and provide more flexible, user-friendly, and high-quality AI music generation tools. |
2310.19445 | A Federated Learning Framework for Stenosis Detection | This study explores the use of Federated Learning (FL) for stenosis detection
in coronary angiography images (CA). Two heterogeneous datasets from two
institutions were considered: Dataset 1 includes 1219 images from 200 patients,
which we acquired at the Ospedali Riuniti of Ancona (Italy); Dataset 2 includes
7492 sequential images from 90 patients from a previous study available in the
literature. Stenosis detection was performed by using a Faster R-CNN model. In
our FL framework, only the weights of the model backbone were shared among the
two client institutions, using Federated Averaging (FedAvg) for weight
aggregation. We assessed the performance of stenosis detection using Precision
(Prec), Recall (Rec), and F1 score (F1). Our results showed that the FL
framework does not substantially affect client 2's performance, which already
achieved good performance with local training; for client 1, instead, the FL
framework increases performance with respect to the local model by +3.76%,
+17.21% and +10.80%, respectively, reaching Prec = 73.56, Rec = 67.01 and F1 =
70.13. With such results, we showed that FL may enable multicentric studies
relevant to automatic stenosis detection in CA by addressing data heterogeneity
from various institutions, while preserving patient privacy. | Mariachiara Di Cosmo, Giovanna Migliorelli, Matteo Francioni, Andi Mucaj, Alessandro Maolo, Alessandro Aprile, Emanuele Frontoni, Maria Chiara Fiorentino, Sara Moccia | 2023-10-30T11:13:40Z | http://arxiv.org/abs/2310.19445v1 | # A Federated Learning Framework for Stenosis Detection
###### Abstract
This study explores the use of Federated Learning (FL) for stenosis detection in coronary angiography images (CA). Two heterogeneous datasets from two institutions were considered: Dataset 1 includes 1219 images from 200 patients, which we acquired at the Ospedali Riuniti of Ancona (Italy); Dataset 2 includes 7492 sequential images from 90 patients from a previous study available in the literature. Stenosis detection was performed by using a Faster R-CNN model. In our FL framework, only the weights of the model backbone were shared among the two client institutions, using Federated Averaging (FedAvg) for weight aggregation. We assessed the performance of stenosis detection using Precision (\(Prec\)), Recall (\(Rec\)), and F1 score (\(F1\)). Our results showed that the FL framework does not substantially affect client 2's performance, which already achieved good performance with local training; for client 1, instead, the FL framework increases performance with respect to the local model by +3.76%, +17.21% and +10.80%, respectively, reaching \(Prec=73.56\), \(Rec=67.01\) and \(F1=70.13\). With such results, we showed that FL may enable multicentric studies relevant to automatic stenosis detection in CA by addressing data heterogeneity from various institutions, while preserving patient privacy.
Keywords:Federated Learning Coronary Angiography Stenosis detection Computer Assisted Diagnosis.
## 1 Introduction
Coronary artery disease (CAD) causes stenoses, i.e., coronary segments with a narrowed lumen, reducing blood flow and eventually leading to ischemia
and heart attacks. CAD, which is a leading cause of mortality worldwide, is currently assessed through coronary angiography (CA), an imaging technique that uses X-rays and contrast dye to visualize coronary arteries and assess blood flow dynamics [17]. Precise and timely stenosis detection is crucial for effective CAD diagnosis and treatment. CA interpretation relies on clinician's expertise and requires tackling challenges such as complex vessel anatomy, stenosis variability (in shape, pattern and severity), presence of movement and shadowing artifacts, as well as varying imaging equipment and contrast agent levels [5].
Computer-aided decision support systems for stenosis detection from CA have the potential to improve CAD assessment and reduce inter-clinician variability. In recent years, deep learning (DL) has shown great potential in automating stenosis detection from CA images. Many approaches propose a multi-stage framework, in which stenoses are identified following vessel enhancement [11] or segmentation [24]. In [24], a U-Net++ model with a feature pyramid network is proposed to automatically segment coronary arteries; from the artery centerline, diameters are calculated and stenotic levels are measured. In [11], after key-frame detection using vessel extraction, a classification model identifies the presence of stenosis in the key frame, and stenosis is qualitatively localized through Class Activation Mapping (CAM). The study considers the right coronary artery only.
To avoid error accumulation through image pre-processing steps, several studies develop DL methods to localize stenoses directly from CA images without relying on vessel analysis [1, 2, 14]. In [1], similarly to [11], CAM is employed to localize stenosis on top of an image-level stenosis classification performed by an Inception-v3. Several object detection models are trained in [2] and tested over one-vessel stenotic CA series to explore the trade-off between accuracy and efficiency. Inspired by quantum computing, [14] incorporates a quantum network in a ResNet to detect stenosis by performing binary classification on fixed-size patches obtained from CA images. Some approaches [23, 20, 4, 15] consider sequences of images to take advantage of the temporal information intrinsic to CA. The work in [23] proposes a hierarchical attentive multi-view learning model to capture the pixel correlation from CA sequences for quantifying stenosis. In [20], candidate stenoses are detected using a deconvolutional single-shot detector (DSSD) on frames of a CA sequence selected by a U-Net model; the seq-fps module then takes advantage of the temporal information within the X-ray sequence to suppress false positives and generate the final stenosis detection results. The work in [15] includes sequence feature fusion and sequence consistency alignment in the stenosis detection network to capture spatio-temporal information using 166 CA sequences. To gather spatio-temporal features, a transformer-based module is used in [4], and a feature aggregation network is developed to learn long-range context. Most of these studies use proprietary datasets, which have their own acquisition and annotation processes, or open-source datasets made available to the research community (for stenosis detection, only [2] shares its dataset). This limits generalizability, as DL model results rely on dataset-specific characteristics such as imaging equipment, protocols, and patient variability.
When multicentric studies are considered [1, 15, 23], datasets are treated separately, simply testing the model on each dataset and overlooking the potential influence of one dataset over the other, or as a single dataset, without considering the issues related to sharing medical imaging data across multiple centers. Data privacy concerns, regulatory compliance, technical compatibility, and varying imaging protocols are among the main challenges for the development of DL-based decision support tools for medical image analysis [3, 13]. Federated learning (FL) [18] provides an opportunity for data-private multi-institutional collaborations: institutions share model updates rather than raw data, so that learned patterns can be aggregated while data privacy is preserved. Each client's raw data are stored locally and remain under the control of, and private to, that institution, while only model updates leave the client, enabling the aggregation of learned patterns into a single global model. In addition, different FL strategies can be adopted to further increase data privacy, like partial model sharing [22], to proactively avoid data leakage during communication between the server and multiple clients. FL has already shown to be successful in medical image analysis [21, 9]: it has been used to alleviate small sample size and lack of annotation problems, facilitate domain adaptation, and mitigate domain shift. Despite its potential, no applications to stenosis detection have been proposed so far.
The present work explores FL for stenosis detection from two datasets with the aim of addressing challenges posed by data heterogeneity, variable acquisition protocols and inter-patient variability. Our contributions include:
* An FL framework to address stenosis detection from CA;
* Privacy-preserving FL by partial aggregation of the detection model to capture and share intrinsic CA features;
* Collection of a new CA dataset, composed of 1219 images from 200 patients, acquired in clinical practice by three clinicians, who also performed the image annotation.
## 2 Material and Methods
Our FL framework relies on a learning paradigm between two hospitals (i.e., the local clients of the federation) and one central server.
To perform stenosis detection, a Faster R-CNN model [16] is deployed. Faster R-CNN integrates a Convolutional Neural Network (CNN) as backbone for feature extraction, a Region Proposal Network (RPN) for proposal generation, and subsequent layers for accurate object classification and localization within the proposed Regions of Interest (ROIs). In the present study, the model is a Faster R-CNN [16] with a ResNet50 Feature Pyramid Network (FPN) as backbone [7]. This version of Faster R-CNN benefits from the hierarchical features extracted by the ResNet50, the multi-scale feature representation provided by the FPN, and an extension of the RPN with two convolutional layers on top of the ROI heads for classification and regression of the stenotic regions. The Faster R-CNN, pre-trained on the Coco dataset [8], is trained to minimize a multi-task loss, a weighted combination of a cross-entropy loss for identifying stenosis presence and a regression loss for stenosis localization. Since the backbone is responsible for feature extraction and representation, we consider that sharing its parameters in the FL framework could allow the clients to collectively learn generalized and meaningful features of the CA images.
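A sketch of this setup with torchvision follows; the two-class detection head (background and stenosis) is our assumption based on the task description, and the variable names are illustrative.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Faster R-CNN with a ResNet50-FPN backbone, pre-trained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor with a two-class head: background + stenosis.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Only the backbone parameters are exchanged with the server.
shared_weights = model.backbone.state_dict()
```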
As shown in Fig. 1, the server initiates the federated process sending the Faster R-CNN model to both the clients, including the pre-trained backbone weights. Then, for each round of the federated computation, each client trains the model received on its local private data and sends back to the server only the updated backbone weights. The server receives the weight updates from each client and aggregates them to create a global model. The aggregation technique adopted relies on Federated Averaging (FedAvg) [10], which computes a weighted average of the backbone parameters collected by the server. Since only the backbone weights are shared, the server combines them without interacting with the rest of the Faster R-CNN model.
FedAvg aggregation strategy is defined as \(\alpha w_{1}+\beta w_{2}\), where \(\alpha\) and \(\beta\) represent two parameters that determine the clients' contribution to the aggregation process and \(w_{1}\) and \(w_{2}\) are the backbone weights of client 1 and client 2, respectively.
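In code, this partial FedAvg step reduces to a weighted average over the two backbone state dicts. The sketch below assumes the coefficients are normalized so the aggregated weights stay on the same scale as the clients' weights.

```python
def fedavg_backbone(w1, w2, alpha=1.0, beta=6.0):
    """w1, w2: backbone state dicts from clients 1 and 2.
    Returns the aggregated backbone sent back to both clients.
    Integer buffers (e.g., BatchNorm counters) would need special
    handling in practice; see the FL3 ablation below."""
    total = alpha + beta
    return {key: (alpha * w1[key] + beta * w2[key]) / total for key in w1}
```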
Figure 1: Representation of proposed Federated Learning (FL) framework: (a) the server initially sends the Faster R-CNN model pre-trained on Coco dataset to both clients; (b) for each round, the model is trained locally on the training set of the specific client; (c) updated weights of the model backbone are sent back to the server; (d) the server receives and aggregates the updated weights from both clients; (e) the server sends the result of aggregation back to the two clients. Communication happens between server and clients aggregating at each round the backbone weights (\(w_{1}\) for client 1 and \(w_{2}\) for client 2) with Federated Averaging (FedAvg) defined as \(\alpha w_{1}+\beta w_{2}\), where \(\alpha\) and \(\beta\) represent two parameters that determine the clients' contribution to the aggregation process.
### Datasets
The datasets used in this study exhibit inherent dissimilarities in the acquisition protocols and annotation approaches. This results in visible data heterogeneity: as depicted in Fig. 2, the intensity distributions at the two institutions reveal variation in imaging characteristics, posing domain-shift challenges that can lead to reduced model generalization, increased bias, and ineffective transfer learning. Domain shift arises especially from image characteristics such as imaging equipment, image quality, frame selection process, and patient demographics.
Both datasets consist of gray-scale images of 512x512 pixels, and all images acquired from the same patient are carefully assigned to the same set (training or testing).
#### 2.1.1 Dataset 1
The dataset consists of 1219 CA images, which we acquired at the Ospedali Riuniti of Ancona (Italy). The images are provided by 200 patients, who signed informed consent, underwent the CA procedure, and presented one or more stenotic regions along the coronary arteries (up to 4 stenoses per image). From each patient exam, a few relevant frames are selected by the clinicians of this study, taking into account the presence of high-contrast dye, various viewpoints, and potentially the diastolic phase. The data acquisition is conducted in compliance with the Helsinki Declaration and under the supervision of three expert clinicians, who evaluate the presence of the stenotic regions and provide the annotations. For model training and testing, 1106 (90%) images from 175 patients compose the training set and 112 (10%) images from 25 patients the test set.
#### 2.1.2 Dataset 2
The dataset is provided by Danilov et al. [2] and contains CA series comprising a total of 7492 images. The images are obtained from 100 patients,
Figure 2: Domain shift between the two coronary angiography (CA) datasets in terms of pixel value distribution.
who underwent CA at the Research Institute for Complex Problems of Cardiovascular Diseases (Kemerovo, Russia). All patients had confirmed one-vessel CAD. From each patient exam, images containing contrast passage through a stenotic vessel are extracted in sequences, discarding non-informative images. The manual annotation of the presence or absence of stenotic lesions in each image was performed by a single operator, as described in [2]. For model training and testing, 6660 (90%) images from 80 patients are used as the training set, and 832 (10%) images from 10 patients as the test set. The split was given by [2].
### Experimental protocol
In Faster R-CNN training and for all experiments, we used the Adam optimizer with a constant learning rate set to 0.0001 and a batch size equal to 16. Pixel values in the input images were normalized in the range [0,1], and the images were randomly augmented via horizontal flip. To handle the different sizes of Dataset 1 and Dataset 2 and ensure a comparable number of training steps, the FL framework performed 20 rounds. In each round, the training process of the Faster R-CNN was performed locally for 20 epochs for Dataset 1 and 4 epochs for Dataset 2. For the first round, a warm-up strategy was implemented to promote weight stabilization, and the local training was performed more extensively for 40 and 16 epochs for Dataset 1 and Dataset 2, respectively. FedAvg aggregation is performed by assigning weights proportionally to the dataset sizes, defining \(\alpha=1\) and \(\beta=6\). Even though our clients strongly differ in terms of number of images, we consider that giving more importance to the larger dataset could be beneficial for the smaller one.
We compared the performance of the proposed FL framework with that obtained by training the Faster R-CNN model locally. The local models were trained for 200 epochs for client 1 and 50 epochs for client 2. We further performed the following ablation study:
**FL1:** _Faster R-CNN weight aggregation_
To probe the effectiveness of sharing only the weights of the backbone, thereby focusing on extracting and sharing relevant features, we also performed FL of the whole Faster R-CNN model to evaluate the impact on stenosis detection performance.
**FL2:** _Faster R-CNN backbone weight aggregation with equal client contribution_
Considering the strong size difference between the two datasets, we also explored equal aggregation weights (\(\alpha=1\) and \(\beta=1\)) to ensure a fair and balanced representation of both clients. In this way, we examine whether the size discrepancy between the two datasets leads to any performance penalty to the detriment of the smaller one.
**FL3:** _Faster R-CNN backbone weight aggregation with the exclusion of Batch Normalization layers parameters_
Based on the study of [6], which demonstrated that keeping local Batch Normalization parameters unsynchronized with the global model reduces feature shifts in non-Independent and Identically Distributed (IID) data, as in our case (see Fig. 2), we evaluated whether excluding the statistical, non-trainable parameters of the backbone's Batch Normalization layers could mitigate the discrepancy between the clients.
To assess the performance of stenosis detection, we computed Precision (\(Prec\)), Recall (\(Rec\)) and F1 score (\(F1\)) over each client's test set. We considered a prediction a True Positive (\(TP\)) if it achieved an Intersection over Union (\(IoU\)) with the ground truth annotation greater than or equal to 0.5. Conversely, a predicted bounding box with an \(IoU\) less than 0.5 was considered a False Positive (\(FP\)). When a stenosis did not have any corresponding prediction, it was regarded as a False Negative (\(FN\)). To ensure the lowest number of missed predictions, for each FL framework training and for each local model training, the best configuration was selected in terms of average \(Rec\) over the clients' test sets.
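The matching rule and metrics above can be summarized in a short sketch; this is an illustrative implementation of the IoU-based \(TP\)/\(FP\)/\(FN\) counting, not the evaluation code used in this study.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / max(union, 1e-8)

def detection_metrics(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes."""
    matched, tp = set(), 0
    for p in preds:
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) > best_iou:
                best_j, best_iou = j, iou(p, g)
        if best_iou >= iou_thr:
            tp += 1
            matched.add(best_j)
    fp, fn = len(preds) - tp, len(gts) - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```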
The overall implementation was performed in Python 3.8.10 with the PyTorch v.2.0.0, Torchvision v.0.15.1 and Flower v.2.0.0 libraries. Model training and all experiments were performed on an 8-GPU bank, where one or more GPUs were assigned to each client and one GPU was assigned to the server.
## 3 Results and Discussion
\(Prec\), \(Rec\) and \(F1\) values achieved by the proposed FL framework are reported in Table 1, in comparison with the performance obtained from each client's local training and from **FL1**, **FL2** and **FL3**. Table 1 shows that client 1 performance improved significantly within the FL framework: the \(Prec\), \(Rec\) and \(F1\) values increased considerably compared to local training, by +3.76%, +17.21% and +10.80% respectively, demonstrating the positive effect of the interaction with client 2. Sharing only the backbone weights is noteworthy for client 1: a higher \(Rec\) value compared to **FL1** (+7.66%) is achieved, suggesting a reduction of \(FN\) predictions. Client 2 had a greater contribution in the backbone aggregation process, which introduced additional insights and increased the extraction of intrinsic features, boosting the ability of client 1 to detect stenotic regions. Table 1 also shows that client 2 performance remained relatively
| Model | \(Prec\) | \(Rec\) | \(F1\) | Round | \(Prec\) | \(Rec\) | \(F1\) | Round |
|---|---|---|---|---|---|---|---|---|
| **Local** | 70.89 | 57.14 | 63.28 | – | **93.50** | 82.83 | 87.84 | – |
| **FL1** | 76.25 | 62.24 | 68.54 | 2 | 92.61 | 82.71 | 87.38 | 2 |
| **FL2** | **77.33** | 59.18 | 67.05 | 3 | 92.54 | 83.43 | 87.75 | 1 |
| **FL3** | 69.32 | 62.24 | 65.59 | 1 | 92.91 | 83.43 | **87.92** | 5 |
| **Proposed** | 73.56 | **67.01** | **70.13** | 2 | 91.62 | **84.03** | 87.66 | 1 |

Table 1: Mean values of Precision (\(Prec\)), Recall (\(Rec\)) and F1 score (\(F1\)) on Dataset 1 (left four columns) and Dataset 2 (right four columns): from top to bottom, single clients’ local model, **FL1**, **FL2**, **FL3** and proposed FL framework performances are reported.
stable and unaffected within the FL process: it consistently achieved comparable results across **FL1**, **FL2**, **FL3** and the proposed FL framework. The \(Prec\) value was highest when the model was trained locally, indicating that the introduction of an FL framework may not significantly improve performance for client 2. Regarding **FL3**, differently from [6], client 1 did not benefit from excluding the Batch Normalization layer parameters from the aggregation process, while client 2 exhibited good performance under this setting. With the **FL3** experiment, we focused on investigating the specific impact of Batch Normalization exclusion as addressed in [6]; however, we are aware that recent research [19] has explored alternatives to Batch Normalization to mitigate challenges associated with non-IID data distributions in FL scenarios. In future work, we plan to expand our study by also exploring alternative normalization techniques.
Figure 3 shows the trends of \(Prec\), \(Rec\) and \(F1\) obtained from the proposed FL framework and from **FL1**, **FL2** and **FL3** for different numbers of rounds. The proposed FL framework confirmed its ability to enhance client 1 performance. On the other hand, client 2 demonstrated remarkable results throughout the different rounds with minimal variation, suggesting that the FL framework did not significantly impact its performance. With an increasing number of rounds, the proposed FL framework, as well as **FL1**, **FL2** and **FL3**, exhibited signs of overfitting. This is evident from the declining performance of client 1, especially in terms of \(Rec\) and \(F1\). \(Prec\), instead, did not manifest a similar decline, as it is less susceptible to overfitting than the other metrics. Client 2 performance, then, showed stable trends, without improvements at increasing
Figure 3: Mean values of Precision (\(Prec\)), Recall (\(Rec\)) and F1 score (\(F1\)) at each round of **FL1**, **FL2**, **FL3** and proposed FL framework.
number of rounds. In addition to the current ablation studies (**FL1**, **FL2**, **FL3**), in future work we also plan to explore the impact of uniform training epochs across communication rounds for both clients, to evaluate the effect of FedAvg aggregation in standard settings and its influence on convergence behaviour.
Figure 4 shows the clients' performance, highlighting the difference between their datasets. Client 1 (first row), with its smaller and highly variable dataset, exhibited poorer performance, as visible from the presence of several missing predictions and less precise bounding box overlap. In contrast, for client 2 (second row), stenoses are almost always recognized and localized accurately in the images. Fig. 4 also displays two images belonging to the same patient, one from Dataset 1 and one from Dataset 2, as the third and fourth images of each row: for each patient's CA procedure, during the dataset annotation process, only significant frames were selected at client 1, whereas at client 2 all sequential frames were annotated, disregarding only the frames in which no stenosis was visible. This further accentuated the differences between the datasets and the lower variability of Dataset 2 compared to Dataset 1.
The overall results showed the effectiveness of the proposed FL framework in improving the stenosis detection performance of client 1 by leveraging information provided by client 2. Sharing backbone weights allowed us to transfer knowledge from Dataset 2, overcoming the limits of a small dataset size. On the other hand, client 2, already performing well locally, was not substantially impacted by the FL framework. However, sharing information with Dataset 1, with its smaller but highly variable nature, could have helped client 2 make its model more robust and more generalizable. In addition, the adoption of a partial model sharing approach through the aggregation of backbone weights only, as
Figure 4: Samples of stenosis predictions on Dataset 1 (first row) and Dataset 2 (second row) obtained with the proposed FL framework: ground truth is shown as a green box, predictions with confidence scores in red.
in [22], further enhanced data privacy protection, since only intrinsic and general features are extracted from the CA images.
FL offers several potential advantages for stenosis detection in CA. First, it enables collaborative learning across multiple institutions, allowing the inclusion of diverse datasets and facilitating the development of more generalized models [18]. This is particularly crucial in the context of stenosis detection, as datasets can be very heterogeneous and exhibit wide variations in terms of imaging protocols, annotation procedures, patient populations, and equipment used. By involving multiple institutions and datasets, multicentric studies could provide a more comprehensive and representative view of clinical practice. Moreover, FL offers privacy-preserving capabilities: by performing the training process locally on individual clients' data, FL mitigates the need for data sharing while still enabling collaboration and knowledge sharing among different institutions [18]. This is a critical aspect in medical imaging research, where strict privacy regulations, ownership, regulatory compliance, and ethical considerations come into play [12]. The significance of multicentric studies and privacy-preserving approaches is further emphasized by the growing interest and attention from organizations such as the European Commission, which in 2018 published the Ethics Guidelines for Trustworthy AI 7, a document promoting ethical principles in DL model design and deployment, and which also encourages the use of FL in DL development, providing useful information about the impact of FL on data protection 8.
Footnote 7: [https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html](https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html)
Footnote 8: [https://edps.europa.eu/press-publications/publications/techsonar/federated-learning_en](https://edps.europa.eu/press-publications/publications/techsonar/federated-learning_en)
Overall, our study highlights the importance of considering data heterogeneity and privacy concerns in the development of stenosis detection models. Even though further research is needed to optimize the FL process and to include multiple institutions for a wider representation of data heterogeneity, our work opens the way for efficient clinician support in stenosis detection from CA, ultimately leading to improved patient clinical outcomes.
## 4 Conclusion
In this study, we explored the use of FL for stenosis detection in CA. Training with data from different institutions is particularly relevant in this context, where datasets exhibit wide variations in imaging protocols, annotation procedures, patient populations, and equipment used, in addition to the intrinsic challenges of CA imaging. Our FL framework, by sharing the Faster R-CNN backbone weights, improved stenosis detection accuracy for client 1, achieving an increase in \(Prec\), \(Rec\) and \(F1\) of +3.76%, +17.21% and +10.80%, respectively, while client 2, which already achieved high stenosis detection ability when training the model locally, did not benefit significantly from the FL framework. We hope our study may pave the way for future studies on privacy-preserving computer-assisted algorithms for CAD diagnosis.
2303.01332 | Self-Supervised Few-Shot Learning for Ischemic Stroke Lesion
Segmentation | Precise ischemic lesion segmentation plays an essential role in improving
diagnosis and treatment planning for ischemic stroke, one of the prevalent
diseases with the highest mortality rate. While numerous deep neural network
approaches have recently been proposed to tackle this problem, these methods
require large amounts of annotated regions during training, which can be
impractical in the medical domain where annotated data is scarce. As a remedy,
we present a prototypical few-shot segmentation approach for ischemic lesion
segmentation using only one annotated sample during training. The proposed
approach leverages a novel self-supervised training mechanism that is tailored
to the task of ischemic stroke lesion segmentation by exploiting color-coded
parametric maps generated from Computed Tomography Perfusion scans. We
illustrate the benefits of our proposed training mechanism, leading to
considerable improvements in performance in the few-shot setting. Given a
single annotated patient, an average Dice score of 0.58 is achieved for the
segmentation of ischemic lesions. | Luca Tomasetti, Stine Hansen, Mahdieh Khanmohammadi, Kjersti Engan, Liv Jorunn Høllesli, Kathinka Dæhli Kurz, Michael Kampffmeyer | 2023-03-02T15:10:08Z | http://arxiv.org/abs/2303.01332v2 | # Self-Supervised Few-Shot Learning for Ischemic Stroke Lesion Segmentation
###### Abstract
Precise ischemic lesion segmentation plays an essential role in improving diagnosis and treatment planning for ischemic stroke, one of the prevalent diseases with the highest mortality rate. While numerous deep neural network approaches have recently been proposed to tackle this problem, these methods require large amounts of annotated regions during training, which can be impractical in the medical domain where annotated data is scarce. As a remedy, we present a prototypical few-shot segmentation approach for ischemic lesion segmentation using only one annotated sample during training. The proposed approach leverages a novel self-supervised training mechanism that is tailored to the task of ischemic stroke lesion segmentation by exploiting color-coded parametric maps generated from Computed Tomography Perfusion scans. We illustrate the benefits of our proposed training mechanism, leading to considerable improvements in performance in the few-shot setting. Given a single annotated patient, an average Dice score of 0.58 is achieved for the segmentation of ischemic lesions.
Luca Tomasetti\({}^{1}\), Stine Hansen\({}^{2}\), Mahdieh Khanmohammadi\({}^{1}\), Kjersti Engan\({}^{1}\), Liv Jorunn Høllesli\({}^{1,3}\), Kathinka Dæhli Kurz\({}^{1,3}\), Michael Kampffmeyer\({}^{2}\)
\({}^{1}\) Department of Electrical Engineering and Computer Science, University of Stavanger, Norway
\({}^{2}\) Department of Physics and Technology, UiT The Arctic University of Norway, Norway
\({}^{3}\) Department of Radiology, Stavanger University Hospital, Norway
Acute Ischemic Stroke, Few-Shot Segmentation, Self-Supervision, Supervoxels
## 1 Introduction
Globally, neurological disorders are the leading cause of disability-adjusted life years and the second leading cause of death [1]. Cerebral stroke is a major contributor to this burden, being the number one cause of neurological disability and the third-leading cause of death and disability combined [2, 3]. In response, the World Health Organisation has identified the optimization of brain health as a fundamental step to ensure human health and well-being [1]. Towards this goal, we focus on understanding and segmenting ischemic brain tissue in patients suspected of acute ischemic stroke (AIS), which comprises the majority of strokes [4].
Computed Tomography Perfusion (CTP) is often used as the primary diagnostic modality in the initial stages of an AIS [5]. A CTP study is a 4D examination of an area of the brain where an ischemic stroke lesion (ISL) is suspected. The study collects a series of 3D scans over the same portion of the brain during the injection of an iodinated contrast agent over time. Based on the brain tissue changes over time, color-coded parametric maps (PMs) are calculated. These PMs highlight the possible variations in the brain tissue, allowing neuroradiologists to diagnose and plan treatments rapidly [6, 7]. In particular, we consider the cerebral blood flow (CBF), cerebral blood volume (CBV), time-to-peak (TTP), and time-to-maximum (TMax) PMs and the maximum intensity projection (MIP), which are all highly relevant indicators for ISL segmentation [6]. Fig. 1 shows a set of PMs and the MIP for an image slice and the corresponding labeled ISL.
Numerous applications have been explored to tackle this neurological disorder, traditionally using thresholding of different PMs [8, 9], but with lacking consensus on which PMs and threshold combination. Newer methods are relying on machine learning (ML) approaches [7, 10] or deep neural network (DNN) architectures [11, 12] and have generated promising results for segmenting ISLs. Nevertheless, all these architectures are supervised learning methods and require large amounts of training examples manually annotated by expert neuroradiologists. This can be problematic in the medical domain, where annotated training data is often scarce due to the time-consuming nature of the labeling process. Moreover, manual annotations might only give a coarse representation of the ISL, excluding all the minor details in the area, which can lead to imperfect predictions if used during model training.
A promising alternative that has been gaining momentum in recent years is self-supervised few-shot segmentation (FSS) [14, 15], where DNN algorithms require only a few labeled training images. This can be crucial in the medical domain for reducing the need for annotated data and for enabling the models to also learn rare cases for which annotated data can be scarce. These models are trained and tested in episodes where a few labeled support images are used to guide the segmentation of the unlabeled query images. To bypass the need to manually annotate the training data, the training episodes are constructed in a self-supervised manner by leveraging automatically generated superpixel/supervoxel-based pseudolabels. Then, during inference, only a few manually labeled support images are required to segment the unlabeled query images.
Ouyang et al. [14] introduced a novel FSS framework for detecting organs in medical images using superpixel-based pseudolabels rather than manual annotations during training. Hansen et al. [15] further extended the self-supervision task to leverage 3D information via supervoxels and proposed a novel anomaly detection-inspired approach for prototypical FSS to increase the model's robustness to the heterogeneous background, resulting in the current state-of-the-art self-supervised FSS model. During each training episode, two random slices containing the same random supervoxel are sampled to
Figure 1: Parametric maps (a-e) of a single brain slice plus the labeled ISL area (f). In this patient, there is an ischemic area in the vascular territory of the left middle cerebral artery (pointed to by a white arrow).
act as the support and query slice, with the supervoxel region acting as the pseudolabel for the self-supervised task. Random transformations are then applied to the support or query image-pseudolabel pair and the objective is to segment the query's pseudolabel using as reference the support pair. This gives the model a procedure to train unsupervised, i.e., without using labeled data.
These recent methods rely on the extraction of superpixels/supervoxels from the raw input images, which works well when the task is to segment classes that are relatively homogeneous and distinctly different from the background (e.g., liver or kidneys in abdominal CT scans), where regions of interest are thus likely to be grouped in supervoxels. However, their performance decreases considerably when the class to segment is not distinctly different from the background, as is the case in ISL segmentation. For this reason, we propose an extension of the approach developed by Hansen et al. [15] that is particularly suited for the task of ISL segmentation. Unlike Hansen et al. [15], we do not generate supervoxels directly from the raw 4D CTP study; instead, we leverage domain knowledge by generating the supervoxels used as targets for the self-supervised learning task from the PMs. We argue that while supervoxels generated from raw 4D CTP studies cannot correctly describe the boundaries inside the brain tissue, supervoxels extracted from the PMs can capture informative sub-regions, thus resulting in improved training targets. To the best of our knowledge, this is the first study exploiting self-supervised mechanisms for ISL segmentation.
The main contributions of this work can be summarized as follows:
1. We propose a novel self-supervised mechanism specifically tailored for ISL segmentation that leverages supervoxels extracted from a set of stacked PMs.
2. Exploiting the self-supervised mechanism, we propose a prototypical FSS framework for ISL segmentation and demonstrate the benefit of leveraging PMs for self-supervision.
## 2 Data Material
We have analyzed 152 CTP scans from distinct patients, acquired with a Siemens CT scanner between January 2014 and August 2020. Patients were divided into two groups based on the level of vessel occlusion: 77 patients had a large vessel occlusion (LVO) and 60 patients had a non-large vessel occlusion (Non-LVO). In addition, we included 15 patients admitted with stroke-like symptoms who, after diagnostic work-up, were determined not to have suffered from an ischemic stroke, defined as without ischemic stroke (WIS). A split of 89/30/33 patients was conducted for training, validation, and testing. The CTP scans and the relative PMs were obtained using the "syngo.via" software from Siemens Healthineers (Siemens Medical Solutions, USA). Two expert neuroradiologists with 17 and 4.5 years of clinical experience manually annotated the ischemic areas using in-house software. The annotations were delineated by examining the entire set of PMs (CBV, CBF, TTP, TMax) and the MIP generated from a CTP study. Moreover, as assistance the neuroradiologists used follow-up examinations, including Magnetic Resonance Imaging (MRI) for most patients, performed within 1 to 3 days after the initial CT examination, to assess the final infarction areas.
### Pre-processing steps
Each CTP study underwent a series of pre-processing steps to extract brain tissue from the raw scans and to enhance the contrast. The pre-processing steps can be summarized as follows:
1. Each study is converted into Hounsfield unit (HU) values to have a known quantitative scale for describing radio-density.
2. Co-registration of each slice is executed using the first slice as a frame of reference.
3. Brain tissue is extracted using an algorithm by Najm et al. [16]. This algorithm was selected due to its public availability and proven efficiency for CT scans.
4. Gamma correction and histogram equalization are performed after the brain extraction for contrast enhancement; a sketch of this step follows the list.
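A minimal sketch of the contrast-enhancement step (step 4), assuming scikit-image is available; the gamma value is purely illustrative and not the one used in this work:

```python
import numpy as np
from skimage import exposure

def enhance_contrast(brain_volume, gamma=0.5):
    """Step 4: gamma correction followed by histogram equalization,
    applied after brain extraction (gamma here is illustrative)."""
    v = brain_volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)  # rescale to [0, 1]
    v = exposure.adjust_gamma(v, gamma)
    return exposure.equalize_hist(v)
```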
## 3 Method
In this work, we propose a PM-based self-supervised few-shot segmentation model for ISLs in patients suspected of AIS. The task is formulated as a binary segmentation, where the foreground corresponds to the area of interest. Instead of extracting pseudolabels from the raw input as in previous approaches [14, 15], which for ISLs leads to sub-optimal performance due to the homogeneous nature of the brain region, we leverage domain knowledge to improve the self-supervision task. In particular, we extract pseudolabels from the color-coded PMs that capture more informative sub
Figure 2: General overview of the various steps involved in the method. Supervoxel regions are generated from the color-coded PMs with a modified 3D version of Felzenszwalb’s image segmentation algorithm [13]. A series of pre-processing steps (details in Sec. 2.1) are implemented to extract brain tissue. Two random slices containing the same supervoxel are then sampled to act as the support and query slice, with the supervoxel region acting as the pseudolabel for the self-supervised task. Random transformations are then applied to either the support or query volume to provide the support-query pair that is used as input to train the ADNet model.
regions. After generating the PMs from the 4D CTP input data and extracting pseudolabels, we take inspiration from the ADNet model proposed in [15] to learn a binary classifier that can segment the CTP input based on one annotated image slice. Fig. 2 gives a general overview of all the steps involved in the proposed method.
### Parametric Map-Based Self-Supervision network
The proposed PM-based self-supervised network is an extension of ADNet [15] and is trained using supervoxel-based pseudolabels extracted from the unlabeled training images. A supervoxel can be considered a group of neighboring voxels in an image with a similar nature. To obtain the supervoxels, we follow prior approaches in FSS [14, 15] and use Felzenszwalb's algorithm [13], as it provides more diverse supervoxels than alternative approaches such as Simple Linear Iterative Clustering (SLIC) [17]. We leverage the 3D version of Felzenszwalb's algorithm to produce supervoxels from the 3D image volume2 and extend the Euclidean distance calculation within the algorithm so that each voxel in the image can be represented by a vector, thus accepting 4D input tensors \(I\in\mathbb{R}^{(W\times H\times Z\times M)}\) and returning a 3D volume \(O\in\mathbb{R}^{(W\times H\times Z)}\) with the corresponding segmented regions. The four dimensions of Felzenszwalb's input \(I\) are defined as width (\(W\)), height (\(H\)), depth (\(Z\)), and an additional dimension (\(M\)) corresponding to the number of modalities 3, hence \(M=5\); we name Felzenszwalb's input \(I_{\text{PMs}}\) and the corresponding output \(O_{\text{PMs}}\). A CTP scan after the pre-processing steps, which corresponds to the input of the proposed method, is called \(I_{\text{CTP}}\in\mathbb{R}^{(W\times H\times Z\times M)}\), where \(M\) corresponds to the time dimension (\(M=T=30\)). Let \(X_{s}\) and \(X_{q}\) represent the support and query 2D+time slices extracted from the same brain volume \(I_{\text{CTP}}\), respectively. The algorithm relies on one parameter, \(\rho\), which controls the supervoxel size. Experiments were performed to tune \(\rho\) to improve the segmentation accuracy of the ischemic areas.
Footnote 2: Code is publicly available at [https://github.com/sha168/Felzenszwalb-supervoxel-segmentation](https://github.com/sha168/Felzenszwalb-supervoxel-segmentation)
Footnote 3: CBV, CBF, TTP, TMax, MIP in our proposed approach
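To make the construction of \(I_{\text{PMs}}\) concrete, the following sketch stacks the five 3D PMs into the 4D tensor described above. The call to felzenszwalb_3d is a hypothetical stand-in for the authors' 3D extension (footnote 2 links the actual implementation):

```python
import numpy as np

def build_pm_input(cbv, cbf, ttp, tmax, mip):
    """Stack the five 3D parametric maps, each of shape (W, H, Z),
    into the 4D tensor I_PMs of shape (W, H, Z, M) with M = 5."""
    return np.stack([cbv, cbf, ttp, tmax, mip], axis=-1)

# Hypothetical usage; felzenszwalb_3d is a placeholder name for the
# authors' extended algorithm (see the linked repository):
# i_pms = build_pm_input(cbv, cbf, ttp, tmax, mip)  # (W, H, Z, 5)
# o_pms = felzenszwalb_3d(i_pms, scale=rho)         # (W, H, Z) supervoxel labels
```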
An illustration of the ADNet architecture is shown in Fig. 3. During training, the proposed model accepts support \(\mathcal{S}=\{X_{s},L_{s}\}\) and query \(\mathcal{Q}=\{X_{q},L_{q}\}\) pairs as input, where \(L_{s}\) and \(L_{q}\) are the 2D binary label images for \(X_{s}\) and \(X_{q}\). In particular, each training episode is constructed from one unlabeled CTP scan by first choosing a random supervoxel extracted from the PMs to act as the foreground class, and then sampling two 2D image slices (+time) containing this supervoxel to act as support and query images (\(X_{s}\) and \(X_{q}\)). Training labels \(L_{s}\) and \(L_{q}\) are constructed from the supervoxel segmentations of the corresponding PMs: the selected supervoxel is the foreground, while the union of the remaining supervoxels acts as the background class. Random transformations are applied to one of the support or query volumes, based on a 50% probability, to encourage invariance to shape and intensity differences [14]. \(X_{s}\) and \(X_{q}\) are fed to a shared feature encoder to extract high-level features \(F_{s}\) and \(F_{q}\), respectively. The support foreground mask \(L_{s}\) is used to perform masked average pooling (MAP) of the support features \(F_{s}\) to compute a foreground prototype \(p\in\mathbb{R}^{d}\), where \(d\) is the dimension of the embedding space. To segment the query volume, the negative cosine similarity (\(CS\)) is computed between the prototype \(p\) and the feature map \(F_{q}\). The predicted foreground mask \(\hat{y}_{q}\) is found by thresholding the computed \(CS\) with a learned parameter \(Th\), and the loss proposed in [15] is optimized given the query label \(L_{q}\). During the inference phase, we follow the evaluation protocol of [15] and sample one single 2D+time slice \(X_{s}\) from a support patient. The selected slice is always the middle slice of the support volume, to avoid limited or non-existing ischemic regions at the edges of the volumes. \(L_{s}\) is the corresponding 2D slice with the labeled ISL. The support pair \(\{X_{s},L_{s}\}\) is used to segment ISLs in the entire query patient one slice at a time. For a detailed explanation of the steps involved in the training and inference phases, we refer the reader to [15].
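The prototype step of one episode can be sketched in PyTorch as follows; this is a simplified illustration of masked average pooling and the thresholded negative cosine similarity, not the reference ADNet code:

```python
import torch
import torch.nn.functional as F

def segment_query(f_s, f_q, l_s, th):
    """f_s, f_q: support/query features of shape (d, h, w);
    l_s: binary support mask (h, w) at feature resolution;
    th: learned threshold on the negative cosine similarity."""
    # masked average pooling: foreground prototype p in R^d
    p = (f_s * l_s).sum(dim=(1, 2)) / (l_s.sum() + 1e-8)
    # cosine similarity between p and every query location
    cs = F.cosine_similarity(f_q, p[:, None, None].expand_as(f_q), dim=0)
    # predict foreground where the anomaly score -CS is below Th
    return (-cs < th).float()
```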
## 4 Experiments
The proposed method is implemented to segment ISLs in patients suspected of AIS. In the following, we illustrate the benefit of leveraging the domain knowledge in the form of color-coded PMs for supervoxel extraction, rather than the raw CTP. For this, we compare the proposed method to a baseline approach that directly applies ADNet to the raw CTP scans, called _CTP-Baseline_. To further demonstrate the need to jointly leverage both the raw CTP scans and the PMs, we compare to another baseline that utilizes PMs both as input and for the pseudolabel generation, which we term _PMs-Baseline_.
**Baselines** Our _CTP-Baseline_ model uses supervoxels generated from raw 4D CTP studies. For Felzenszwalb's algorithm the 4D input is \(I_{\text{CTP}}\). The same input is used in the model. The _PMs-Baseline_ method utilizes supervoxels generated directly from a set of 3D PMs stacked over the fourth dimension (Sec. 3.1). The 4D input, both for producing supervoxels and for the _PMs-Baseline_, is \(I_{\text{PMs}}\).
**Evaluation** To demonstrate the applicability of our proposed method, we performed evaluations on the following datasets:
* **CTP-ALL** comprises the full dataset consisting of all three groups involved in this study (152 subjects).
* **CTP-LVO** contains only patients within the LVO group (77 subjects). Patients from this group are a clinically significant share of patients with AIS, considering the extent of the ischemic area and the grim natural course of the condition. This dataset is used to illustrate the effect of dataset variability during training.
Following standard practice in the literature [14, 15], we evaluate the performance by measuring the Dice score (DS); furthermore, we measure the predictions with the Matthews Correlation Coefficient (MCC) [18], which has been proven to provide a better measurement for imbalanced datasets [19]. Moreover, we employ the absolute difference in volumes \(\Delta V(V_{y},V_{\hat{y}})=|V_{y}-V_{\hat{y}}|\) between the predicted volume \(V_{\hat{y}}\) and the manually annotated volume \(V_{y}\). This metric is essential for the WIS group because of the lack of labeled areas in this group, which makes it impossible to rely on the DS or the MCC.
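For concreteness, the three metrics on boolean masks can be written as follows (an illustrative sketch, not the evaluation code of this study):

```python
import numpy as np

def dice(y, y_hat):
    """Dice score between two boolean masks."""
    inter = np.logical_and(y, y_hat).sum()
    return 2.0 * inter / (y.sum() + y_hat.sum() + 1e-8)

def mcc(y, y_hat):
    """Matthews Correlation Coefficient between two boolean masks."""
    tp = float(np.logical_and(y, y_hat).sum())
    tn = float(np.logical_and(~y, ~y_hat).sum())
    fp = float(np.logical_and(~y, y_hat).sum())
    fn = float(np.logical_and(y, ~y_hat).sum())
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / (denom + 1e-8)

def delta_v(y, y_hat, voxel_volume=1.0):
    """Absolute volume difference |V_y - V_y_hat|."""
    return abs(float(y.sum()) - float(y_hat.sum())) * voxel_volume
```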
We evaluate the effect of leveraging supervoxels obtained from the PMs via a comparison of the performance of our proposed method and the baseline approaches. In order to select the best hyperparameter \(\rho\) for the three models, we further performed a sensitivity study where we analyzed the parameter \(\rho\) in relation to the aggregate DS and the \(\Delta V\) from the set of validation patients. These two-step experiments help create a fair comparison among the analyzed models.
## 5 Results & Discussion
Table 1 compares our proposed method with the baseline approaches for the test datasets split into the three groups (LVO, Non-LVO, WIS). Moreover, a comparison with the current state-of-the-art supervised method [12] is presented. Results are given as the mean
Figure 3: Visual overview of the ADNet architecture [15].
(+ standard deviation) of \(22\) inference runs, each with a single slice from a different patient as support. The proposed method and baseline results are shown with the best \(\rho\) parameter, selected over a set of experiments on the validation set. In Table 1, we can see that our proposed model outperforms the baseline approaches for both datasets. The overall best performance is obtained by our proposed method using the **CTP-LVO** dataset for training, which indicates that the use of a single group of patients during the training phase gives a better understanding of the ischemic region.
The qualitative results in Fig. 4 support the quantitative results in Table 1. Training the models with images from the **CTP-LVO** dataset results in satisfactory segmentations of the ISLs, especially for the LVO group. With the adoption of **CTP-ALL** as the training dataset, the results follow the correct structure of the ISL but with a tendency toward under-segmentation compared to the method trained with the **CTP-LVO** dataset. Outcomes from our proposed approach (third and last columns in Fig. 4) are not perfect, but they strongly reflect the area of the brain with an ISL, especially for patients in the LVO group. The _CTP-Baseline_ model brings unnecessary under-segmentation (fourth column in Fig. 4) or excessive over-segmentation (first column in Fig. 4), depending on the dataset used for training. The _PMs-Baseline_ approach yields more stable segmentations across different support images. Employing \(I_{\text{PMs}}\) as input when creating the supervoxels, in combination with \(I_{\text{CTP}}\) as input for the model, provides a superior balance in the predicted segmentation regions; however, none of the methods is suitable for predicting ISLs in Non-LVO patients. The small size of these areas makes it harder for the model to understand and adequately segment the ISL. The proposed approach, trained with the **CTP-LVO** dataset, achieved the best metrics among all the analyzed groups. This implies that the proposed PM-based self-supervision method is more suitable for segmenting ischemic regions and that there is a benefit to using PMs for supervoxel generation rather than relying on supervoxels extracted from raw CTP studies. Although the performances are not as high as the supervised method [12], which utilizes the entire labeled dataset during training, the achieved segmentation accuracy for the LVO group is still encouraging. This accuracy was attained using only a single labeled 2D+time slice, emphasizing the significance of employing domain knowledge in a limited labeled data setting.
## 7 Compliance with Ethical Standards
This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Regional Ethic Committee Project 2012/1499.
## 8 Acknowledgments
This work was supported by The Research Council of Norway (RCN), through its Centre for Research-based Innovation funding scheme [grant number 309439]; RCN FRIPRO [grant number 315029]; RCN IKTPLUSS [grant number 303514]; and the UiT Thematic Initiative. The authors would like to thank the Neuroscience Research Group, SUS, under the supervision of prof. Martin W. Kurz for providing clinical patient information.
|
2305.17806 | On Entanglement Measures: Discrete Phase Space and Inverter-Chain Link
Viewpoint | In contrast to abstract statistical analyses in the literature, we present a
concrete physical diagrammatic model of entanglement characterization and
measure with its underlying discrete phase-space physics. This paper serves as
a pedagogical treatment of this complex subject of entanglement measures. We
review the important inherent concurrence property of entangled qubits, as well
as underscore its emergent qubit behavior. From the discrete phase space point
of view, concurrence translates to translation symmetry of entangled binary
systems in some quantitative measure of entanglement. Although the focus is on
bipartite system, the notion is readily extendable to multi-partite system of
qubits, as can easily be deduced from the physical inverter-chain link model. A
diagrammatic analysis of the entanglement of formation for any multi-partite
qubit system is given | Felix A. Buot | 2023-05-28T19:54:02Z | http://arxiv.org/abs/2305.17806v1 | # On Entanglement Measures: Discrete Phase Space and Inverter-Chain Link Viewpoint
###### Abstract
In contrast to abstract statistical analyses in the literature, we present a concrete physical diagrammatic model of entanglement characterization and measure with its underlying discrete phase-space physics. This paper serves as a pedagogical treatment of this complex subject of entanglement measures. We review the important inherent concurrence property of entangled qubits, and underscore their emergent qubit behavior. From the discrete phase space point of view, concurrence translates to translation symmetry of entangled binary systems in some quantitative measure of entanglement. Although the focus is on bipartite systems, the notion is readily extendable to multi-partite systems of qubits, as can easily be deduced from the physical inverter-chain link model. A diagrammatic analysis of the entanglement of formation for any multi-partite qubit system is given.
## 1 Introduction
Quantum entanglement has developed from a mere intellectual curiosity [1] of the fundamental structure of quantum mechanics1 to become an important and practical resource for quantum information processing in the evolving theory of quantum information and ultra-fast computing. Thus, the quantitative measure of entanglement has developed into one of the most active fields of theoretical and experimental research. Here we will try to shed more light on some of the important concepts in the quantification of quantum entanglement by using a concrete simple mechanical model of a bipartite system of qubits or chain of
qubits. This treatment contrasts with the mostly abstract and statistical treatments of entanglement measures in the literature.
For the two-qubit entanglement, we will focus on the so-called entanglement of formation and concurrence, two of the most important concepts used to characterize entanglement resources. Here we consider a qubit as a two-state system. Moreover, we also consider entangled qubits, taken as a whole, as effectively a two-state system, an _emergent qubit_, as depicted in our physical inverter-chain link model. A two-state system has unity entropy; thus, a maximally entangled state has entropy equal to \(1\). From the discrete phase space point of view, any two-state system can be considered to possess two lattice-site states. A discrete Fourier transform or Hadamard transform implements a unitary superposition of the two _lattice-site states_ (also referred to as '_Wannier functions_') to yield a sort of _crystal-momentum states_ (also referred to as '_Bloch functions_'). For example, take the \(\left|00\right\rangle\) and \(\left|11\right\rangle\) '_lattice-site states_'; then the Hadamard bijective discrete transformation gives the '_crystal-momentum states_', \(\Phi^{+}\) and \(\Phi^{-}\), which are two of the Bell basis states (or '_Bloch function_' states).
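This Wannier-to-Bloch bijection is easy to verify numerically; a minimal numpy sketch (illustrative only) applies the Hadamard matrix to the two lattice-site states of the triplet space:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# lattice-site ("Wannier") states |00> and |11> in the basis |00>,|01>,|10>,|11>
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
ket11 = np.array([0.0, 0.0, 0.0, 1.0])

# Hadamard superposition of the site states gives the "Bloch" (Bell) states
bloch = H @ np.stack([ket00, ket11])   # rows: |Phi+>, |Phi->
assert np.allclose(bloch[0], (ket00 + ket11) / np.sqrt(2))  # |Phi+>
assert np.allclose(bloch[1], (ket00 - ket11) / np.sqrt(2))  # |Phi->
```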
## 2 Bell Basis Deduced from Inverter-Chain Link Model
We sketch here the derivation of the Bell basis states from our physical model, as depicted in Fig. 1. The derivation is based on considering each entangled diagram as a two-state (binary system) _emergent_ qubit. Clearly, the entanglement of two _bare_ qubits is divided into two orthogonal spaces of triplet 2 and singlet entanglement states.
Footnote 2: The use of the term ”triplet” is actually a misnomer here since the entangled system is not free to assume a singlet or zero spin state. Thus, this term is used here only as a label.
We also have the inverse relationship
\[\left(\begin{array}{c}\left|00\right\rangle\\ \left|11\right\rangle\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array} []{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|\Phi^{+}\right\rangle\\ \left|\Phi^{-}\right\rangle\end{array}\right) \tag{1}\]
\[\left(\begin{array}{c}\left|01\right\rangle\\ \left|10\right\rangle\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array} []{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|\Psi^{+}\right\rangle\\ \left|\Psi^{-}\right\rangle\end{array}\right) \tag{2}\]
We refer to \(\left\{\left|00\right\rangle,\left|11\right\rangle\right\}\) as the "_Wannier functions_" space and the \(\left\{\left|\Phi^{+}\right\rangle,\left|\Phi^{-}\right\rangle\right\}\) as the corresponding "_Bloch functions_" space of a two-state _triplet_ entanglement system. Likewise, \(\left\{\left|01\right\rangle,\left|10\right\rangle\right\}\) as the "_Wannier functions_" space and the \(\left\{\left|\Psi^{+}\right\rangle,\left|\Psi^{-}\right\rangle\right\}\) as the corresponding "_Bloch functions_" space of a two-state singlet entanglement system.
By virtue of this bijective relationship, any function of Wannier functions will have a corresponding function of Bloch functions. For example, maximally _mixed_ Wannier-function states will generate maximally mixed Bloch-function states, the so-called maximally _mixed_ entanglement states. Mixed and pure states will be further discussed below.
## 3 Entangled Qubits as an Emergent Qubit
The virtue of our inverter-chain link model is that the emergent two-state property of entangled qubits is very transparent, since changing the state of one of the qubits also _immediately_ changes the states of the rest of the entangled partner(s) 3. Here, we will rigorously justify our claim that entangled qubits behave, as a whole, as an _emergent_ qubit.
Footnote 3: The idea that entanglement is due to conservation of momentum does not hold for triplet entanglement since its two states give opposing spin-angular momentum. However, one may interpret that quantum superposition of the two-opposing angular momentum states conserves the overall zero net spin-angular momentum, as supported by their eigenvalues similar to a single qubit. On the other hand, for singlet entanglement, the zero angular momentum is conserve in both two states. This seemingly apparent physical difference of the triplet and singlet entanglements underscores the importance of resolving the ”mysterious link” in the EPR inquiry, [1] in order to further advance theoretical physics.
Here, we employ the matrix representation of states and operators. We now represent \(\left|\Phi^{+}\right\rangle\) and \(\left|\Phi^{-}\right\rangle\) as
\[\left|\Phi^{+}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ 1\end{array}\right)\] \[\left|\Phi^{-}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ -1\end{array}\right)\]
Figure 1: Physical diagrammatic model of ”triplet” (left) and singlet (right) entanglement. By construction, each diagram is viewed as a two-state system, respectively. The Bell basis are readily derived below each diagram, using the Hadamard transformation. The actual physical implementation of the chain of inverters may need frictionless male/female sliding tube coupling for large-angle swing, but this is beside the point.
Then we have
\[\left\langle\Phi^{+}\right|\sigma_{x}\left|\Phi^{+}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&1\end{array}\right)\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}1\\ 1\end{array}\right)=1\] \[\left\langle\Phi^{-}\right|\sigma_{x}\left|\Phi^{-}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&-1\end{array}\right)\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}1\\ -1\end{array}\right)=-1\]
proving the two eigenvalues \(\pm 1\), i.e., a qubit-like property of the triplet system.
For the singlet, it is more convenient to change the phase of the second term by \(i\) so that we have
\[\left|\Psi^{+}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ i\end{array}\right)\] \[\left|\Psi^{-}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ -i\end{array}\right)\]
Then, we have
\[\left\langle\Psi^{+}\right|\sigma_{y}\left|\Psi^{+}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&-i\end{array}\right)\left( \begin{array}{cc}0&-i\\ i&0\end{array}\right)\left(\begin{array}{c}1\\ i\end{array}\right)=1\] \[\left\langle\Psi^{-}\right|\sigma_{y}\left|\Psi^{-}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&i\end{array}\right)\left( \begin{array}{cc}0&-i\\ i&0\end{array}\right)\left(\begin{array}{c}1\\ -i\end{array}\right)=-1\]
again proving the qubit property of the singlet system, whose two-state aspect is quite apparent in Fig. 1.
The above manipulation is a somewhat round-about trick for the singlet entanglement. The most straightforward approach is to recognize the singlet system as an independent system with its own two states, which are orthogonal to the triplet states. In this sense we can also represent the singlet states just like the triplet states, namely,
\[\left|\Psi^{+}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ 1\end{array}\right)\] \[\left|\Psi^{-}\right\rangle = \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ -1\end{array}\right)\]
Note that in the "Bloch-state" space, \(\sigma_{z}\) of the "Wannier-state" space is transformed to \(\sigma_{x}\) for entangled states,
\[\sigma_{x} = H\sigma_{z}H\] \[= \frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\] \[= \left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\]
Thus, we also have for the singlet states
\[\left\langle\Psi^{+}\right|\sigma_{x}\left|\Psi^{+}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&1\end{array}\right)\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}1\\ 1\end{array}\right)=1 \tag{3}\] \[\left\langle\Psi^{-}\right|\sigma_{x}\left|\Psi^{-}\right\rangle = \frac{1}{2}\left(\begin{array}{cc}1&-1\end{array}\right)\left( \begin{array}{cc}0&1\\ 1&0\end{array}\right)\left(\begin{array}{c}1\\ -1\end{array}\right)=-1 \tag{4}\]
Following this notion, Eqs. (3) and (4) can easily be extended to all entangled qubits, for bipartite or multi-partite qubit systems, to yield an emergent qubit. It is far simpler to analyze the inverter-chain link diagrams or to employ diagrammatic analyses.
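As a quick numerical check of Eqs. (3) and (4) in the two-dimensional Bloch-state representation (an illustrative sketch):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])
s = 1 / np.sqrt(2)
psi_plus = s * np.array([1, 1])    # two-state representation of Psi+
psi_minus = s * np.array([1, -1])  # two-state representation of Psi-

print(psi_plus @ sigma_x @ psi_plus)    #  1.0, cf. Eq. (3)
print(psi_minus @ sigma_x @ psi_minus)  # -1.0, cf. Eq. (4)
```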
## 4 Translational/Shift Invariance of Bell Basis
The translational invariance of maximally entangled Bell basis states demonstrates a unique characteristic of entangled qubits. This unique property has been used to detect or measure how much entanglement is present in arbitrary pure and mixed states. This is implemented in terms of the notion of concurrence, to be elaborated below.
The translational or shift operation (also referred to as the _flip_ operation in the literature) is demonstrated here for the Bell Basis states and their superpositions. First, let us consider all the Bell basis states. We have
\[\Phi^{+}=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle\left|0\right\rangle+ \left|1\right\rangle\left|1\right\rangle\right)\]
Upon applying the translation (shift by \(+1\left(\mathrm{mod}\,2\right)\)) operator, \(T\left(+1\right)\), we obtain
\[T\left(+1\right)\Phi^{+} = \frac{1}{\sqrt{2}}\left(\left|0+1\right\rangle\left|0+1\right\rangle +\left|1+1\right\rangle\left|1+1\right\rangle\right)\] \[= \frac{1}{\sqrt{2}}\left(\left|1\right\rangle\left|1\right\rangle+ \left|0\right\rangle\left|0\right\rangle\right)\] \[= \Phi^{+}\]
where addition obeys modular arithmetic (\(mod\) 2). Similarly, we have
\[T\left(+1\right)\Phi^{-} = -\Phi^{-}\] \[T\left(+1\right)\Psi^{-} = -\Psi^{-}\] \[T\left(+1\right)\Psi^{+} = \Psi^{+}\]
We still consider \(-\Phi^{-}\) and \(-\Psi^{-}\) as invariant since they differ from the unshifted Bell states only by a global phase factor. In the literature, this translation operation is the so-called flipping operation first used by Wootters et al. [2, 3].
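Since the shift by \(+1\) (mod 2) acts on each qubit as the Pauli \(X\) operator, we have \(T(+1)=X\otimes X\) for two qubits, and the invariance (up to a global phase) can be checked numerically with the following sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
T = np.kron(X, X)   # T(+1): shift each qubit by +1 (mod 2)

s = 1 / np.sqrt(2)
phi_p = s * np.array([1, 0, 0, 1])    # |Phi+> in basis |00>,|01>,|10>,|11>
phi_m = s * np.array([1, 0, 0, -1])   # |Phi->
psi_p = s * np.array([0, 1, 1, 0])    # |Psi+>
psi_m = s * np.array([0, 1, -1, 0])   # |Psi->

assert np.allclose(T @ phi_p, phi_p)     # invariant
assert np.allclose(T @ phi_m, -phi_m)    # invariant up to a global phase
assert np.allclose(T @ psi_p, psi_p)     # invariant
assert np.allclose(T @ psi_m, -psi_m)    # invariant up to a global phase
```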
### Translational property of maximal superposition of Bell basis
Now let us consider the superposition of maximally entangled Bell basis states. We can readily see that applying the Hadamard transformation to two _Bloch_ states will yield the _Wannier_ states. We have
\[\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\Phi^{+}\\ \Phi^{-}\end{array}\right)=\left(\begin{array}{c}\left|0\right\rangle\left|0 \right\rangle\\ \left|1\right\rangle\left|1\right\rangle\end{array}\right) \tag{5}\]
Thus,
\[\frac{1}{\sqrt{2}}\left(\Phi^{+}+\Phi^{-}\right) = \left|0\right\rangle\left|0\right\rangle\] \[\frac{1}{\sqrt{2}}\left(\Phi^{+}-\Phi^{-}\right) = \left|1\right\rangle\left|1\right\rangle \tag{6}\]
The thing to notice is that although the superposition is made up of two maximally entangled Bell basis states, the results are not shift invariant (i.e., a translation of the resulting _Wannier_ state yields a _Wannier_ state located at another _site_, which is orthogonal to the unshifted one). This means that the superposition given above yields the unentangled (product) states in Eq. (6). In what follows, we will see that only the following combinations of entangled basis states yield other entangled basis states, as long as both belong to either the 'even' or the 'odd' space; i.e., by diagrammatic construction, we have,
\[triplet^{\pm}\otimes triplet^{\pm} = triplet^{\pm}\mbox{ or }singlet^{\pm} \tag{7}\] \[triplet^{\pm}\otimes singlet^{\pm} = singlet^{\pm}\mbox{ or }triplet^{\pm}\] (8) \[singlet^{\pm}\otimes singlet^{\pm} = triplet^{\pm}\mbox{ or }singlet^{\pm} \tag{9}\]
Equation (8) is quite interesting because it holds upon a complete expansion of a direct product of two qubit states. These relations can easily be deduced from the physical diagrammatic model; see Fig. 1. For example, if we define the operation as a superposition such as
\[\frac{1}{\sqrt{2}}\left(\Phi^{+}+\Psi^{+}\right) = \frac{1}{\sqrt{2}}\left(\frac{1}{\sqrt{2}}\left(\left|1\right\rangle \left|1\right\rangle+\left|0\right\rangle\left|0\right\rangle\right)+\frac{1}{ \sqrt{2}}\left(\left|0\right\rangle\left|1\right\rangle+\left|1\right\rangle \left|0\right\rangle\right)\right) \tag{10}\] \[= \frac{1}{2}\left[\left(\left|1\right\rangle\left|1\right\rangle+ \left|0\right\rangle\left|0\right\rangle\right)+\left(\left|0\right\rangle \left|1\right\rangle+\left|1\right\rangle\left|0\right\rangle\right)\right]\] \[= \left[\frac{1}{\sqrt{2}}\left(\left|1\right\rangle+\left|0 \right\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(\left|1\right\rangle+\left| 0\right\rangle\right)\right]\]
We see that Eq. (10) is a direct product state of two qubits. We will see in what follows that this is an entangled state and corresponds to Eq. (8) of the \(singlet^{+}\) or \(triplet^{+}\) of the model diagrams depending on the actual linkage.
Similarly, we have,
\[\frac{1}{\sqrt{2}}\left(\Phi^{-}+\Psi^{-}\right) = \frac{1}{\sqrt{2}}\left(\frac{1}{\sqrt{2}}\left(\left|1\right\rangle \left|1\right\rangle-\left|0\right\rangle\left|0\right\rangle\right)+\frac{1}{ \sqrt{2}}\left(\left|0\right\rangle\left|1\right\rangle-\left|1\right\rangle \left|0\right\rangle\right)\right)\] \[= \frac{1}{2}\left[\left(\left|1\right\rangle\left|1\right\rangle- \left|0\right\rangle\left|0\right\rangle\right)+\left(\left|0\right\rangle \left|1\right\rangle-\left|1\right\rangle\left|0\right\rangle\right)\right]\] \[= \left[\frac{1}{\sqrt{2}}\left(\left|1\right\rangle+\left|0\right\rangle \right)\otimes\frac{1}{\sqrt{2}}\left(\left|1\right\rangle-\left|0\right\rangle \right)\right]\]
corresponds to Eq. (8) of the \(singlet\) or \(triplet\) of the model diagrams, depending on the actual linking.
However, the following combinations of 'even' and 'odd' entangled states result in _unentangled_ states, namely,
\[\frac{1}{\sqrt{2}}\left(\Phi^{+}+\Psi^{-}\right) = \frac{1}{\sqrt{2}}\left(\frac{1}{\sqrt{2}}\left(\left|1\right\rangle \left|1\right\rangle+\left|0\right\rangle\left|0\right\rangle\right)+\frac{1}{ \sqrt{2}}\left(\left|0\right\rangle\left|1\right\rangle-\left|1\right\rangle \left|0\right\rangle\right)\right)\] \[= \frac{1}{2}\left[\left(\left|1\right\rangle\left|1\right\rangle+ \left|0\right\rangle\left|0\right\rangle\right)+\left(\left|0\right\rangle \left|1\right\rangle-\left|1\right\rangle\left|0\right\rangle\right)\right]\]
and
\[\frac{1}{\sqrt{2}}\left(\Phi^{-}+\Psi^{+}\right) = \frac{1}{\sqrt{2}}\left(\frac{1}{\sqrt{2}}\left(\left|1\right\rangle \left|1\right\rangle-\left|0\right\rangle\left|0\right\rangle\right)+\frac{1}{ \sqrt{2}}\left(\left|0\right\rangle\left|1\right\rangle+\left|1\right\rangle \left|0\right\rangle\right)\right)\] \[= \frac{1}{2}\left[\left(\left|1\right\rangle\left|1\right\rangle- \left|0\right\rangle\left|0\right\rangle\right)+\left(\left|0\right\rangle \left|1\right\rangle+\left|1\right\rangle\left|0\right\rangle\right)\right]\]
by virtue of the failure to have global sign factors. All these claims are justified through the concept of concurrence, an inherent property of entangled qubits, to be discussed in what follows. Moreover, this feature of failing to have a global sign factor is also reflected in the impossibility of representing these states by our inverter-chain link diagrams.
## Bipartite System
Let the Hilbert spaces of a bipartite system consisting of \(A\) and \(B\) be denoted by \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\), respectively. A bipartite system is a system with Hilbert space equal to the direct product of \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\), i.e.,
\[\mathcal{H}_{AB}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\]
Let the density matrix for the whole system be denoted by \(\rho\). Then the reduced density matrix of a subsystem \(A\) is given by the partial trace,
\[\rho_{A}=Tr_{{}_{B}}\ \rho\]
The entanglement entropy, \(S_{A}\), is the _von Newmann entropy_ of the reduced density matrix, \(\rho_{A}\),
\[S_{A}=-Tr\ \rho_{A}\log\rho_{A}\]
**Example 1**: _Let \(\Omega\) be the number of distinct states in subsystem A. Assume a uniform distribution among states, hence \(\rho_{A}\) has eigenvalues \(\frac{1}{\Omega}\), i.e., \(\rho_{A}\) can be represented by a diagonal \(\Omega\times\Omega\) matrix with identical matrix elements given by \(\frac{1}{\Omega}\). Thus, in taking the trace we can use the eigenvalues of the reduced density matrix operator, \(\rho_{A}\). Therefore, we have_
\[S_{A} = -Tr\frac{1}{\Omega}\log\frac{1}{\Omega}\] \[= -\Omega\left(\frac{1}{\Omega}\log\frac{1}{\Omega}\right)\] \[= -\log\frac{1}{\Omega}\] \[= \log\ \Omega\]
_Upon multiplying by the Boltzmann constant, \(k_{B}\), we obtain_
\[k_{B}S_{A}=k_{B}\log\ \Omega\]
_which is the Boltzmann thermodynamic entropy, based on the ergodic theorem._
**Example 2**: _Two qubit system._
A qubit is simply a quantum bit whose number of distinct eigenstates is 2. We denote these eigenstates as \(\left|0\right\rangle\) and \(\left|1\right\rangle\), i.e., a two-state system. If each subsystem \(A\) or \(B\) is a single qubit, then the Hilbert space of the whole system is spanned by the following 4 direct product states,
\[\left|00\right\rangle,\ \left|01\right\rangle,\ \left|10\right\rangle,\ \left|11\right\rangle\]
Here, the first bit refers to subsystem \(A\) and the second bit refers to subsystem \(B\), where we write
\[\left|ij\right\rangle=\left|i\right\rangle_{A}\ \left|j\right\rangle_{B}\ \epsilon\ \mathcal{H}_{A}\otimes\mathcal{H}_{B}\]
Now, the density matrix of a pure state is,
\[\rho=\left|\psi\right\rangle\left\langle\psi\right|\]
then
\[\rho^{2} = \left|\psi\right\rangle\left\langle\psi\right|\left|\psi \right\rangle\left\langle\psi\right|\] \[= \left|\psi\right\rangle\left\langle\psi\right|\] \[= \rho\]
An operator whose square is equal to itself must have an eigenvalue equal to unity. Let us write the pure state of the two qubits as
\[\left|\psi\right\rangle = \frac{1}{\sqrt{2}}\left(\left|00\right\rangle+\left|11\right\rangle\right)\] \[\left\langle\psi\right| = \frac{1}{\sqrt{2}}\left(\left\langle 00\right|+\left\langle 11 \right|\right)\] \[\left|\psi\right\rangle\left\langle\psi\right| = \frac{1}{2}\left(\left|00\right\rangle+\left|11\right\rangle \right)\left(\left\langle 00\right|+\left\langle 11\right|\right)\]
We have
\[\left\langle\psi\right|\left|\psi\right\rangle = \frac{1}{2}\left(\left\langle 00\right|+\left\langle 11\right| \right)\left(\left|00\right\rangle+\left|11\right\rangle\right)\] \[= \frac{1}{2}\left(\left\langle 00\right|\left|00\right\rangle+ \left\langle 11\right|\left|11\right\rangle\right)\] \[= 1\]
Thus, indeed,
\[\rho^{2} = \left|\psi\right\rangle\left\langle\psi\right|\left|\psi\right\rangle \left\langle\psi\right|\] \[= \left|\psi\right\rangle\left\langle\psi\right|\] \[= \rho\]
We refer to \(\left|\psi\right\rangle\) as a maximally entangled state in the sense that the first qubit is exactly "mapped" to the second qubit. The reduced density matrix for subsystem \(A\) is
\[\rho_{A} = Tr_{B}\ \rho\] \[= \frac{1}{2}\left\langle 0_{B}\right|\left(\left|00\right\rangle+ \left|11\right\rangle\right)\left(\left\langle 00\right|+\left\langle 11 \right|\right)\left|0_{B}\right\rangle\] \[+\frac{1}{2}\left\langle 1_{B}\right|\left(\left|00\right\rangle +\left|11\right\rangle\right)\left(\left\langle 00\right|+\left\langle 11 \right|\right)\left|1_{B}\right\rangle\] \[= \frac{1}{2}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\]
Now clearly
\[\rho_{A}^{2} = \frac{1}{4}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\left(\left|0_{A} \right\rangle\left\langle 0_{A}\right|+\left|1_{A}\right\rangle\left\langle 1_{A} \right|\right)\] \[= \frac{1}{4}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\] \[= \frac{1}{2}\rho_{A}\]
so that \(\rho_{A}\) is not a pure state but mixed, i.e., a mixture of two pure states, \(\left|0_{A}\right\rangle\) and \(\left|1_{A}\right\rangle\). Note that a mixed-state density matrix does not possess _off-diagonal elements_. From the last identity, the eigenvalues of \(\rho_{A}\) are \(\frac{1}{2}\) and, since the subsystem \(A\) is a single qubit with two distinct states, the eigenvalues of \(\rho_{A}\) correspond to \(\frac{1}{\Omega}\) of our first example above. Thus we refer to \(\rho_{A}\) as "uniformly" mixed, often referred to as "_maximally mixed_", with the initial state \(\left|\psi\right\rangle\) as "_maximally entangled_".
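The whole of Example 2 can be verified numerically. The sketch below (an illustration of ours, not from the text) constructs \(\left|\psi\right\rangle\), takes the partial trace over \(B\) by reshaping the \(4\times 4\) density matrix into a rank-4 tensor, and confirms \(\rho_{A}=\frac{1}{2}\mathbb{1}\), \(\rho_{A}^{2}=\frac{1}{2}\rho_{A}\) and \(S_{A}=\log 2\):

```python
import numpy as np

# |psi> = (|00> + |11>)/sqrt(2) in the ordered basis {|00>, |01>, |10>, |11>}
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# partial trace over B: indices (i, j, i', j') with i for A and j for B
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.allclose(rho_A, np.eye(2) / 2))          # maximally mixed
print(np.allclose(rho_A @ rho_A, rho_A / 2))      # rho_A^2 = rho_A / 2
evals = np.linalg.eigvalsh(rho_A)
print(np.isclose(-np.sum(evals * np.log(evals)), np.log(2)))   # S_A = log 2
```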
## 5 Entanglement of Formation of Multi-Partite Qubit Systems
The entanglement entropy of subsystem \(A\) can be calculated using the eigenvalues of the \(2\times 2\) matrix \(\rho_{A}\), which are \(\lambda_{A}=\frac{1}{2}\). Therefore, we have for the entanglement entropy, or entanglement of formation,
\[S_{A} = -Tr\rho_{A}\log\rho_{A} \tag{11}\] \[= -Tr\frac{1}{2}\log\frac{1}{2}\] \[= -2\left(\frac{1}{2}\log\frac{1}{2}\right)\] \[= \log 2\equiv\log 2^{1}\]
where the exponent of the base \(2\), here \(1\), is the number of qubits entangled with system \(B\). This will be made clear in the next example.
**Example 3**: _A four qubit system:_
_If each subsystem \(A\) or \(B\) has two qubits, then the Hilbert space of the whole system is spanned by \(2^{4}=16\) direct product states. The maximally entangled pure state is determined by the following eight diagrams [4], see Fig. 2, and their flipped or translated states,_
_Any combination or superposition of both 'even' triplet and singlet states comprises a maximally entangled state, i.e.,_
\[\left(\begin{array}{c}\Phi_{1}^{+}\\ \Phi_{1}^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1 \\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|0 \right\rangle\left|0\right\rangle\left|0\right\rangle\\ \left|1\right\rangle\left|1\right\rangle\left|1\right\rangle\left|1\right\rangle \end{array}\right)\]
\[\left(\begin{array}{c}\Phi_{8}^{+}\\ \Phi_{8}^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1 \\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|1 \right\rangle\left|0\right\rangle\left|1\right\rangle\\ \left|1\right\rangle\left|0\right\rangle\left|1\right\rangle\left|0\right\rangle \end{array}\right)\]
Figure 2: Two-state eight diagrams for entangled four qubits. The so-called flip operation yields the second state for each of the above diagrams. The entangled basis is constructed by the superposition, via the Hadamard transformation, of each diagram and its corresponding flipped diagram.

_Thus, from the states given above, we have, for example, the maximally entangled state,_
\[\left|\psi\right\rangle = \frac{1}{\sqrt{4}}\left(\left|00,00\right\rangle+\left|01,01\right\rangle +\left|10,10\right\rangle+\left|11,11\right\rangle\right) \tag{12}\] \[= \frac{1}{\sqrt{2}}\left(\Phi_{1}^{+}+\Phi_{8}^{+}\right)\]
_and similarly, we can also form another entangled state,_
\[\left|\psi\right\rangle = \frac{1}{\sqrt{4}}\left(\left|00,00\right\rangle+\left|01,01 \right\rangle-\left|10,10\right\rangle-\left|11,11\right\rangle\right) \tag{13}\] \[= \frac{1}{\sqrt{2}}\left(\Phi_{1}^{-}+\Phi_{8}^{-}\right)\]
_where the first two bits belong to subsystem \(A\) and the second two bits belong to subsystem \(B\). We note that_
\[\left\langle\psi\right|\left|\psi\right\rangle = \frac{1}{4}\left(\left\langle 00,00\right|+\left\langle 01,01 \right|+\left\langle 10,10\right|+\left\langle 11,11\right|\right)\left(\left|00,00 \right\rangle+\left|01,01\right\rangle+\left|10,10\right\rangle+\left|11,11 \right\rangle\right)\] \[= 1\]
_Therefore_
\[\rho^{2} = \left(\left|\psi\right\rangle\left\langle\psi\right|\right)^{2}\] \[= \left|\psi\right\rangle\left\langle\psi\right|=\rho\]
_So \(\left|\psi\right\rangle\) is a pure state._
_In general, the following maximally entangled state corresponds to a chain of entangled basis states, namely,_
\[\left|\psi^{+}\right\rangle=\frac{1}{\sqrt{8}}\left(\Phi_{1}^{+}+\Phi_{2}^{+}+ \Phi_{3}^{+}+\Phi_{4}^{+}+\Phi_{5}^{+}+\Phi_{6}^{+}+\Phi_{7}^{+}+\Phi_{8}^{+}\right)\]
_as well as_
\[\left|\psi^{-}\right\rangle=\frac{1}{\sqrt{8}}\left(\Phi_{1}^{-}+\Phi_{2}^{-}+ \Phi_{3}^{-}+\Phi_{4}^{-}+\Phi_{5}^{-}+\Phi_{6}^{-}+\Phi_{7}^{-}+\Phi_{8}^{-}\right)\]
_of a four qubit system._
_The density matrix operator for the whole \(4\)-qubit system can be written as,_
\[\rho=\frac{1}{\sqrt{2}}\left(\left|\Phi_{1}^{+}+\Phi_{8}^{+}\right\rangle\right)\frac{1}{\sqrt{2}}\left(\left\langle\Phi_{1}^{+}+\Phi_{8}^{+}\right|\right)\]
\[\rho=\frac{1}{4}\left[\left|00,00\right\rangle+\left|01,01\right\rangle+\left| 10,10\right\rangle+\left|11,11\right\rangle\right]\left[\left\langle 00,00 \right|+\left\langle 01,01\right|+\left\langle 10,10\right|+\left\langle 11,11 \right|\right]\]
_The reduced density matrix operator for subsystem \(A\) is again obtained by taking the partial trace with respect to subsystem \(B\)._
\[\rho_{A} = Tr_{B}\ \rho\] \[= \left\langle 00_{B}\right|\rho\left|00_{B}\right\rangle+\left\langle 01_{B}\right|\rho\left|01_{B}\right\rangle+\left\langle 10_{B}\right|\rho\left|10_{B}\right\rangle+\left\langle 11_{B}\right|\rho\left|11_{B}\right\rangle\] \[= \frac{1}{4}\left(\left|00_{A}\right\rangle\left\langle 00_{A}\right|+\left|01_{A}\right\rangle\left\langle 01_{A}\right|+\left|10_{A}\right\rangle\left\langle 10_{A}\right|+\left|11_{A}\right\rangle\left\langle 11_{A}\right|\right)\]
_To determine the eigenvalues for \(\rho_{A}\), we take its square,_
\[\rho_{A}^{2} = \frac{1}{16}\left(\left|00_{A}\right\rangle\left\langle 00_{A}\right|+\left|01_{A}\right\rangle\left\langle 01_{A}\right|+\left|10_{A}\right\rangle\left\langle 10_{A}\right|+\left|11_{A}\right\rangle\left\langle 11_{A}\right|\right)\] \[\times\left(\left|00_{A}\right\rangle\left\langle 00_{A}\right|+\left|01_{A}\right\rangle\left\langle 01_{A}\right|+\left|10_{A}\right\rangle\left\langle 10_{A}\right|+\left|11_{A}\right\rangle\left\langle 11_{A}\right|\right)\] \[= \frac{1}{4}\,\rho_{A}\]
_So we have_
\[\rho_{A}^{2}=\frac{1}{4}\rho_{A}\]
_and the eigenvalues of \(\rho_{A}\) are \(\frac{1}{4}\). We can now calculate the entanglement entropy of subsystem \(A\). We have_
\[S_{A} = -Tr\rho_{A}\log\rho_{A}\] \[= -4\left[\frac{1}{4}\log\frac{1}{4}\right]\] \[= -\log\frac{1}{4}\] \[= \log 2^{2}\]
_The exponent \(2\) corresponds to the number of qubits that are entangled with subsystem \(B\). In general, for a maximally entangled bipartite system \(A\) and \(B\), each having \(k\) qubits, \(S_{A}\) is given by_
\[S_{A}=\log 2^{k}\]
_Now of course, for this bipartite system,_
\[S_{A}=S_{B}\]
_which simply means a complete matching of the configurations of systems \(A\) and \(B\); otherwise some degrees of freedom would be left hanging, unmatched, and could not be entangled._
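The same partial-trace check extends to Example 3. The sketch below (illustrative only, not from the text) builds the state of Eq. (12) with two qubits per party and confirms \(\rho_{A}^{2}=\frac{1}{4}\rho_{A}\) and \(S_{A}=\log 2^{2}\):

```python
import numpy as np

# |psi> = (|00,00> + |01,01> + |10,10> + |11,11>)/2, Eq. (12); A and B hold 2 qubits each
dA = dB = 4
psi = np.zeros(dA * dB)
for k in range(4):                   # |k>_A |k>_B for k = 00, 01, 10, 11
    psi[k * dB + k] = 0.5

rho = np.outer(psi, psi)
rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

print(np.allclose(rho_A @ rho_A, rho_A / 4))      # rho_A^2 = rho_A / 4
evals = np.linalg.eigvalsh(rho_A)
print(np.isclose(-np.sum(evals * np.log(evals)), np.log(2**2)))  # S_A = log 2^2
```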
**Example 4**: _Tripartite system of three qubits:_
_The following diagrams represent the entangled tripartite system of qubits. The entangled basis states are as follows,_
\[\left(\begin{array}{c}\Xi^{+}\\ \Xi^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|1 \right\rangle\left|1\right\rangle\\ \left|1\right\rangle\left|0\right\rangle\left|0\right\rangle\end{array}\right)\]
\[\left(\begin{array}{c}\Theta_{3}^{+}\\ \Theta_{3}^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1 &1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|0 \right\rangle\left|0\right\rangle\\ \left|1\right\rangle\left|1\right\rangle\left|1\right\rangle\end{array}\right)\]
\[\left(\begin{array}{c}\Omega^{+}\\ \Omega^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|1 \right\rangle\left|0\right\rangle\\ \left|1\right\rangle\left|0\right\rangle\left|1\right\rangle\end{array}\right)\]
\[\left(\begin{array}{c}\Gamma^{+}\\ \Gamma^{-}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\left(\begin{array}{c}\left|0\right\rangle\left|0 \right\rangle\left|1\right\rangle\\ \left|1\right\rangle\left|1\right\rangle\left|0\right\rangle\end{array}\right)\]
A chain or superposition of any choice of entangled _even_ basis states also forms a maximally entangled state, e.g.,
\[\left|\Psi^{4}\right\rangle = \frac{1}{\sqrt{4}}\left[\Xi^{+}+\Theta_{3}^{+}+\Omega^{+}+\Gamma^ {+}\right]\] \[\left|\Psi^{3}\right\rangle = \frac{1}{\sqrt{3}}\left[\Xi^{+}+\Theta_{3}^{+}+\Omega^{+}\right]\] \[\left|\Psi^{2}\right\rangle = \frac{1}{\sqrt{2}}\left[\Xi^{+}+\Theta_{3}^{+}\right]\] \[\left|\Psi^{1}\right\rangle = \Xi^{+}\]
are all maximally entangled states with emergent two-state, or qubit, properties.
**Example 5**: _Here, we define the entanglement of formation as the entanglement entropy of unentangling the three-qubit tripartite system into three separate one-qubit systems. It is important to point out that any entangled qubits, irrespective of their number, behave identically to a single qubit, i.e., exactly like a two-state system. Thus, the natural order of disentangling is as follows: First, one disentangles one qubit from the remaining two entangled qubits. The entanglement of formation is equivalent to one qubit. Next, one disentangles the remaining two entangled qubits. This further gives an entanglement of formation of one qubit. Thus, the total entanglement of formation is \(1+1=2\) qubits._
Figure 3: Two-state four diagrams for entangled three qubits of a tripartite system. Flip operations yield the respective second states.
### On the monogamy inequality in multi-partite entanglement
The reasoning we have given above yields exact equality of the monogamy relation, usually given as an _inequality_ in the literature as deduced from statistical analysis, e.g., the one given by Kim [5]. The exact equality relation for the entanglement of formation is shown diagrammatically in Fig. 4.
The exact equality comes about by construction, since on both sides of Fig. 4, two 2-qubit entangled states are being untangled to obtain the entanglement of formation of this multi-partite system. This follows from the knowledge that any multi-partite entangled qubits behave as an emergent qubit.
Similarly, for an entangled 4-partite system, the entanglement of formation is determined schematically by the diagrams of Fig. 5, where the unentangling operations to be done on the left side are itemized as the unentangling of _three_ entangled 2-qubit pairs on the right side of the equality sign.
This sort of diagrammatic analysis of the entropy of entanglement of formation can straightforwardly be employed for all multi-partite qubit systems by a simple counting argument, as is done here.
Figure 4: Figure reproduced from Ref. [5]. In our diagrammatic analysis the right-hand side of the figure yields \(1+1=2\) as the entanglement of formation, being the sum of two two-qubit entanglements. Thus, in our diagrammatic analysis, we obtained exact monogamy equality not inequality.
Figure 5: In our diagrammatic analysis the right-hand side of the figure yields \(1+1+1=3\) as the entanglement of formation, being the sum of three two-qubit entanglements. Thus, in our diagrammatic analysis, we obtained exact equality not inequality.
Although our diagrammatic analysis is based on maximally entangled qubits, we believe the diagrammatic construction also holds for arbitrarily entangled multi-partite qubit systems. For example, for the 18 multi-partite entangled qubits in Fig. 6, the entanglement of formation is schematically depicted by the equality with 17 monogamy pairings on the right-hand side, yielding 17 qubits of entropy of entanglement of formation.
## 6 Expansion of Product States in Bell Basis
Let us consider \(\left|00\right\rangle\), \(\left|01\right\rangle\), \(\left|10\right\rangle\), \(\left|11\right\rangle\) as our direct product basis in expanding \(\left|\psi\right\rangle\),
\[\left|\psi\right\rangle = \alpha_{0}\left|00\right\rangle+\alpha_{1}\left|01\right\rangle+ \alpha_{2}\left|10\right\rangle+\alpha_{3}\left|11\right\rangle\] \[= \sum_{i}\beta_{i}e_{i}\]
where \(e_{i}\) denotes the Bell basis. We have the Bell basis given by
\[e_{1} = \left|\Phi^{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right \rangle\left|0\right\rangle+\left|1\right\rangle\left|1\right\rangle\right)\] \[e_{2} = \left|\Phi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0 \right\rangle\left|0\right\rangle-\left|1\right\rangle\left|1\right\rangle\right)\] \[e_{3} = \left|\Psi^{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0 \right\rangle\left|1\right\rangle+\left|1\right\rangle\left|0\right\rangle\right)\] \[e_{4} = \left|\Psi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0 \right\rangle\left|1\right\rangle-\left|1\right\rangle\left|0\right\rangle\right) \tag{14}\]
Figure 6: In our diagrammatic analysis the right-hand side of the figure yields \(1+1+\cdots+1=17\) qubits as the entanglement of formation, being the sum of 17 two-qubit entanglements of formation. There are 18 multi-party entangled qubits on the left-hand side.
The standard basis can be expressed in terms of the Bell basis, a re-statement of Eqs. (1) and (2), namely,
\[\left|0\right\rangle\left|0\right\rangle = \frac{1}{\sqrt{2}}\left(e_{1}+e_{2}\right)\] \[\left|1\right\rangle\left|1\right\rangle = \frac{1}{\sqrt{2}}\left(e_{1}-e_{2}\right)\] \[\left|0\right\rangle\left|1\right\rangle = \frac{1}{\sqrt{2}}\left(e_{3}+e_{4}\right)\] \[\left|1\right\rangle\left|0\right\rangle = \frac{1}{\sqrt{2}}\left(e_{3}-e_{4}\right)\]
so we can also expand \(\left|\psi\right\rangle\) as
\[\left|\psi\right\rangle = \beta_{1}e_{1}+\beta_{2}e_{2}+\beta_{3}e_{3}+\beta_{4}e_{4}\] \[= \beta_{1}\left(\frac{1}{\sqrt{2}}\left(\left|0\right\rangle \left|0\right\rangle+\left|1\right\rangle\left|1\right\rangle\right)\right)+ \beta_{2}\left(\frac{1}{\sqrt{2}}\left(\left|0\right\rangle\left|0\right\rangle -\left|1\right\rangle\left|1\right\rangle\right)\right)\] \[+\beta_{3}\left(\frac{1}{\sqrt{2}}\left(\left|0\right\rangle \left|1\right\rangle+\left|1\right\rangle\left|0\right\rangle\right)\right)+ \beta_{4}\left(\frac{1}{\sqrt{2}}\left(\left|0\right\rangle\left|1\right\rangle -\left|1\right\rangle\left|0\right\rangle\right)\right)\]
\[\left|\psi\right\rangle = \frac{1}{\sqrt{2}}\left(\beta_{1}+\beta_{2}\right)\left|0\right\rangle \left|0\right\rangle+\frac{1}{\sqrt{2}}\left(\beta_{1}-\beta_{2}\right)\left|1 \right\rangle\left|1\right\rangle\] \[+\frac{1}{\sqrt{2}}\left(\beta_{3}+\beta_{4}\right)\left|0\right\rangle \left|1\right\rangle+\frac{1}{\sqrt{2}}\left(\beta_{3}-\beta_{4}\right)\left|1 \right\rangle\left|0\right\rangle\]
Therefore, equating coefficients, we have the equality,
\[\left|\psi\right\rangle = \alpha_{0}\left|00\right\rangle+\alpha_{1}\left|01\right\rangle +\alpha_{2}\left|10\right\rangle+\alpha_{3}\left|11\right\rangle\] \[= \frac{1}{\sqrt{2}}\left(\beta_{1}+\beta_{2}\right)\left|0\right\rangle \left|0\right\rangle+\frac{1}{\sqrt{2}}\left(\beta_{1}-\beta_{2}\right)\left|1 \right\rangle\left|1\right\rangle\] \[+\frac{1}{\sqrt{2}}\left(\beta_{3}+\beta_{4}\right)\left|0\right\rangle \left|1\right\rangle+\frac{1}{\sqrt{2}}\left(\beta_{3}-\beta_{4}\right)\left|1 \right\rangle\left|0\right\rangle\]
yielding
\[\alpha_{0} = \frac{1}{\sqrt{2}}\left(\beta_{1}+\beta_{2}\right)\] \[\alpha_{1} = \frac{1}{\sqrt{2}}\left(\beta_{3}+\beta_{4}\right)\] \[\alpha_{2} = \frac{1}{\sqrt{2}}\left(\beta_{3}-\beta_{4}\right)\] \[\alpha_{3} = \frac{1}{\sqrt{2}}\left(\beta_{1}-\beta_{2}\right)\]
Then
\[\sum_{0}^{3}\left|\alpha_{i}\right|^{2} = \sum_{1}^{4}\left|\beta_{i}\right|^{2}=1\] \[= \left|\frac{1}{\sqrt{2}}\left(\beta_{1}+\beta_{2}\right)\right|^{2 }+\left|\frac{1}{\sqrt{2}}\left(\beta_{1}-\beta_{2}\right)\right|^{2}+\left| \frac{1}{\sqrt{2}}\left(\beta_{3}+\beta_{4}\right)\right|^{2}+\left|\frac{1}{ \sqrt{2}}\left(\beta_{3}-\beta_{4}\right)\right|^{2}\] \[= \frac{1}{2}\left(2\left(\beta_{1}^{2}+\beta_{2}^{2}\right)\right) +\frac{1}{2}\left(2\left(\beta_{3}^{2}+\beta_{4}^{2}\right)\right)=\sum_{1}^{4 }\left|\beta_{i}\right|^{2}=1\]
Summarizing, the bijective relation between the Bell basis states and the product states given in Eq. (14) reflects the discrete phase-space physics of these two-state systems.
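The bijection can be checked mechanically: the change of basis from the \(\beta_{i}\) to the \(\alpha_{i}\) is an orthogonal matrix, so the norm identity above is automatic. A small numerical illustration (ours, not from the text):

```python
import numpy as np

s = 1 / np.sqrt(2)
# columns: e1 = Phi+, e2 = Phi-, e3 = Psi+, e4 = Psi-,
# rows: the product basis |00>, |01>, |10>, |11>
B = np.array([[s,  s,  0,  0],
              [0,  0,  s,  s],
              [0,  0,  s, -s],
              [s, -s,  0,  0]])

print(np.allclose(B @ B.T, np.eye(4)))     # orthogonal, so norms are preserved
beta = np.random.randn(4)
beta /= np.linalg.norm(beta)               # sum |beta_i|^2 = 1
alpha = B @ beta                           # reproduces the alpha_i(beta_i) relations
print(np.isclose(np.sum(alpha**2), 1.0))   # sum |alpha_i|^2 = 1
```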
## 7 Chiral Degrees of Freedom and Entangled Qubits
Observe that the pseudo-spin variables in the semiconductor Bloch equations are defined by the following expressions,
\[S_{0} = \left(\rho_{cc}+\rho_{vv}\right)\] \[S_{x} = \rho_{vc}+\rho_{cv}\] \[S_{y} = -i\left(\rho_{vc}-\rho_{cv}\right)\] \[S_{z} = \left(\rho_{cc}-\rho_{vv}\right)\]
This maps to
\[e_{0} = \left|\Phi^{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|1 \right\rangle\left|1\right\rangle+\left|0\right\rangle\left|0\right\rangle\right)\] \[e_{x} = \left|\Psi^{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0 \right\rangle\left|1\right\rangle+\left|1\right\rangle\left|0\right\rangle\right)\] \[ie_{y} = i\left|\Psi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0 \right\rangle\left|1\right\rangle-\left|1\right\rangle\left|0\right\rangle\right)\] \[e_{z} = \left|\Phi^{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|1 \right\rangle\left|1\right\rangle-\left|0\right\rangle\left|0\right\rangle\right)\]
We refer to the basis \(\left(e_{0},e_{x},e_{y},e_{z}\right)\) as the pseudo-spin component basis, which differs from the Bell basis by the factor \(i\) multiplying \(\left|\Psi^{-}\right\rangle\). An important observation that follows from this is that the entangling of chiral degrees of freedom creates another chiral degree of freedom, e.g., Eq. (13).
This is a very important observation, which will aid in understanding the _entanglement-induced delocalization_, for example, in topological insulators where entanglement occurs at the edge of the sample. This is treated by the quantum transport approach in Ref. [6].
## 8 The Concurrence Concept and Emergent Qubit
By virtue of our diagrammatic construction, coupled with the discrete phase-space Hadamard transformation, in deriving the entangled basis states, the concurrence concept defined by Wootters [2, 3], as well as the two-state (qubit) properties of entangled multi-partite qubits, naturally coincides with the mathematical description of our physical model of qubit entanglement. In other words, the essence of the quantum description of the entangled qubits in Fig. 1 consists of the superposition of product 'site states' and their corresponding translated 'site states' of all qubits. This means a superposition of the two states of the entangled two qubits of Fig. 1. One observes that irrespective of how many qubits are entangled, the resulting entangled state is an _emergent_ two-state system and therefore behaves just like one qubit with two states, namely, the first state and its _translated_ state [4]. Therefore, its associated entropy is just one, Eq. (11).4
Footnote 4: This is also deduced when concurrence \(C=1\)[3].
The concept of concurrence is basically contained, by construction, in our physical model of qubit entanglement. It is defined by
\[\left|\left\langle\Psi\right|\left|\tilde{\Psi}\right\rangle\right|=C \tag{15}\]
where \(C\) is the quantitative value of the concurrence. This number lies between \(0\) and \(1\), \(0\leq C\leq 1\). Here \(\tilde{\Psi}\) is the corresponding translated state of the qubits of \(\Psi\).
## 9 A Natural Measure of Entanglement
The two properties of any entangled qubits, namely, the concurrence and the _emergent_ qubit behavior, led Wootters [2, 3] to introduce a measure of entanglement, as incorporated in the two formulas,
\[E\left(C\right)=H\left(\frac{1}{2}+\frac{1}{2}\sqrt{1-C^{2}}\right) \tag{16}\]
\[H\left(x\right)=-x\log_{2}x-\left(1-x\right)\log_{2}\left(1-x\right) \tag{17}\]
Equation (16) basically says that if there is complete concurrence, i.e., \(C=1\), then Eq. (17) implies that the system behaves as an emergent qubit or two-state system, as depicted clearly in our diagrams. Thus, for maximally entangled multi-partite qubits, we have \(C=1\),
\[E\left(C\right) = H\left(\frac{1}{2}\right)\] \[H\left(\frac{1}{2}\right) = \frac{1}{2}\log_{2}2+\frac{1}{2}\log_{2}2\] \[= 1\]
affirming that entangled qubits behave as a two-state system, or as an _emergent qubit_, yielding an entropy equal to one.
### Entropic distance in entanglement measure
From the above developments, one can introduce an entanglement entropic distance by the formula
\[\mathcal{E}=\left|S(\rho)-1\right|\,,\qquad S(\rho)=-Tr\left(\rho\log_{2}\rho\right)\]
where \(S(\rho)\) is evaluated using Eqs. (16) and (17). This distance has the range \(0\leq\mathcal{E}\leq 1\). When the concurrence \(C=1\), \(S(\rho)=1=H\left(\frac{1}{2}\right)\), and the entropy-distance from the maximally entangled state is \(\mathcal{E}=0\). When \(C=0\), \(S(\rho)=0=H\left(1\right)\), and the entropy-distance from the maximally entangled state is \(\mathcal{E}=1\).
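These formulas are simple to tabulate. The sketch below (illustrative; base-2 logarithms as in Eq. (17)) evaluates \(E(C)\) and the entropic distance \(\mathcal{E}=|E(C)-1|\) at the two extremes and at an intermediate concurrence:

```python
import numpy as np

def H(x):                              # binary entropy, Eq. (17), base-2 logs
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def E(C):                              # entanglement of formation, Eq. (16)
    return H(0.5 + 0.5 * np.sqrt(1 - C**2))

for C in (0.0, 0.5, 1.0):
    print(C, E(C), abs(E(C) - 1))      # concurrence, entropy, entropic distance
# C = 1 gives E = 1 and distance 0; C = 0 gives E = 0 and distance 1
```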
An example where \(C<1\) occurs is the following singlet state, with less entanglement,
\[\left|\psi\right\rangle=\alpha\left|01\right\rangle+\beta\left|10\right\rangle\]
where
\[\alpha^{2}+\beta^{2}=1\]
Then
\[C = \left|\left\langle\psi\right|\left|\tilde{\psi}\right\rangle\right|=\left(\alpha\left\langle 01\right|+\beta\left\langle 10\right|\right)\left(\alpha\left|10\right\rangle+\beta\left|01\right\rangle\right)\] \[= \alpha\beta+\beta\alpha=2\alpha\beta\leq 1\]
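A direct check of this overlap, with illustrative values of \(\alpha\) and \(\beta\) chosen by us:

```python
import numpy as np

alpha, beta = 0.6, 0.8                        # alpha^2 + beta^2 = 1
psi       = np.array([0, alpha, beta, 0])     # alpha|01> + beta|10>
psi_tilde = np.array([0, beta, alpha, 0])     # translated (flipped) state
C = abs(psi @ psi_tilde)
print(np.isclose(C, 2 * alpha * beta), C)     # C = 2*alpha*beta = 0.96 <= 1
```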
## 10 Mixed States and Pure States
In discussing mixed and pure states, one makes use of density-matrix operators. This is the domain of abstract statistical treatment usually found in the literature, perhaps following the statistical tradition of Bell's theorem. To elucidate the basic physics, we will here avoid abstract statistical treatment and only discuss specific situations and examples.
### Mixed states and mixed entanglements
Consider the maximally mixed state of a triplet system,
\[\hat{\rho}_{W}=\frac{1}{2}\left(\left|00\right\rangle\left\langle 00\right|+ \left|11\right\rangle\left\langle 11\right|\right)\]
From Eqs. (1), we obtain the mixed entanglements given by
\[\hat{\rho}_{B}=\frac{1}{2}\left(\left|\Phi^{+}\Phi^{+}\right\rangle\left\langle \Phi^{+}\Phi^{+}\right|+\left|\Phi^{-}\Phi^{-}\right\rangle\left\langle\Phi^{-} \Phi^{-}\right|\right)\]
Similarly, consider the maximally mixed state of a singlet system,
\[\hat{\rho}_{W}=\frac{1}{2}\left(\left|01\right\rangle\left\langle 01\right|+ \left|10\right\rangle\left\langle 10\right|\right)\]
From Eqs. (2), we obtain the mixed entanglements given by
\[\hat{\rho}_{B}=\frac{1}{2}\left(\left|\Psi^{+}\Psi^{+}\right\rangle\left\langle \Psi^{+}\Psi^{+}\right|+\left|\Psi^{-}\Psi^{-}\right\rangle\left\langle\Psi^{ -}\Psi^{-}\right|\right)\]
### Mixed state from pure state (entangled state)
Consider the example of a bipartite two-qubit system. Consider a pure entangled state,
\[\left|\psi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|00\right\rangle+\left|11 \right\rangle\right).\]
Then the density matrix is,
\[\rho=\frac{1}{2}\left(\left|00\right\rangle+\left|11\right\rangle\right)\left( \left\langle 00\right|+\left\langle 11\right|\right), \tag{18}\]
where the first qubit belongs to party \(A\) and the second qubit belongs to party \(B\).5 We have
Footnote 5: What we mean by party \(A\) and \(B\) is in the general sense since any entangled number of qubits behave as an emergent qubit.
\[\rho^{2}=\rho\]
so that Eq. (18) is a pure state. Again, by tracing over party \(B\), we obtain
\[\rho_{A} = Tr_{B}\ \rho\] \[= \frac{1}{2}\left\langle 0_{B}\right|\left(\left|00\right\rangle +\left|11\right\rangle\right)\left(\left\langle 00\right|+\left\langle 11 \right|\right)\left|0_{B}\right\rangle\] \[+\frac{1}{2}\left\langle 1_{B}\right|\left(\left|00\right\rangle +\left|11\right\rangle\right)\left(\left\langle 00\right|+\left\langle 11 \right|\right)\left|1_{B}\right\rangle\] \[= \frac{1}{2}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\]
Now clearly
\[\rho_{A}^{2} = \frac{1}{4}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\left(\left|0_{A} \right\rangle\left\langle 0_{A}\right|+\left|1_{A}\right\rangle\left\langle 1_{A} \right|\right)\] \[= \frac{1}{4}\left(\left|0_{A}\right\rangle\left\langle 0_{A} \right|+\left|1_{A}\right\rangle\left\langle 1_{A}\right|\right)\] \[= \frac{1}{2}\rho_{A}\]
so that \(\rho_{A}\) is not a pure state but mixed, i.e., a mixture of two pure states, \(\left|0_{A}\right\rangle\) and \(\left|1_{A}\right\rangle\).6 Note that a mixed-state density matrix does not possess _off-diagonal elements_. The tracing operation basically eliminates the contribution of the "_off-diagonal_" terms. From the last identity, the eigenvalues of \(\rho_{A}\) are \(\frac{1}{2}\) and, since the subsystem \(A\) is an emergent qubit, in general with two distinct states, the eigenvalues of \(\rho_{A}\) correspond to \(\frac{1}{\Omega}\) of our first example above. Thus we refer to \(\rho_{A}\) as "uniformly" mixed, often referred to as "_maximally mixed_", with the initial state \(\left|\psi\right\rangle\) as "_maximally entangled_".
Footnote 6: If the parties \(A\) and \(B\) are entangled states, then what we have obtain are also mixed entanglement through the inverse relationships, Eq. (1) and (2).
## 11 Concluding Remarks
The discrete phase-space viewpoint for qubit entanglement has been very fruitful in analytically deriving the entanglement basis states for any number of entangled qubits, as demonstrated by the author in his book [7], as well as in Ref. [4]. Moreover, the inverter-chain mechanical model of the entanglement link has been demonstrated to faithfully implement the discrete phase-space viewpoint [4]. The crucial observation that arises from this inverter-chain link model is that any multi-partite entangled qubit system has the property of an _emergent_ qubit, which has been rigorously justified. This aspect of the model readily leads us to equality relations for the entropy of entanglement of formation, in place of the so-called monogamy inequality of entanglement of formation derived in the literature using statistical arguments [5]. In this paper, mixed states and mixed entanglements are related by the Hadamard transformations. In addition, we show that a mixed state can be extracted from a pure entangled state, where the parties \(A\) and \(B\) need not be single qubits but can each be an entangled multi-partite qubit system, by virtue of the _emergent_ qubit property of entangled qubit systems.
The natural measure of entanglement should be based on the entropy of entanglement of formation, the concurrence, and the entropic distance from a maximally entangled reference.
2304.11186 | Gravitational Wave Flux and Quadrupole Modes from Quasi-Circular Non-Spinning Compact Binaries to the Fourth Post-Newtonian Order | This article provides the details on the technical derivation of the gravitational waveform and total gravitational-wave energy flux of non-spinning compact binary systems to the 4PN (fourth post-Newtonian) order beyond the Einstein quadrupole formula. In particular: (i) we overview the link between the radiative multipole moments measured at infinity and the source moments in the framework of dimensional regularization; (ii) we compute special corrections to the source moments due to "infrared" commutators arising at the 4PN order; (iii) we derive a "post-adiabatic" correction needed to evaluate the tail integral with 2.5PN relative precision; (iv) we discuss the relation between the binary's orbital frequency in quasi-circular orbit and the gravitational-wave frequency measured at infinity; (v) we compute the hereditary effects at the 4PN order, including those coming from the recently derived tails-of-memory; and (vi) we describe the various tests we have performed to ensure the correctness of the results. Those results are collected in an ancillary file. | Luc Blanchet, Guillaume Faye, Quentin Henry, François Larrouturou, David Trestini | 2023-04-21T18:00:08Z | http://arxiv.org/abs/2304.11186v4

Gravitational Wave Flux and Quadrupole Modes from Quasi-Circular Non-Spinning Compact Binaries to the Fourth Post-Newtonian Order
###### Abstract
This article provides the details on the technical derivation of the gravitational waveform and total gravitational-wave energy flux of non-spinning compact binary systems to the 4PN (fourth post-Newtonian) order beyond the Einstein quadrupole formula. In particular: (i) we overview the link between the radiative multipole moments measured at infinity and the source moments in the framework of dimensional regularization; (ii) we compute special corrections to the source moments due to "infrared" commutators arising at the 4PN order; (iii) we derive a "post-adiabatic" correction needed to evaluate the tail integral with 2.5PN relative precision; (iv) we discuss the relation between the binary's orbital frequency in quasi-circular orbit and the gravitational-wave frequency measured at infinity; (v) we compute the hereditary effects at the 4PN order, including those coming from the recently derived tails-of-memory; and (vi) we describe the various tests we have performed to ensure the correctness of the results. Those results are collected in an ancillary file.
Footnote †: preprint: DESY-23-044
## I Introduction
The post-Newtonian (PN) approximation is a paramount tool for computing the generation of gravitational waves (GWs) by isolated sources. This perturbation technique relies on two joint expansions, valid for systems that move slowly and, being self-gravitating, are thus bound by weak gravitational fields. In consequence, it is ideally suited to study the inspiral phase of compact binary systems, made of black holes (BHs) or neutron stars (NSs). While the derivation of lowest-order PN results, starting from the famous Einstein quadrupole formula [1; 2; 3], is fairly straightforward, the systematic expansion to high PN orders entails a number of subtle issues. One is the control of physical non-linear effects in the propagation of GWs from source to detector. Other, more technical difficulties are linked to the appearance of divergent integrals in the calculations, and the subsequent need to implement a proper regularization scheme.
There are two modern ways of carrying out the PN approximation. The first one is to embed it into the more general PN-MPM framework [4; 5; 6; 7; 8], consisting of a multipolar and post-Minkowskian (MPM) expansion for the field exterior to the isolated source, subsequently matched to the inner PN field of the source. This method achieved the completion of the gravitational waveform to 3.5PN order for non-spinning objects [9; 10; 11; 12; 13; 14; 15; 16; 17; 18], _i.e._, up to the \((v/c)^{7}\) correction beyond the Einstein quadrupole formula. A variant of the PN-MPM framework is the DIRE formalism, developed to 2PN order [19]. The second way relies on an effective field theory (EFT) [20; 21], extracting the radiation sector from a derivative expansion at the level of the action [22]. This more recent framework has reached the 2PN precision [23]. We refer the interested reader to the reviews [24; 25; 26; 27] for more details on the PN scheme.
The main output of the PN calculation is the observable waveform (phase and amplitude modes) as an expansion series in terms of the PN parameter \(x=\left(\frac{Gm\Omega}{c^{3}}\right)^{2/3}\), where \(\Omega\) is half of the GW frequency. This is obviously essential for the data analysis of GW detectors [28; 29], notably for the current LIGO-Virgo-Kagra network [30; 31; 32], but also for future generations of detectors, such as the Laser Interferometer Space Antenna (LISA) [33] or the
Einstein Telescope (ET) [34]. In addition, the PN results are of great interest for numerical relativity, as they are naturally used for comparisons with numerical results (see _e.g._[35; 36; 37; 38]), as well as for the calibration of initial data (see _e.g._[39; 40; 41; 42]). They also play a crucial role for comparisons with gravitational self-force (GSF) results, notably second-order ones [43; 44; 45].
On the other hand, the PN framework is essential when it comes to experimentally testing the theory of gravitation. By drawing precise predictions and confronting them against observations, it allows performing "agnostic" tests of general relativity (GR) in the Solar system, for binary pulsars and even for transiting exoplanets [46]. With GWs, the idea is to treat each coefficient in the PN expansion of the GW phase as an independent parameter, and fit it against GW data [47; 48]. This method has been applied to data collected from LIGO-Virgo detectors, and has already permitted probing the non-linear structure of GR by confirming the signature of GW tails to the 1.5PN order [49; 50; 51]. This _modus operandi_ is very promising, notably regarding the possibility of multi-band detections, combining LISA and detectors on ground [52]. In the same spirit, the prediction for the tidal effects affecting the binary NSs [53; 54; 55; 56; 57; 58; 59; 60], which first contribute to the waveform at the 5PN order (in the spin-less case), have been used at leading order to put a constraint on the equation of state of matter composing neutron stars [61], and could be used to distinguish usual BHs from "exotic" compact objects with ET. In addition to the agnostic tests, the PN framework is also used to perform "educated" tests, by computing observable quantities in specific alternative theories. It has been developed, for instance, in the context of massless scalar field theories [62; 63; 64; 65], and used to make predictions concerning the gravitational radiation [66; 67; 68]. Similar studies have been conducted for various alternative theories [69; 70; 71; 72; 73].
This paper presents the last step of the completion of the gravitational waveform for non-spinning, compact binary systems evolving with 4PN precision, in the case of quasi-circular orbits (and in GR). The first step was the computation of the equations of motion to 4PN order, from which one deduces in particular the binary's gravitational binding energy. The equations of motion were obtained by several groups using different methods: (i) the Arnowitt-Deser-Misner (ADM) Hamiltonian formalism [74; 75; 76; 77] led to complete results except for one "ambiguity" parameter, which was fixed by resorting to a comparison with gravitational self-force results [78]; (ii) the Fokker Lagrangian formalism in harmonic coordinates [79; 80; 81; 82] yielded for the first time a complete result without ambiguity parameter; and (iii) the effective field theory (EFT) approach [83; 84; 85; 86; 87; 88; 89; 90] recovered the complete and unambiguous result. The second step was the computation and proper regularization, using the PN-MPM formalism, of the source mass quadrupole moment to 4PN [91; 92; 93], the current quadrupole to 3PN [94] and the mass octupole to the same order [95]. The next step, relying on previous works as well, was the control of the various non-linear interactions between moments occurring through the 4PN order [16; 96; 97; 98; 99].
## II Non-linear effects in the GW propagation
In this work,1 we compute the GW energy flux of compact binaries on generic orbits and the dominant GW mode \((\ell,\mathrm{m})=(2,2)\) to the 4PN order, as well as the energy flux on quasi-circular orbits to the 4.5PN order. Let us recall that the general multipole decomposition of the flux reads [104]
Footnote 1: The conventions employed throughout are as follows: we work with a mostly plus signature; Greek letters denote spacetime indices and Latin letters denote purely spatial indices; we use the multi-index notations, with symmetric and trace-free (STF) mass and current moments \(\mathrm{M}_{L}\equiv\mathrm{M}_{i_{1}i_{2}\ldots i_{\ell}}\) and \(\mathrm{S}_{L}\equiv\mathrm{S}_{i_{1}i_{2}\ldots i_{\ell}}\); a superscript \((n)\) indicates \(n\) time derivatives; hats and angular brackets denote a symmetric trace-free (STF) projection, for instance \(\hat{x}_{L}=x_{\langle L\rangle}=\mathrm{STF}[x_{L}]\) with \(x_{L}=x_{i_{1}}\cdots x_{i_{\ell}}\); the d’Alembertian operator is defined with respect to the flat Minkowski metric, \(\Box\equiv\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}=\Delta-c^{-2}\partial_{t}^{2}\); symmetrization and antisymmetrization are weighted, _e.g._\(A_{(ij)}=(A_{ij}+A_{ji})/2\) and \(A_{[ij]}=(A_{ij}-A_{ji})/2\); the usual Levi-Civita symbol is denoted \(\epsilon_{ijk}\) with \(\epsilon_{123}=1\); finally, as usual, we dub “nPN” a quantity of order \(\mathcal{O}(c^{-2n})\).
\[\mathcal{F}=\sum_{\ell\geqslant 2}\frac{G}{c^{2\ell+1}}\bigg{[}a_{\ell}\, \mathrm{U}_{L}^{(1)}\mathrm{U}_{L}^{(1)}+\frac{b_{\ell}}{c^{2}}\,\mathrm{V}_{ L}^{(1)}\mathrm{V}_{L}^{(1)}\bigg{]}\,, \tag{1}\]
where \(\mathrm{U}_{L}\) and \(\mathrm{V}_{L}\) (for \(\ell\geqslant 2\)) respectively denote the mass and current radiative multipole moments (which are STF in their indices), and the numerical coefficients are given by
\[a_{\ell}=\frac{(\ell+1)(\ell+2)}{(\ell-1)\ell\,\ell!\;(2\ell+1)!!}\qquad\text{and}\qquad b_{\ell}=\frac{4\ell(\ell+2)}{(\ell-1)\,(\ell+1)!\,(2\ell+1)!!}\,. \tag{2}\]
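As a cross-check of Eq. (2), the coefficients can be evaluated exactly: \(a_{2}=\frac{1}{5}\) reproduces the Einstein quadrupole formula, while \(b_{2}=\frac{16}{45}\) and \(a_{3}=\frac{1}{189}\) are the familiar current-quadrupole and mass-octupole flux coefficients. A minimal sketch (ours, not from the paper):

```python
from fractions import Fraction
from math import factorial

def dfact(n):                 # double factorial n!!
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

def a(l):                     # mass-type coefficient in Eq. (2)
    return Fraction((l + 1) * (l + 2), (l - 1) * l * factorial(l) * dfact(2 * l + 1))

def b(l):                     # current-type coefficient in Eq. (2)
    return Fraction(4 * l * (l + 2), (l - 1) * factorial(l + 1) * dfact(2 * l + 1))

print(a(2), b(2), a(3))       # 1/5, 16/45, 1/189
```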
The asymptotic waveform, to order \(1/R\) in the distance to the source, when expressed in a (Bondi type) radiative coordinate system \((T,X^{i})\), with \(\mathbf{X}\equiv R\,\mathbf{N}\), and written as a perturbation to the ordinary metric \(g_{ab}\), reads [104]
\[h_{ab}^{\mathrm{TT}}=\frac{4G}{c^{2}R}\perp_{abij}(\mathbf{N})\sum_{\ell\geqslant 2 }\frac{1}{c^{\ell}\,\ell!}\bigg{[}N_{L-2}\,\mathrm{U}_{ijL-2}(u)-\frac{2\ell} {c(\ell+1)}N_{kL-2}\epsilon_{kl(i}\mathrm{V}_{j)lL-2}(u)\bigg{]}+\mathcal{O} \left(\frac{1}{R^{2}}\right)\,, \tag{3}\]
where \(\perp_{abij}(\mathbf{N})\) is the usual transverse and traceless (TT) projector and the multipole moments are defined at the retarded time \(u\equiv T-R/c\). The GW amplitude \((\ell,\mathrm{m})\) modes can be read off the radiative moments \(\mathrm{U}_{L}\) and \(\mathrm{V}_{L}\), and for instance the dominant mode \((2,2)\) can be computed directly from the mass quadrupole moment \(\mathrm{U}_{ij}\) (in the case of planar, non precessing, orbits; see [16; 17; 105]). Therefore, the knowledge of \(\mathrm{U}_{L}\) and \(\mathrm{V}_{L}\) at the relevant PN order will lead to the desired results.
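For concreteness, the TT projector appearing in Eq. (3) has the standard explicit form \(\perp_{abij}=P_{ai}P_{bj}-\frac{1}{2}P_{ab}P_{ij}\), with \(P_{ij}=\delta_{ij}-N_{i}N_{j}\) (this expression is standard but not spelled out in the text). A short numerical check of transversality and tracelessness:

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.normal(size=3)
N /= np.linalg.norm(N)                           # unit direction to the observer
P = np.eye(3) - np.outer(N, N)                   # transverse projector

# TT projector: perp_abij = P_ai P_bj - (1/2) P_ab P_ij
perp = np.einsum('ai,bj->abij', P, P) - 0.5 * np.einsum('ab,ij->abij', P, P)

h = rng.normal(size=(3, 3))
h = h + h.T                                      # arbitrary symmetric 3x3 "waveform"
hTT = np.einsum('abij,ij->ab', perp, h)

print(np.allclose(hTT @ N, 0))                   # transverse: hTT_ab N_b = 0
print(np.isclose(np.trace(hTT), 0))              # traceless
```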
To describe the non-linear effects in the GW propagation from the source to the observer, we connect the radiative moments to the so-called canonical moments \(\mathrm{M}_{L}\) and \(\mathrm{S}_{L}\). In turn, the canonical moments are expressed in terms of source moments \(\mathrm{I}_{L}\) and \(\mathrm{J}_{L}\), as well as "gauge" moments, \(\mathrm{W}_{L}\), \(\mathrm{X}_{L}\), \(\mathrm{Y}_{L}\) and \(\mathrm{Z}_{L}\). The source and gauge moments describe in our formalism the physics of the PN source. This section is thus devoted to the relations between the radiative moments and the source ones, at the required PN order. We refer to [8] for a more detailed review and physical discussions about those constructions.
### Radiative moments versus canonical moments
#### ii.1.1 Radiative moments entering the flux at 4PN order
In order to derive the energy flux (1) at 4PN order beyond the leading quadrupolar order, the obvious first input is the radiative quadrupole moment itself, \(\mathrm{U}_{ij}\), to 4PN order. Recalling that, at leading order, the radiative moments \(\mathrm{U}_{L}\) and \(\mathrm{V}_{L}\) reduce to the \(\ell\)-th time derivatives (with respect to the retarded time \(u\) of the radiative coordinates) of the canonical moments \(\mathrm{M}_{L}\) and \(\mathrm{S}_{L}\), we straightforwardly write
\[\mathrm{U}_{ij}=\mathrm{M}_{ij}^{(2)}+\mathrm{U}_{ij}^{1.5\mathrm{PN}}+\mathrm{ U}_{ij}^{2.5\mathrm{PN}}+\mathrm{U}_{ij}^{3\mathrm{PN}}+\mathrm{U}_{ij}^{3.5 \mathrm{PN}}+\mathrm{U}_{ij}^{4\mathrm{PN}}+\mathcal{O}\left(\frac{1}{c^{9}} \right)\,, \tag{4}\]
with small PN corrections up to 4PN, as indicated. The leading correction at 1.5PN is due to the quadratic interaction between the static ADM mass M and the mass quadrupole moment \(\mathrm{M}_{ij}\), denoted "\(\mathrm{M}\times\mathrm{M}_{ij}\)" and called the "tail". It is well-known and reads explicitly [7]
\[\mathrm{U}_{ij}^{1.5\mathrm{PN}}=\frac{2G\mathrm{M}}{c^{3}}\int_{0}^{+\infty} \mathrm{d}\tau\,\mathrm{M}_{ij}^{(4)}(u-\tau)\bigg{[}\ln\left(\frac{c\tau}{2b_{ 0}}\right)+\frac{11}{12}\bigg{]}\,. \tag{5}\]
Here the length scale \(b_{0}\) is an arbitrary constant linked with the choice of the origin of time of the asymptotic radiative coordinates, with respect to the harmonic coordinates covering the source's near zone. At the next 2.5PN order, there arises the non-local interaction involving two quadrupole moments \(\mathrm{M}_{ij}\times\mathrm{M}_{kl}\), called the (displacement) "memory", together with associated instantaneous quadrupole-quadrupole terms [106], and an instantaneous interaction \(\mathrm{M}_{ij}\times\mathrm{S}_{k}\) sometimes dubbed "failed tail" [107]:
\[\mathrm{U}_{ij}^{2.5\mathrm{PN}}=\frac{G}{c^{5}}\left\{-\frac{2}{7}\int_{0}^{+\infty}\!\mathrm{d}\tau\Big{[}\mathrm{M}_{a\langle i}^{(3)}\mathrm{M}_{j\rangle a}^{(3)}\Big{]}(u-\tau)+\frac{1}{7}\,\mathrm{M}_{a\langle i}^{(5)}\mathrm{M}_{j\rangle a}-\frac{5}{7}\,\mathrm{M}_{a\langle i}^{(4)}\mathrm{M}_{j\rangle a}^{(1)}-\frac{2}{7}\,\mathrm{M}_{a\langle i}^{(3)}\mathrm{M}_{j\rangle a}^{(2)}+\frac{1}{3}\epsilon_{ab\langle i}\mathrm{M}_{j\rangle a}^{(4)}\mathrm{S}_{b}\right\}\,. \tag{2.6}\]
At the 3PN order appears the first cubic interaction, consisting of the interplay between two ADM masses and the quadrupole moment, _i.e._, \(\mathrm{M}\times\mathrm{M}\times\mathrm{M}_{ij}\), rightly called "tail-of-tail" and given by [95; 10]
\[\mathrm{U}_{ij}^{3\mathrm{PN}}=\frac{2G^{2}\mathrm{M}^{2}}{c^{6}}\!\!\int_{0}^ {+\infty}\!\mathrm{d}\tau\,\mathrm{M}_{ij}^{(5)}(u-\tau)\bigg{[}\mathrm{ln}^{ 2}\left(\frac{c\tau}{2b_{0}}\right)+\frac{11}{6}\ln\left(\frac{c\tau}{2b_{0}} \right)-\frac{107}{105}\ln\left(\frac{c\tau}{2r_{0}}\right)+\frac{124627}{44100 }\bigg{]}\,\,. \tag{2.7}\]
Note the appearance here of a new constant length scale \(r_{0}\), which should be distinguished from the previously introduced scale \(b_{0}\). The scale \(r_{0}\) is a running scale in the sense of renormalization group theory [108; 109; 21]. The coefficient in front of the \(\ln r_{0}\) term in (2.7) is exactly the \(\beta\)-function coefficient associated to the renormalization of the mass quadrupole moment, say \(\beta_{2}=-\frac{214}{105}\). In the present formalism, this scale originates from the Hadamard regularization scheme. The next-order 3.5PN term has a structure similar to the 2.5PN one, _i.e._ with some memory type integrals and instantaneous terms. The interactions between moments, still of quadratic nature, are however more complicated:
\[\mathrm{U}_{ij}^{3.5\mathrm{PN}}=\frac{G}{c^{7}}\Bigg{\{} \int_{0}^{+\infty}\!\mathrm{d}\tau\bigg{[}-\frac{5}{756}\mathrm{M }_{ab}^{(4)}\mathrm{M}_{ijab}^{(4)}-\frac{32}{63}\mathrm{S}_{a\langle i}^{(3)} \mathrm{S}_{j\rangle a}^{(3)}\bigg{]}(u-\tau)\] \[-\frac{1}{432}\mathrm{M}_{ab}\mathrm{M}_{ijab}^{(7)}+\frac{1}{432 }\mathrm{M}_{ab}^{(1)}\mathrm{M}_{ijab}^{(6)}-\frac{5}{756}\mathrm{M}_{ab}^{ (2)}\mathrm{M}_{ijab}^{(5)}+\frac{19}{648}\mathrm{M}_{ab}^{(3)}\mathrm{M}_{ ijab}^{(4)}+\frac{1957}{3024}\mathrm{M}_{ab}^{(4)}\mathrm{M}_{ijab}^{(3)}\] \[+\frac{1685}{1008}\mathrm{M}_{ab}^{(5)}\mathrm{M}_{ijab}^{(2)}+ \frac{41}{28}\mathrm{M}_{ab}^{(6)}\mathrm{M}_{ijab}^{(1)}+\frac{91}{216} \mathrm{M}_{ab}^{(7)}\mathrm{M}_{ijab}-\frac{5}{252}\mathrm{M}_{ab\langle i} \mathrm{M}_{j\rangle ab}^{(7)}+\frac{5}{189}\mathrm{M}_{ab\langle i}^{(1)} \mathrm{M}_{j\rangle ab}^{(6)}\] \[+\frac{5}{126}\mathrm{M}_{ab\langle i}^{(2)}\mathrm{M}_{j\rangle ab}^{(5)}+ \frac{5}{2268}\mathrm{M}_{ab\langle i}^{(3)}\mathrm{M}_{j\rangle ab}^{(4)}+ \frac{5}{42}\mathrm{S}_{a\langle i}\mathrm{S}_{ija}^{(5)}+\frac{80}{63}\mathrm{ S}_{a\langle i}\mathrm{S}_{j\rangle a}^{(5)}+\frac{16}{63}\mathrm{S}_{a\langle i}^{(1)} \mathrm{S}_{j\rangle a}^{(4)}-\frac{64}{63}\mathrm{S}_{a\langle i}^{(2)} \mathrm{S}_{j\rangle a}^{(3)}\] \[+\epsilon_{ac(i}\bigg{(}\int_{0}^{+\infty}\!\mathrm{d}\tau\left[ \frac{5}{42}\mathrm{S}_{j\rangle cb}^{(4)}\mathrm{M}_{ab}^{(3)}-\frac{20}{189 }\mathrm{M}_{j\rangle cb}^{(4)}\mathrm{S}_{ab}^{(3)}\bigg{]}(u-\tau)\] \[+\frac{1}{168}\mathrm{S}_{j\rangle bc}^{(6)}\mathrm{M}_{ab}+\frac {1}{24}\mathrm{S}_{j\rangle bc}^{(5)}\mathrm{M}_{ab}^{(1)}+\frac{1}{28}\mathrm{ S}_{j\rangle bc}^{(4)}\mathrm{M}_{ab}^{(2)}-\frac{1}{6}\mathrm{S}_{j\rangle bc}^{(3)} \mathrm{M}_{ab}^{(3)}+\frac{3}{56}\mathrm{S}_{j\rangle bc}^{(2)}\mathrm{M}_{ab} ^{(4)}\] \[+\frac{187}{168}\mathrm{S}_{j\rangle bc}^{(1)}\mathrm{M}_{ab}^{( 5)}+\frac{65}{84}\mathrm{S}_{j\rangle bc}\mathrm{M}_{ab}^{(6)}+\frac{1}{189} \mathrm{M}_{j\rangle bc}^{(6)}\mathrm{S}_{ab}-\frac{1}{189}\mathrm{M}_{j\rangle bc }^{(5)}\mathrm{S}_{ab}^{(1)}\] \[+\frac{10}{189}\mathrm{M}_{j\rangle bc}^{(4)}\mathrm{S}_{ab}^{(2)}+ \frac{32}{189}\mathrm{M}_{j\rangle bc}^{(3)}\mathrm{S}_{ab}^{(3)}+\frac{65}{189 }\mathrm{M}_{j\rangle bc}^{(2)}\mathrm{S}_{ab}^{(4)}-\frac{5}{189}\mathrm{M}_{ j\rangle bc}^{(1)}\mathrm{S}_{ab}^{(5)}-\frac{10}{63}\mathrm{M}_{j\rangle bc} \mathrm{S}_{ab}^{(6)}\Bigg{)}\Bigg{\}}\,. \tag{2.8}\]
At the 4PN order appears the "tail-of-memory", which is a cubic interaction involving two time-varying quadrupole moments and the mass, _i.e._\(\mathrm{M}\times\mathrm{M}_{ij}\times\mathrm{M}_{kl}\). It also contains a double integration over the two quadrupole moments; see the first line of (2.9). The tail-of-memory comes with many tail-looking interactions with only one integration over the quadrupole moments. Similarly to the fact that the memory came with the failed tail [see (2.6)], the tail-of-memory comes along with a cubic interaction involving the constant angular momentum, _i.e._\(\mathrm{M}\times\mathrm{S}_{i}\times\mathrm{M}_{jk}\), see the last line of (2.9). We have [99]
\[\mathrm{U}_{ij}^{4\mathrm{PN}}=\,\frac{8G^{2}\mathrm{M}}{7c^{8}} \Bigg{\{} \int_{0}^{+\infty}\!\mathrm{d}\rho\,\mathrm{M}_{a\langle i}^{(4)} (u-\rho)\int_{0}^{+\infty}\!\mathrm{d}\tau\,\mathrm{M}_{j\rangle a}^{(4)}(u- \rho-\tau)\left[\ln\left(\frac{c\tau}{2r_{0}}\right)-\frac{1613}{270}\right]\] \[-\frac{5}{2}\int_{0}^{+\infty}\!\mathrm{d}\tau\,\left[ \mathrm{M}_{a\langle i}^{(3)}\mathrm{M}_{j\rangle a}^{(4)}\right](u-\tau) \left[\ln\left(\frac{c\tau}{2r_{0}}\right)+\frac{3}{2}\ln\left(\frac{c\tau}{2b_{0 }}\right)\right]\] \[-3\int_{0}^{+\infty}\!\mathrm{d}\tau\,\left[\mathrm{M}_{a\langle i }^{(2)}\mathrm{M}_{j\rangle a}^{(5)}\right](u-\tau)\left[\ln\left(\frac{c\tau}{2r _{0}}\right)+\frac{11}{12}\ln\left(\frac{c\tau}{2b_{0}}\right)\right]\] \[-\frac{5}{2}\int_{0}^{+\infty}\!\mathrm{d}\tau\,\left[\mathrm{M}_{a \langle i}^{(1)}\mathrm{M}_{j\rangle a}^{(6)}\right](u-\tau)\left[\ln\left(\frac{c \tau}{2r_{0}}\right)+\frac{3}{10}\ln\left(\frac{c\tau}{2b_{0}}\right)\right]\]
\[-\int_{0}^{+\infty}\!{\rm d}\tau\,\left[{\rm M}_{a_{(}i}{\rm M}_{j)a}^ {(7)}\right](u-\tau)\left[\ln\left(\frac{c\tau}{2r_{0}}\right)-\frac{1}{4}\ln \left(\frac{c\tau}{2b_{0}}\right)\right]\] \[-2{\rm M}_{a_{(}i}^{(2)}\int_{0}^{+\infty}\!{\rm d}\tau\,{\rm M}_ {j)a}^{(5)}(u-\tau)\left[\ln\left(\frac{c\tau}{2r_{0}}\right)+\frac{27521}{504 0}\right]\] \[-\frac{5}{2}\,{\rm M}_{a_{(}i}^{(1)}\int_{0}^{+\infty}\!{\rm d} \tau\,{\rm M}_{j)a}^{(6)}(u-\tau)\left[\ln\left(\frac{c\tau}{2r_{0}}\right)+ \frac{15511}{3150}\right]\] \[+\frac{1}{2}\,{\rm M}_{a_{(}i}\int_{0}^{+\infty}\!{\rm d}\tau\,{ \rm M}_{j)a}^{(7)}(u-\tau)\left[\ln\left(\frac{c\tau}{2r_{0}}\right)-\frac{611 3}{756}\right]\Bigg{\}}\] \[-\frac{2G^{2}{\rm M}}{3c^{8}}\,{\rm S}_{a}\epsilon_{ab\langle i} \int_{0}^{+\infty}\!{\rm d}\tau\,{\rm M}_{j\rangle b}^{(6)}(u-\tau)\left[\ln \left(\frac{c\tau}{2b_{0}}\right)+2\ln\left(\frac{c\tau}{2r_{0}}\right)+\frac{ 1223}{1890}\right]\,. \tag{9}\]
Note that the "genuine" tail-of-memory given by the first term of (2.9) can be retrieved by replacing the canonical moment \(\mathrm{M}_{ij}\) in the expression of the memory term of (2.6) by the radiative moment itself, taking into account the dominant tail effect, _i.e._
\[{\rm M}_{ij}^{(2)}\longrightarrow{\rm U}_{ij}={\rm M}_{ij}^{(2)}+{\rm U}_{ij} ^{1.5{\rm PN}}\,, \tag{10}\]
along with a reexpansion at cubic order and an integration by parts. This agrees with the general prediction for memory effects computed using the radiative multipole moments defined at future null infinity, see [110; 111].
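For quasi-circular motion the moments entering the tail integrals above are oscillatory, and their evaluation reduces to distributional Fourier integrals such as \(\int_{0}^{+\infty}\mathrm{d}\tau\,\ln\tau\,\cos(\omega\tau)=-\frac{\pi}{2\omega}\). A numerical sketch of this standard result with an \(e^{-\epsilon\tau}\) regulator (the frequency and regulator values are arbitrary choices of ours; this is not code from the paper):

```python
import numpy as np
from scipy.integrate import quad

omega, eps = 2.0, 0.05                      # arbitrary frequency and regulator

num, _ = quad(lambda t: np.exp(-eps * t) * np.log(t) * np.cos(omega * t),
              0, 60 / eps, limit=4000)      # e^{-60} makes the cutoff harmless

z = eps - 1j * omega                         # closed form: Re[(-gamma_E - ln z)/z]
exact = ((-np.euler_gamma - np.log(z)) / z).real

print(num, exact)                            # both tend to -pi/(2*omega) as eps -> 0
```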
Besides the mass quadrupole moment, the expression of the flux (1) at the 4PN order requires the knowledge of moments of higher multipolarity, but evaluated at lower PN orders. At the next multipolar order, we thus need the mass octupole \({\rm U}_{ijk}\) and current quadrupole \({\rm V}_{ij}\) moments with a 3PN precision:
\[{\rm U}_{ijk}={\rm M}_{ijk}^{(3)}+{\rm U}_{ijk}^{1.5{\rm PN}}+{\rm U}_{ijk} ^{2.5{\rm PN}}+{\rm U}_{ijk}^{3{\rm PN}}+{\cal O}\left(\frac{1}{c^{7}}\right)\,, \tag{11a}\] \[{\rm V}_{ij}={\rm S}_{ij}^{(2)}+{\rm V}_{ij}^{1.5{\rm PN}}+{\rm V}_{ij} ^{2.5{\rm PN}}+{\rm V}_{ij}^{3{\rm PN}}+{\cal O}\left(\frac{1}{c^{7}}\right)\,. \tag{11b}\]
Reporting the results of [95], the above contributions read
\[{\rm U}_{ijk}^{1.5{\rm PN}}= \frac{2G{\rm M}}{c^{3}}\int_{0}^{+\infty}\!{\rm d}\tau\,{\rm M}_{ijk}^{(5)}(u-\tau)\bigg{[}\ln\left(\frac{c\tau}{2b_{0}}\right)+\frac{97}{60}\bigg{]}\,, \tag{12a}\] \[{\rm U}_{ijk}^{2.5{\rm PN}}=\frac{G}{c^{5}}\Bigg{\{}\int_{0}^{+\infty}\!{\rm d}\tau\left[-\frac{1}{3}{\rm M}_{a\langle i}^{(3)}{\rm M}_{jk\rangle a}^{(4)}-\frac{4}{5}\epsilon_{ab\langle i}{\rm M}_{j\underline{a}}^{(3)}{\rm S}_{k\rangle b}^{(3)}\right](u-\tau)\] \[\qquad\qquad\qquad+\frac{1}{4}{\rm M}_{a\langle i}{\rm M}_{jk\rangle a}^{(6)}+\frac{1}{4}{\rm M}_{a\langle i}^{(1)}{\rm M}_{jk\rangle a}^{(5)}+\frac{1}{4}{\rm M}_{a\langle i}^{(2)}{\rm M}_{jk\rangle a}^{(4)}-\frac{4}{3}{\rm M}_{a\langle i}^{(3)}{\rm M}_{jk\rangle a}^{(3)}\] \[\qquad\qquad\qquad-\frac{9}{4}{\rm M}_{a\langle i}^{(4)}{\rm M}_{jk\rangle a}^{(2)}-\frac{3}{4}{\rm M}_{a\langle i}^{(5)}{\rm M}_{jk\rangle a}^{(1)}+\frac{1}{12}{\rm M}_{a\langle i}^{(6)}{\rm M}_{jk\rangle a}+\frac{12}{5}{\rm S}_{\langle i}{\rm S}_{jk\rangle}^{(4)}\] \[\qquad\qquad\qquad\qquad+\epsilon_{ab\langle i}\bigg{[}\frac{9}{5}{\rm M}_{j\underline{a}}{\rm S}_{k\rangle b}^{(5)}+\frac{27}{5}{\rm M}_{j\underline{a}}^{(1)}{\rm S}_{k\rangle b}^{(4)}+\frac{8}{5}{\rm M}_{j\underline{a}}^{(2)}{\rm S}_{k\rangle b}^{(3)}+\frac{12}{5}{\rm M}_{j\underline{a}}^{(3)}{\rm S}_{k\rangle b}^{(2)}\] \[\qquad\qquad\qquad\qquad\qquad+\frac{3}{5}{\rm M}_{j\underline{a}}^{(4)}{\rm S}_{k\rangle b}^{(1)}+\frac{1}{5}{\rm M}_{j\underline{a}}^{(5)}{\rm S}_{k\rangle b}+\frac{9}{20}{\rm M}_{j\underline{a}}^{(5)}{\rm S}_{k\rangle b}\bigg{]}\Bigg{\}}\,, \tag{12b}\] \[{\rm U}_{ijk}^{3{\rm PN}}=\frac{2G^{2}{\rm M}^{2}}{c^{6}}\!\int_{0}^{+\infty}\!\!\!{\rm d}\tau\,{\rm M}_{ijk}^{(6)}(u-\tau)\bigg{[}\ln^{2}\left(\frac{c\tau}{2b_{0}}\right)+\frac{97}{30}\ln\left(\frac{c\tau}{2b_{0}}\right)-\frac{13}{21}\ln\left(\frac{c\tau}{2r_{0}}\right)+\frac{13283}{8820}\bigg{]}\,\,, \tag{12c}\]
where the underlined indices within angled brackets are excluded from the STF operation, and
\[{\rm V}_{ij}^{1.5{\rm PN}}= \frac{2G{\rm M}}{c^{3}}\int_{0}^{+\infty}\!{\rm d}\tau\,{\rm S}_{ij} ^{(4)}(u-\tau)\bigg{[}\ln\left(\frac{c\tau}{2b_{0}}\right)+\frac{7}{6}\bigg{]}\,, \tag{12d}\] \[{\rm V}_{ij}^{2.5{\rm PN}}= \frac{G}{c^{5}}\Bigg{\{}-\frac{3}{7}{\rm M}_{a\langle i}{\rm S}_{j \rangle a}^{(5)}-\frac{3}{7}{\rm M}_{a\langle i}^{(1)}{\rm S}_{j\rangle a}^{(4)}+ \frac{8}{7}{\rm M}_{a\langle i}^{(2)}{\rm S}_{j\rangle a}^{(3)}+\frac{4}{7}{\rm M }_{a\langle i}^{(3)}{\rm S}_{j\rangle a}^{(2)}+\frac{17}{7}{\rm M}_{a\langle i }^{(4)}{\rm S}_{j\rangle a}^{(1)}+\frac{9}{7}{\rm M}_{a\langle i}^{(5)}{\rm S}_{j \rangle a}-\frac{1}{28}{\rm M}_{ija}^{(5)}{\rm S}_{a}\] \[\qquad\qquad+\epsilon_{ab\langle i}\bigg{[}-\frac{15}{56}{\rm M}_{j \rangle ac}{\rm M}_{bc}^{(6)}-\frac{113}{112}{\rm M}_{j\rangle ac}^{(1)}{\rm M }_{bc}^{(5)}-\frac{353}{336}{\rm M}_{j\rangle ac}^{(2)}{\rm M}_{bc}^{(4)}-\frac{ 3}{14}{\rm M}_{j\rangle ac}^{(3)}{\rm M}_{bc}^{(3)}\]
\[+\frac{5}{168}\mathrm{M}^{(4)}_{j\,ac}\mathrm{M}^{(2)}_{bc}+\frac{3}{112} \mathrm{M}^{(5)}_{j\,ac}\mathrm{M}^{(1)}_{bc}-\frac{3}{112}\mathrm{M}^{(6)}_{j \,ac}\mathrm{M}_{bc}+\mathrm{S}^{(4)}_{j\,a}\mathrm{S}_{b}\biggr{]}\Biggr{\}}\,, \tag{12e}\] \[\mathrm{V}^{\mathrm{3PN}}_{ij}=\frac{2G^{2}\mathrm{M}^{2}}{c^{6}} \!\!\!\int_{0}^{+\infty}\!\!\!\mathrm{d}\tau\,\mathrm{S}^{(5)}_{ij}(u-\tau) \biggl{[}\ln^{2}\left(\frac{c\tau}{2b_{0}}\right)+\frac{7}{3}\ln\left(\frac{c \tau}{2b_{0}}\right)-\frac{107}{105}\ln\left(\frac{c\tau}{2r_{0}}\right)-\frac {13127}{11025}\biggr{]}\,\,. \tag{12f}\]
Higher order multipole moments have no cubic contributions at the required PN order. At 2PN order, we need the mass hexadecapole \(\mathrm{U}_{ijkl}\) as well as the current octupole \(\mathrm{V}_{ijk}\), which read
\[\mathrm{U}_{ijkl} =\mathrm{M}^{(4)}_{ijkl}+\mathrm{U}^{1.5\mathrm{PN}}_{ijkl}+ \mathcal{O}\left(\frac{1}{c^{5}}\right)\,, \tag{13a}\] \[\mathrm{V}_{ijk} =\mathrm{S}^{(3)}_{ijk}+\mathrm{V}^{1.5\mathrm{PN}}_{ijk}+ \mathcal{O}\left(\frac{1}{c^{5}}\right)\,, \tag{13b}\]
where [95]
\[\mathrm{U}^{1.5\mathrm{PN}}_{ijkl} =\frac{G}{c^{3}}\Biggl{\{}2\mathrm{M}\int_{0}^{+\infty}\!\!\! \mathrm{d}\tau\,\mathrm{M}^{(6)}_{ijkl}(u-\tau)\biggl{[}\ln\left(\frac{c\tau} {2b_{0}}\right)+\frac{59}{30}\biggr{]}\!+\frac{2}{5}\int_{0}^{+\infty}\!\!\! \mathrm{d}\tau\,\left[\mathrm{M}^{(3)}_{(ij}\mathrm{M}^{(3)}_{kl)}\right]\! \left(u-\tau\right)\] \[\qquad\qquad-\frac{21}{5}\mathrm{M}_{(ij}\mathrm{M}^{(5)}_{kl)}- \frac{63}{5}\mathrm{M}^{(1)}_{(ij}\mathrm{M}^{(4)}_{kl)}-\frac{102}{5} \mathrm{M}^{(2)}_{(ij}\mathrm{M}^{(3)}_{kl)}\Biggr{\}}\,, \tag{14a}\] \[\mathrm{V}^{1.5\mathrm{PN}}_{ijk} =\frac{G}{c^{3}}\Biggl{\{}2\mathrm{M}\int_{0}^{+\infty}\!\!\! \mathrm{d}\tau\,\mathrm{S}^{(5)}_{ijk}(u-\tau)\biggl{[}\ln\left(\frac{c\tau}{ 2b_{0}}\right)+\frac{5}{3}\biggr{]}\] \[\qquad\qquad-2\mathrm{M}^{(4)}_{(ij}\mathrm{S}_{k)}-\frac{1}{10} \epsilon_{ab(i}\mathrm{M}_{j\underline{a}}\mathrm{M}^{(5)}_{k)b}+\frac{1}{2} \epsilon_{ab(i}\mathrm{M}^{(1)}_{j\underline{a}}\mathrm{M}^{(4)}_{k)b}\Biggr{\}}\,. \tag{14b}\]
Note the appearance of a memory integral at 1.5PN order in \(\mathrm{U}_{ijkl}\), in addition to the usual tail one. Finally, we need the moments \(\mathrm{U}_{ijklm}\) and \(\mathrm{V}_{ijkl}\) at 1PN, as well as \(\mathrm{U}_{ijklmn}\) and \(\mathrm{V}_{ijklm}\) at Newtonian order, which are trivially given by
\[\mathrm{U}_{ijklm} =\mathrm{M}^{(5)}_{ijklm}+\mathcal{O}\left(\frac{1}{c^{3}} \right)\,, \mathrm{V}_{ijkl} =\mathrm{S}^{(4)}_{ijkl}+\mathcal{O}\left(\frac{1}{c^{3}}\right)\,, \tag{2.15a}\] \[\mathrm{U}_{ijklmn} =\mathrm{M}^{(6)}_{ijklmn}+\mathcal{O}\left(\frac{1}{c^{3}} \right)\,, \mathrm{V}_{ijklm} =\mathrm{S}^{(5)}_{ijklm}+\mathcal{O}\left(\frac{1}{c^{3}}\right)\,. \tag{2.15b}\]
#### ii.1.2 Radiative moments entering the quasi-circular flux at the 4.5PN order
For quasi-circular orbits, the 4.5PN term in the flux was obtained in [96]. One could naively think that such a computation would require the complete knowledge of the relations between radiative and canonical moments, as presented above, but pushed one half PN order further. This was actually not the case, since for circular orbits, only a limited control of the relation between the radiative and canonical mass quadrupole moments was necessary. This is discussed in [96], and we only recall the key points here. First, it is well known that the contributions of instantaneous interactions entering the flux at half-integer PN orders (_e.g._ 4.5PN order) vanish for quasi-circular orbits. So only non-local contributions such as tails can potentially contribute. Second, the quadratic non-local memory interactions that enter the radiative moments [see Eqs. (2.6) and (2.8)] become instantaneous in the flux by virtue of time differentiation, so these will not contribute either. Last, the tails-of-memory and spin-quadrupole tails, which both enter at 4PN, will next contribute at 5PN but not at 4.5PN. This allows us to use dimensional arguments to determine the interactions that can contribute to the 4.5PN quasi-circular flux. For the mass quadrupole, only the quartic \(\mathrm{M}^{3}\times\mathrm{M}_{ij}\) interaction, naturally dubbed "tails-of-tails-of-tails", can play a role. It has been computed in [96], and reads
\[\mathrm{U}^{4.5\mathrm{PN}}_{ij}\biggr{|}_{\mathrm{ToToT}}=\frac{G^ {3}\mathrm{M}^{3}}{c^{9}}\int_{0}^{+\infty}\!\!\mathrm{d}\tau\,\mathrm{M}^{(6)}_ {ij}(u-\tau)\Biggl{[}\frac{4}{3}\ln^{3}\left(\frac{c\tau}{2b_{0}}\right)+\frac{1 1}{3}\ln^{2}\left(\frac{c\tau}{2b_{0}}\right)-\frac{428}{105}\ln\left(\frac{c \tau}{2b_{0}}\right)\ln\left(\frac{c\tau}{2r_{0}}\right)+\ldots\Biggr{]}\,, \tag{2.16}\]

with the remaining single-logarithm and constant coefficients as given in [96].
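The dimensional argument invoked above can be made fully explicit by a simple counting (our summary of the reasoning, based on the prefactors displayed in (2.12a), (2.12c) and (2.16)): each additional interaction with the ADM mass \(\mathrm{M}\) brings a factor
\[\frac{G\mathrm{M}}{c^{3}}\ \longleftrightarrow\ \text{1.5PN}\,,\]
so that the \(\mathrm{M}^{n}\times\mathrm{M}_{ij}\) interactions enter \(\mathrm{U}_{ij}\) at the relative \((1.5n)\)PN order: the tail (\(n=1\)) at 1.5PN, the tail-of-tail (\(n=2\)) at 3PN, and the tails-of-tails-of-tails (\(n=3\)) at 4.5PN.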
The full \(\mathrm{U}^{4.5\mathrm{PN}}_{ij}\) also contains quadratic memory interactions, like those entering at 2.5PN and 3.5PN; see (2.6) and (2.8). Although, as explained above, those are not needed to compute the flux (and phase) for quasi-circular orbits, they do enter the expression of the \((2,2)\) mode. As they are yet undetermined, they restrict the accuracy we can reach when deriving the \((2,2)\) mode, which is why it is presented in Sec. VI.3 at 4PN and not up to 4.5PN order. In addition, radiation-reaction effects at 4.5PN in the mass quadrupole would contribute and are also not under control.
Regarding other moments, the 3.5PN terms of the mass octupole \(\mathrm{U}_{ijk}\) and current quadrupole \(\mathrm{V}_{ij}\), as well as the 2.5PN terms of the mass hexadecapole \(\mathrm{U}_{ijkl}\) and current octupole \(\mathrm{V}_{ijk}\), are composed of quadratic memory integrals, which cannot contribute to the 4.5PN quasi-circular flux. The only moments playing a role beyond the mass quadrupole are [95; 96]
\[\mathrm{U}^{1.5PN}_{ijklm}=\frac{G}{c^{3}}\Bigg{\{} 2\mathrm{M}\int_{0}^{+\infty}\mathrm{d}\tau\,\mathrm{M}^{(7)}_{ ijklm}(u-\tau)\bigg{[}\ln\left(\frac{c\tau}{2b_{0}}\right)+\frac{232}{105} \bigg{]}+\frac{20}{21}\int_{0}^{+\infty}\mathrm{d}\tau\,\left[\mathrm{M}^{(3 )}_{\langle ij}\mathrm{M}^{(4)}_{klm\rangle}\right]\!(u-\tau)\] \[-\frac{15}{7}\mathrm{M}_{\langle ij}\mathrm{M}^{(6)}_{klm\rangle}- \frac{41}{7}\mathrm{M}^{(1)}_{\langle ij}\mathrm{M}^{(5)}_{klm\rangle}-\frac{120}{7 }\mathrm{M}^{(2)}_{\langle ij}\mathrm{M}^{(4)}_{klm\rangle}-\frac{710}{21} \mathrm{M}^{(3)}_{\langle ij}\mathrm{M}^{(3)}_{klm\rangle}\] \[-\frac{265}{7}\mathrm{M}^{(4)}_{\langle ij}\mathrm{M}^{(2)}_{klm \rangle}-\frac{155}{7}\mathrm{M}^{(5)}_{\langle ij}\mathrm{M}^{(1)}_{klm\rangle}- \frac{34}{7}\mathrm{M}^{(6)}_{\langle ij}\mathrm{M}_{klm\rangle}\Bigg{\}}\,, \tag{2.17a}\] \[\mathrm{V}^{1.5PN}_{ijkl}=\frac{G}{c^{3}}\Bigg{\{} 2\mathrm{M}\int_{0}^{+\infty}\!\mathrm{d}\tau\,\mathrm{S}^{(6)}_{ ijkl}(u-\tau)\bigg{[}\ln\left(\frac{c\tau}{2b_{0}}\right)+\frac{119}{60}\bigg{]}\] \[-\frac{11}{6}\mathrm{M}_{\langle ij}\mathrm{S}^{(5)}_{kl\rangle}-\frac{ 25}{6}\mathrm{M}^{(1)}_{\langle ij}\mathrm{S}^{(4)}_{kl\rangle}-\frac{25}{3} \mathrm{M}^{(2)}_{\langle ij}\mathrm{S}^{(3)}_{kl\rangle}-\frac{35}{3} \mathrm{M}^{(3)}_{\langle ij}\mathrm{S}^{(2)}_{kl\rangle}-\frac{65}{6} \mathrm{M}^{(4)}_{\langle ij}\mathrm{S}^{(1)}_{kl\rangle}-\frac{19}{6} \mathrm{M}^{(5)}_{\langle ij}\mathrm{S}_{kl\rangle}-\frac{11}{12}\mathrm{M}^{ (5)}_{\langle ijk}\mathrm{S}_{l\rangle}\] \[+\epsilon_{ab\langle i}\bigg{[}\frac{1}{12}\mathrm{M}_{j\underline {a}}\mathrm{M}^{(6)}_{kl\rangle b}+\frac{37}{60}\mathrm{M}^{(1)}_{j\underline {a}}\mathrm{M}^{(5)}_{kl\rangle b}-\frac{5}{12}\mathrm{M}^{(2)}_{j\underline {a}}\mathrm{M}^{(4)}_{kl\rangle b}-\frac{5}{6}\mathrm{M}^{(3)}_{j\underline {a}}\mathrm{M}^{(3)}_{kl\rangle b}\] \[-\frac{11}{12}\mathrm{M}^{(4)}_{j\underline{a}}\mathrm{M}^{(2)}_{ kl\rangle b}-\frac{1}{12}\mathrm{M}^{(5)}_{j\underline{a}}\mathrm{M}^{(1)}_{ kl\rangle b}+\frac{3}{60}\mathrm{M}^{(6)}_{j\underline{a}}\mathrm{M}_{kl \rangle b}\bigg{]}\Bigg{\}}\,. \tag{2.17b}\]
### Corrections due to the dimensional regularization of radiative moments
The results presented in the previous section are correct in the ordinary three-dimensional MPM algorithm. Nevertheless, the treatment of the dynamics of point masses requires the use of a dimensional regularization scheme, starting at the 3PN order [112; 113]. For consistency purposes, it is thus required to perform the MPM algorithm in \(d\) spatial dimensions, too. As proved in [93; 21], this induces divergences in the expression of the radiative multipole moments in terms of the canonical ones. Those divergences take the form of simple poles, _i.e._ they scale as \(1/\varepsilon\equiv 1/(d-3)\). Their cancellation against counter-divergences, arising in the computation of the source multipole moments themselves, has been established in [93; 92] and represents a crucial check of the method.
Let us recapitulate how the \(d\)-dimensional radiative multipole moments required for the 4PN flux differ from their three-dimensional counterparts. As the divergences hit at 3PN, only the mass quadrupole and octupole, as well as the current quadrupole, are affected. In the CoM frame (see [93] for the complete expressions at 3PN in a generic frame), they are
\[\mathrm{U}^{(d)}_{ij}= \ \mathrm{U}_{ij}-\frac{214}{105}\frac{G^{2}\mathrm{M}^{2}}{c^{6}} \left(\Pi_{\varepsilon}+\frac{246299}{44940}\right)\mathrm{M}^{(4)}_{ij}\] \[+\frac{G^{2}\mathrm{M}}{c^{8}}\bigg{[}-\frac{4}{7}\left(\Pi_{ \varepsilon}-\frac{1447}{216}\right)\mathrm{M}_{a\langle i}\mathrm{M}^{(6)}_{ j\rangle a}-\frac{32}{7}\left(\Pi_{\varepsilon}-\frac{17783}{10080}\right) \mathrm{M}^{(1)}_{a\langle i}\mathrm{M}^{(5)}_{j\rangle a}\] \[-4\left(\Pi_{\varepsilon}-\frac{27649}{17640}\right)\mathrm{M} ^{(2)}_{a\langle i}\mathrm{M}^{(4)}_{j\rangle a}+\frac{1921}{945}\mathrm{M}^{(3 )}_{a\langle i}\mathrm{M}^{(3)}_{j\rangle a}+\frac{4}{3}\left(\Pi_{ \varepsilon}+\frac{11243}{7560}\right)\epsilon_{ab\langle i}\mathrm{M}^{(5)}_{j\rangle a}\mathrm{ S}_{b}\bigg{]}\,, \tag{2.18a}\] \[\mathrm{U}^{(d)}_{ijk}= \ \mathrm{U}_{ijk}-\frac{26}{21}\frac{G^{2}\mathrm{M}^{2}}{c^{6}} \left(\Pi_{\varepsilon}+\frac{9281}{2730}\right)\mathrm{M}^{(5)}_{ijk}\,,\] (2.18b) \[\mathrm{V}^{(d)}_{k|ji}= \ \mathrm{V}_{k|ji}-\frac{214}{105}\frac{G^{2}\mathrm{M}^{2}}{c^{6 }}\left(\Pi_{\varepsilon}+\frac{4989}{44940}\right)\mathrm{S}^{(4)}_{k|ji}\,, \tag{2.18c}\]
where we employ the notations of [94] for current moments, and where
\[\Pi_{\varepsilon}=-\frac{1}{2\varepsilon}+\ln\left(\frac{r_{0}\sqrt{\bar{q}}}{ \ell_{0}}\right)\,, \tag{2.19}\]
with \(\bar{q}\equiv 4\pi\mathrm{e}^{\gamma_{\mathrm{E}}}\), \(\gamma_{\mathrm{E}}\) being the Euler constant. The constant \(\ell_{0}\) is the length-scale associated with dimensional regularization, such that the \(d\)-dimensional gravitational coupling constant \(G_{d}=\ell_{0}^{\varepsilon}\,G\) relates to the usual Newton's constant \(G\), and \(r_{0}\) was introduced in the previous section. As expected, the numerical constants associated with the poles at 3PN are the \(\beta\)-coefficients of the renormalization group flows for these multipole moments [108; 109; 21].2
Footnote 2: See [114] for the computation using the renormalization group flow of high-order logarithmic effects in the conservative energy function.
As explained in Sec. IV, the poles present in (2.18) will actually be cancelled by poles coming from the \(d\)-dimensional expression of the source multipole moments. Thus, the correction terms (2.18), added to the \(d\)-dimensional source moments, make it possible to define a notion of finite ("renormalized") source moment, which is useful in some intermediate steps of our calculation (see more details in [93]).
### Relation between canonical and source moments
In the previous sections, the radiative moments have been linked to a set of canonical moments, \(\mathrm{M}_{L}\) and \(\mathrm{S}_{L}\). Nevertheless, the matching of the linearized metric, which serves as a basis for the MPM construction, to the PN source is more easily done by using a set of source moments \(\mathrm{I}_{L}\) and \(\mathrm{J}_{L}\), together with four families of gauge moments \(\mathrm{W}_{L}\), \(\mathrm{X}_{L}\), \(\mathrm{Y}_{L}\) and \(\mathrm{Z}_{L}\). We thus need to connect the canonical moments to those source and gauge moments, which is done _via_ a coordinate transformation. This coordinate change (_i.e._, fully non-linear gauge transformation) induces a precise prescription for the relation between the canonical moments and the source moments, which has been worked out at 4PN order in three dimensions in [97]. However, since we are using full dimensional regularization in this work, we must generalize this computation to \(d\) dimensions in order to be consistent. Throughout this section, we will assume that the reader is familiar with the construction and notations of [97], and we will only highlight the main differences between the \(d\)-dimensional and the three-dimensional computations.
Regarding 4.5PN, non-local terms cannot appear at that order in the relation between canonical and source/gauge moments. Indeed, in the three-dimensional case, non-local terms do arise at cubic order, but dimensional analysis shows that there cannot be any cubic or higher-order contribution entering at exactly 4.5PN order. As for quadratic interactions entering at 4.5PN order, an analysis of the structure of the integration formulas proves that they are necessarily instantaneous. Therefore, as discussed previously, they will play no role in the flux for quasi-circular orbits, and we only need the 4PN order in the canonical/source/gauge relations to control the 4.5PN quasi-circular flux.
#### ii.3.1 Procedure in \(d\) dimensions
The solution of the linearized \(d\)-dimensional vacuum Einstein field equations in harmonic gauge reads
\[h^{\mu\nu}_{\mathrm{gen}\,1}=h^{\mu\nu}_{\mathrm{can}\,1}+\partial^{\mu} \varphi_{1}^{\nu}+\partial^{\nu}\varphi_{1}^{\mu}-\partial_{\lambda}\varphi_{ 1}^{\lambda}\,\eta^{\mu\nu}\,, \tag{2.20}\]
where the canonical metric is parametrized by three types of source moments as [94]
\[h^{00}_{\mathrm{can}\,1} =-\frac{4}{c^{2}}\sum_{\ell\geqslant 0}\frac{(-)^{\ell}}{ \ell!}\partial_{L}\widetilde{\mathrm{I}}_{L}\,, \tag{2.21a}\] \[h^{0i}_{\mathrm{can}\,1} =\frac{4}{c^{3}}\sum_{\ell\geqslant 1}\frac{(-)^{\ell}}{ \ell!}\left[\partial_{L-1}\widetilde{\mathrm{I}}^{(1)}_{iL-1}+\frac{\ell}{ \ell+1}\partial_{L}\widetilde{\mathrm{J}}_{i|L}\right]\,,\] (2.21b) \[h^{ij}_{\mathrm{can}\,1} =-\frac{4}{c^{4}}\sum_{\ell\geqslant 2}\frac{(-)^{\ell}}{ \ell!}\left[\partial_{L-2}\widetilde{\mathrm{I}}^{(2)}_{ijL-2}+\frac{2\ell}{ \ell+1}\partial_{L-1}\widetilde{\mathrm{J}}^{(1)}_{(i|\underline{L-1}j)}+ \frac{\ell-1}{\ell+1}\partial_{L}\widetilde{\mathrm{K}}_{ij|L}\right]\,, \tag{2.21c}\]
and the gauge vector \(\varphi_{1}^{\mu}\), by four types of gauge moments,
\[\varphi_{1}^{0}=\frac{4}{c^{3}}\sum_{\ell\geqslant 0}\frac{(-)^{\ell}}{ \ell!}\partial_{L}\widetilde{\mathrm{W}}_{L}\,, \tag{2.22a}\]
\[\varphi_{1}^{i}=-\frac{4}{c^{4}}\sum_{\ell\geqslant 0}\frac{(-)^{\ell}}{\ell!} \partial_{iL}\widetilde{\mathrm{X}}_{L}-\frac{4}{c^{4}}\sum_{\ell\geqslant 1}\frac{(-)^{ \ell}}{\ell!}\bigg{[}\partial_{L-1}\widetilde{\mathrm{Y}}_{iL-1}+\frac{\ell}{\ell+1} \,\partial_{L}\widetilde{\text{Z}}_{i|L}\bigg{]}\,. \tag{2.22b}\]
The conventions, notably for the indices of \(\widetilde{\text{J}}_{i|L}\) and \(\widetilde{\text{Z}}_{i|L}\), are still those of [94],3 and we have shortened the notation for the \(d\)-dimensional retarded waves generated by each moment, namely
\[\widetilde{\mathrm{F}}_{L}(r,t)\equiv\frac{\tilde{k}}{r^{d-2}}\int_{1}^{+\infty}\mathrm{d}z\,\gamma_{\frac{1-d}{2}}(z)\,\mathrm{F}_{L}(t-zr/c)\,,\]
for each of the moments \(\mathrm{F}_{L}\in\{\mathrm{I}_{L},\mathrm{J}_{i|L},\mathrm{W}_{L},\dots\}\) [the same structure as in Eq. (2.26) below].
Footnote 3: There has been some confusion [94; 97; 98] concerning these conventions, which we clarify here. In the HFB convention [94], the ordering of the multi-index in the multipolar moments is defined in an unusual order by \(\text{J}_{i|L}\equiv\text{J}_{i|i_{\ell}\dots i_{1}}\) and \(\text{K}_{ij|L}\equiv\text{K}_{ij|i_{\ell}\dots i_{1}}\). In the BFL convention [97] however, the multi-index is always defined in the usual order, namely \(\text{J}_{i|L}\equiv\text{J}_{i|i_{1}\dots i_{\ell}}\) and \(\text{K}_{ij|L}\equiv\text{K}_{ij|i_{1}\dots i_{\ell}}\). Note that the mass-type moment is entirely symmetric, so the ordering of its indices does not matter, namely \(\text{I}_{L}=\text{I}_{i_{1}\dots i_{\ell}}=\text{I}_{i_{\ell}\dots i_{1}}\). In both conventions, the Young tableaux of the moments are those displayed in [97] (the tableaux are not reproduced here).
Techniques to solve wave-like equations in \(d\) dimensions, with more general radial dependence than (2.25) but the same multipolarity \(\ell\), have been developed in [93], see Sec. II therein. Importantly, no particular assumption about the presence of a prefactor \(B^{b}\) has been made in this work, as is clear from its Eq. (2.6). We can therefore follow the lines of [93] to compute the integral (2.25). The solution \(h\) is the sum of two contributions, \(h^{<}\) and \(h^{>}\), involving integrals over the respective domains \(\mathcal{D}=\{|\mathbf{x}^{\prime}|<r\}\) and \(\mathbb{R}^{d}\setminus\mathcal{D}\), as displayed explicitly in Eqs. (2.8) and (2.10) of [93]. When a prefactor \(B^{b}\) is present, with \(b=1,2\), the finite part of \(h\equiv\mathcal{J}^{b}_{p,q}\) vanishes unless the integration produces a pole, which may only occur when the integral diverges for \(B=0\). As the system is taken to be stationary in the past in the MPM algorithm, the integral yielding \(h^{>}\) converges for \(B=0\), so that only \(h^{<}\) can contribute to the finite part in Eq. (2.25). This finite part is evaluated explicitly and truncated at first order in \(\varepsilon\), in order to cope with the case where \(H(t)\) contains a pole, namely \(H=\frac{1}{\varepsilon}H_{-1}+H_{0}+\mathcal{O}(\varepsilon)\). After resummation, we obtain a retarded homogeneous solution of the wave equation [93]
\[\mathcal{J}^{b}_{p,q}=\hat{\partial}_{L}\widetilde{G}^{b}_{p,q}(t,r)\equiv \hat{\partial}_{L}\,\left[\frac{\tilde{k}}{r^{d-2}}\int_{1}^{+\infty}\mathrm{ d}z\,\gamma_{\frac{1-d}{2}}(z)\,G^{b}_{p,q}(t-zr/c)\right]\,. \tag{2.26}\]
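For orientation, we note (relying on standard properties of this kernel, as used in [93]; this remark is ours) that in the limit \(d\to 3\) one has \(\tilde{k}\to 1\) and \(\gamma_{\frac{1-d}{2}}(z)\to\delta(z-1)\), so that (2.26) reduces to the familiar three-dimensional retarded homogeneous solution,
\[\widetilde{G}^{b}_{p,q}(t,r)\ \longrightarrow\ \frac{G^{b}_{p,q}(t-r/c)}{r}\,.\]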
Explicitly, we find that \(G^{b}_{p,q}\) is non-vanishing only if \(b=1\), \(q=1\) and \(p-\ell-3=2j\), where \(j\in\mathbb{N}\). Under those very restrictive conditions, we have
\[G^{b}_{p,q}(t)=\frac{(-)^{\ell+1}}{(2j)!!\,(2j+2\ell+1)!!}\left[\left(1+ \varepsilon\ln\sqrt{\bar{q}}-\varepsilon\sum_{k=0}^{j+\ell}\frac{1}{2k+1} \right)H^{(2j)}(t)+\mathcal{O}(\varepsilon)\right]\,, \tag{2.27}\]
where we recall that \(H\) can bear a pole \(1/\varepsilon\), and our notation \(\bar{q}=4\pi\mathrm{e}^{\gamma_{\mathrm{E}}}\) (with \(\gamma_{\mathrm{E}}\) the Euler constant). With the previous formula at hand, we are ready to compute the quantities \(X^{\mu\nu}_{n}\) and \(Y^{\mu\nu}_{n}\) and, therefore, obtain the correction to the canonical moments to the \(n\)-th PM order. Since the result (2.27) is zero unless \(q=1\), we are able to eliminate many terms from the source for this computation solely based on the value of \(q\).
As for the gauge vector \(\varphi^{\mu}_{n}\), it will be needed to compute the source term for the next PM order \(n+1\), following the algorithmic procedure of [97]. It is decomposed as \(\varphi^{\mu}_{n}=\phi^{\mu}_{n}+\psi^{\mu}_{n}\), where \(\psi^{\mu}_{n}\) is also extracted from \(X^{\mu\nu}_{n}\) and \(Y^{\mu\nu}_{n}\), and \(\phi^{\mu}_{n}\) is obtained by the direct integration
\[\phi^{\mu}_{n}\equiv\underset{B=0}{\mathrm{FP}}\,\square_{\mathrm{ret}}^{-1} \left[\left(\frac{r}{r_{0}}\right)^{B}\Delta^{\mu}_{n}\right]\,. \tag{2.28}\]
In \(d\) dimensions, it is in general not possible to find a convenient, explicit expression for \(\phi^{\mu}_{n}\) (see, however, Sec. II in [93]). Fortunately, we do not need here the full solution valid at any field point, but only the solution in the form of a near-zone expansion when \(r\to 0\). Indeed, following the procedure of [97], it is the near-zone expansion of \(\phi^{\mu}_{n}\), denoted \(\bar{\phi}^{\mu}_{n}\), that needs to be inserted into the expressions of \(\Omega^{\mu\nu}_{n+1}\) and \(\Delta^{\mu}_{n+1}\) sourcing the next order quantities \(X^{\mu\nu}_{n+1}\) and \(Y^{\mu\nu}_{n+1}\). The quantity \(\bar{\phi}^{\mu}_{n}\) can be obtained directly from the near-zone expansion of the source term, _i.e._\(\bar{\Delta}^{\mu}_{n}\), plus a crucial homogeneous solution, itself in the form of a near-zone expansion, namely:
\[\bar{\phi}^{\mu}_{n}=\bar{\phi}^{\mu}_{\mathrm{part}\,n}+\bar{\phi}^{\mu}_{ \mathrm{hom}\,n}\,. \tag{2.29}\]
The particular solution is obtained by a formal iteration of inverse Laplace operators in \(d\)-dimensions, using the "Matthieu" formula, Eq. (B.26c) in [113]:
\[\bar{\phi}^{\mu}_{\mathrm{part}\,n}=\underset{B=0}{\mathrm{FP}}\,\sum_{k=0}^{+ \infty}\Delta^{-1-k}\left(\frac{\partial}{c\,\partial t}\right)^{2k}\left[ \left(\frac{r}{r_{0}}\right)^{B}\bar{\Delta}^{\mu}_{n}\right]\,. \tag{2.30}\]
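As an elementary illustration of how the iteration (2.30) solves the wave equation order by order, consider the toy source \(S=f(t)\,x\) in \(d=3\) (our sketch, ignoring the finite-part regularization, which plays no role for this polynomial source):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
f = sp.Function('f')
r2 = x**2 + y**2 + z**2

def lap(g):
    return sum(sp.diff(g, s, 2) for s in (x, y, z))

def box(g):
    # flat d'Alembertian: □ = Δ - c^{-2} ∂_t²
    return lap(g) - sp.diff(g, t, 2)/c**2

S = f(t)*x
# first two terms of Eq. (2.30): Δ^{-1}(x) = x r²/10 and Δ^{-2}(x) = x r⁴/280
phi = f(t)*x*r2/10 + sp.diff(f(t), t, 2)*x*r2**2/(280*c**2)

# the residual is pushed two more PN orders down, as expected
residual = sp.simplify(box(phi) - S)
assert sp.simplify(residual + sp.diff(f(t), t, 4)*x*r2**2/(280*c**4)) == 0
```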
The homogeneous solution \(\bar{\phi}^{\mu}_{\mathrm{hom}\,n}\) is given by Eq. (3.20) of [81], which in this case yields (with \(c=1\))
\[\bar{\phi}^{\mu}_{\mathrm{hom}\,n}=-\sum_{\ell\geqslant 0}\sum_{j=0}^{+\infty} \frac{\Delta^{-j}\hat{x}_{L}}{d+2\ell-2}\,\underset{B=0}{\mathrm{FP}}\int_{1} ^{+\infty}\mathrm{d}z\,\gamma_{\frac{1-d}{2}-\ell}(z)\int_{0}^{+\infty} \mathrm{d}r^{\prime}r^{\prime-\ell+1+B}\,\hat{n}^{\prime}_{L}\,\bar{\Delta}^{ \mu\,(2j)}_{n,L}(t-zr^{\prime},r^{\prime})\,, \tag{2.31}\]
where we have decomposed the source into its multipolar components as
\[\bar{\Delta}^{\mu}_{n}(t,\mathbf{x})=\sum_{\ell\geqslant 0}\hat{n}_{L}\,\bar{ \Delta}^{\mu}_{n,L}(t,r)\,, \tag{2.32}\]
and shortened the iterated inverse Laplacians as [115, 116, 81]
\[\Delta^{-j}\hat{x}_{L}=\frac{\Gamma\left(\frac{d}{2}+\ell\right)}{\Gamma\left( \frac{d}{2}+\ell+j\right)}\frac{r^{2j}\hat{x}_{L}}{2^{2j}\,j!}\,. \tag{2.33}\]
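The normalization in (2.33) can be checked symbolically; the following minimal sympy sketch (ours, not part of the original computation) verifies that applying the Laplacian to the right-hand side of (2.33) with \(j=1\) returns the STF harmonic, for the sample component \(\hat{x}_{xy}=xy\) (\(\ell=2\), \(d=3\)):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r2 = x**2 + y**2 + z**2

def lap(f):
    # flat-space Laplacian in d = 3
    return sum(sp.diff(f, s, 2) for s in (x, y, z))

xhat = x*y                      # a component of the STF harmonic x̂_L for l = 2
assert sp.simplify(lap(xhat)) == 0

# Eq. (2.33) with j = 1, l = 2, d = 3:
# Δ^{-1} x̂_L = Γ(d/2+l)/Γ(d/2+l+j) · r^{2j} x̂_L / (2^{2j} j!)
d, l, j = 3, 2, 1
coeff = sp.gamma(sp.Rational(d, 2) + l) / sp.gamma(sp.Rational(d, 2) + l + j) \
        / (2**(2*j) * sp.factorial(j))
candidate = coeff * r2**j * xhat          # here coeff = 1/14
assert sp.simplify(lap(candidate) - xhat) == 0
```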
In the particular case where the source term has a structure given by
\[\bar{\Delta}^{\mu}_{n}(t,r)=\sum_{\ell,k,p,q}\frac{\hat{n}_{L}}{r^{p+q\varepsilon}}\int_ {1}^{+\infty}\mathrm{d}y\,y^{k}\gamma_{-1-\frac{q}{2}}(y)\,F_{\ell,k,p,q}(t-yr )\,, \tag{2.34}\]
where the \(F_{\ell,k,p,q}(t)\)'s can be arbitrary functions of time, explicit formulas are known for performing the integrations: see (3.4) and (D1) of [81] when \(q=2\), and the discussion in Sec. IV of [93] for other values of \(q\). However, we will encounter source terms that exhibit more complicated structures, and for which no analogous formula is available. This is notably the case for sources involving two non-static multipolar moments, such as \(\Delta^{\mu}_{\mathrm{W}\times\mathrm{I}_{ij}}\). Fortunately, we have shown that those more complicated homogeneous solutions do not contribute at 4PN order, see hereafter.
#### ii.3.2 The 4PN relation between canonical and source moments
At the 4PN order, only quadratic and cubic interactions are allowed by dimensional analysis (and we recall that the 4.5PN sector vanishes in the quasi-circular flux). Naturally, the moments allowed to enter those interactions do not depend on the space dimension. Applying the method described above, it was found that the quadratic order is identical in \(d\) and three dimensions, _i.e._ we recover the same "odd" corrections at 2.5PN and 3.5PN orders in the relation between canonical and source/gauge moments. More interestingly, the results for the cubic interactions arising at 4PN order greatly differ.
Let us recall that three interactions enter the 4PN mass quadrupole moment at cubic order, namely \(\mathrm{M}\times\mathrm{M}\times\mathrm{W}_{ij}\) and \(\mathrm{M}\times\mathrm{M}\times\mathrm{Y}_{ij}\) (where only one moment is non-static), and most importantly \(\mathrm{M}\times\mathrm{W}\times\mathrm{I}_{ij}\) (where the two moments \(\mathrm{W}\) and \(\mathrm{I}_{ij}\) are dynamical). For the first two interactions, it turns out that the source term entering the integral (2.25) bears \(q\geqslant 2\). Indeed, it is composed of two interactions: \(h_{\mathrm{M}\times\mathrm{M}}\times\varphi_{\mathrm{K}_{ij}}\) and \(h_{\mathrm{M}}\times\varphi_{\mathrm{M}\times\mathrm{K}_{ij}}\), where \(\mathrm{K}_{ij}\) stands for either \(\mathrm{W}_{ij}\) or \(\mathrm{Y}_{ij}\). As both \(h_{\mathrm{M}\times\mathrm{M}}\) and \(\varphi_{\mathrm{M}\times\mathrm{K}_{ij}}\) bear \(q=2\), the cubic sources will have \(q\geqslant 2\), and thus cannot satisfy the \(q=1\) necessary condition to have a non-vanishing function \(G(t)\) in Eq. (2.27). Therefore, the \(\mathrm{M}\times\mathrm{M}\times\mathrm{W}_{ij}\) and \(\mathrm{M}\times\mathrm{M}\times\mathrm{Y}_{ij}\) interactions do not contribute in \(d\) dimensions, as we have explicitly checked, using the method presented in the previous sections. This vanishing of the \(\mathrm{M}\times\mathrm{M}\times\mathrm{W}_{ij}\) and \(\mathrm{M}\times\mathrm{M}\times\mathrm{Y}_{ij}\) interactions in \(d\) dimensions contrasts with their explicit contributions in 3 dimensions; see Eq. (1.1) in [97].
As for the last cubic interaction, \(\mathrm{M}\times\mathrm{W}\times\mathrm{I}_{ij}\), it turns out that the source term _does_ have a \(q=1\) component, arising from the interaction \(h_{\mathrm{I}_{ij}}\times\varphi_{\mathrm{M}\times\mathrm{W}}\). More precisely, the term with \(q=1\) comes exclusively from the interaction between: (i) the homogeneous solution \(\bar{\phi}_{\mathrm{hom}\,2}\) at quadratic order arising from the interaction \(\mathrm{M}\times\mathrm{W}\), _i.e._\(\bar{\phi}_{\mathrm{hom}\,\mathrm{M}\times\mathrm{W}}\), which bears \(q=0\); and (ii) the \(q=1\) piece of the linear metric \(h_{\mathrm{I}_{ij}}\); see (2.21). Note that, as it involves a quadratic interaction with only one dynamical moment, \(\varphi_{\mathrm{M}\times\mathrm{W}}\) can be calculated using (2.34). By contrast, the homogeneous solution for the interaction \(\mathrm{W}\times\mathrm{I}_{ij}\), _i.e._\(\bar{\phi}_{\mathrm{hom}\,\mathrm{W}\times\mathrm{I}_{ij}}\), does not fulfill the \(p-\ell-3\in 2\mathbb{N}\) condition, and as such, does not contribute to the result. This is fortunate, since it does not follow the structure (2.34), and we are unable to compute this term explicitly, at least with currently-known formulas. Another interesting observation is that the explicit computation of the \(q=1\) terms (the only ones that contribute to the final result) leads to the appearance of a pole. The relation between canonical and source/gauge moments at 4PN is thus a pure \(\mathrm{M}\times\mathrm{W}\times\mathrm{I}_{ij}\) interaction, which reads
\[\delta\mathrm{M}_{ij}=\frac{8\,G^{2}\mathrm{M}}{c^{8}}\int_{0}^{+\infty} \mathrm{d}\tau\bigg{[}\frac{1}{\varepsilon}-2\ln\left(\frac{c\tau\sqrt{\bar{q}}}{2} \right)-1\bigg{]}\bigg{(}\mathrm{I}_{ij}^{(1)}(t)\,\mathrm{W}^{(3)}(t-\tau)- \mathrm{I}_{ij}(t)\,\mathrm{W}^{(4)}(t-\tau)\bigg{)}+\mathcal{O}\left(\frac{ 1}{c^{10}}\right)\,. \tag{2.35}\]
This result is consistent with the corresponding one in three dimensions, where the interaction \(\mathrm{M}\times\mathrm{W}\times\mathrm{I}_{ij}\) gives rise to an ordinary tail integral at 4PN order: the coefficient of the pole in (2.35) matches the logarithmic structure of Eq. (1.1) in [97]. Moreover, we have checked the result (2.35), as well as the vanishing of the interactions \(\mathrm{M}\times\mathrm{M}\times\mathrm{W}_{ij}\) and \(\mathrm{M}\times\mathrm{M}\times\mathrm{Y}_{ij}\), by an independent method, using the techniques presented in [93], to compute the _difference_ between the \(d\)-dimensional and Hadamard regularization schemes in the MPM algorithm, and adding it to the three-dimensional result of [97].
The new pole in (2.35) arising from dimensional regularization is actually cancelled by another pole. This stems from an extra contribution to be added to the end result of [92], which we will now discuss. Indeed, when computing the equations of motion at 4PN order, the tail effect (entering as a radiation mode in the conservative dynamics
described by the Fokker action) has been implemented using the canonical quadrupole moment \(\mathrm{M}_{ij}\)[81], and is therefore valid in the associated "canonical" coordinate system. By contrast, the source quadrupole moment \(\mathrm{I}_{ij}\), as given in [92], is expressed in a so-called "generic" coordinate system, following the terminology of [97]. Note that the computation of the source quadrupole moment \(\mathrm{I}_{ij}\) at 4PN does not use the 4PN equations of motion. These two results, taken independently, are therefore perfectly correct in their respective coordinate systems. However, in this work, we need to take time derivatives of the source quadrupole moment, which crucially requires the 4PN equations of motion. This operation can only be performed if the source quadrupole moment and the equations of motion are expressed in the same coordinate system. We thus compute the expression of the source quadrupole moment in the canonical coordinate system rather than in the generic one. This is done by performing a simple coordinate change \(x^{\mu}_{\mathrm{gen}}=x^{\mu}_{\mathrm{can}}+\zeta^{\mu}\), which transforms the gravitational field according to Eqs. (3.1)-(3.3) in [97]. A simple dimensional analysis shows that only the \(\mathrm{M}\times\mathrm{W}\) interaction can enter this coordinate change at 4PN order. Using the methods presented previously to calculate the contribution of a given interaction to the gauge vector, we find that
\[\zeta^{0} \equiv\varphi^{0}_{\mathrm{hom\,M}\times\mathrm{W}}=\frac{8\,G^{ 2}\mathrm{M}}{c^{7}}\sum_{j\geqslant 0}\frac{\Gamma\left(\frac{d}{2}\right)}{ \Gamma\left(\frac{d}{2}+j\right)}\frac{r^{2j}}{2^{2j}\,j!\,c^{2j}}\int_{0}^{+ \infty}\mathrm{d}\tau\bigg{[}\frac{1}{\varepsilon}-2\ln\left(\frac{c\tau\, \sqrt{\bar{q}}}{2}\right)-1\bigg{]}\mathrm{W}^{(3+2j)}(t-\tau)\,, \tag{2.36a}\] \[\zeta^{i} \equiv\varphi^{i}_{\mathrm{hom\,M}\times\mathrm{W}}=0\,, \tag{2.36b}\]
where only \(j=0\) is relevant at 4PN. Implementing the procedure described in Sec. II of [92] with the "pure gauge" metric \(h^{\mu\nu}=-\partial^{\mu}\zeta^{\nu}-\partial^{\nu}\zeta^{\mu}+\eta^{\mu\nu} \partial_{\lambda}\zeta^{\lambda}\), we arrive at
\[\delta_{\zeta}\mathrm{I}_{ij}=-\frac{8\,G^{2}\mathrm{M}}{c^{8}}\int_{0}^{+ \infty}\mathrm{d}\tau\bigg{[}\frac{1}{\varepsilon}-2\ln\left(\frac{c\tau\sqrt{ \bar{q}}}{2}\right)-1\bigg{]}\bigg{(}\mathrm{I}^{(1)}_{ij}(t)\,\mathrm{W}^{(3 )}(t-\tau)-\mathrm{I}_{ij}(t)\,\mathrm{W}^{(4)}(t-\tau)\bigg{)}+\mathcal{O} \left(\frac{1}{c^{10}}\right)\,. \tag{2.37}\]
This effect is formally a correction to the _source_ mass quadrupole moment \(\mathrm{I}_{ij}\). Nevertheless, it is more convenient to identify it as a correction in the relation between the canonical and source/gauge moments; at the 4PN order, nothing prevents such an identification. Thus, we will consider it as an "indirect" contribution in this relation, namely \(\delta_{\mathrm{indirect}}\mathrm{I}_{ij}\equiv\delta_{\zeta}\mathrm{I}_{ij}\), to be added to the "direct" one given by (2.35). We then find that, at least at 4PN order, the direct and indirect contributions exactly cancel:
\[\delta_{\mathrm{direct}}\mathrm{M}_{ij}+\delta_{\mathrm{indirect}}\mathrm{M} _{ij}=0\,. \tag{2.38}\]
Therefore, we have proved that all cubic interactions in \(d\) dimensions vanish at the 4PN order, in clear contrast with the three-dimensional result, Eq. (1.1) of [97]. Finally, as there is no pole left in the final relation between the source and canonical quadrupole, we can reduce it directly to three dimensions. This relation, using full dimensional regularization, reads
\[\mathrm{M}_{ij} =\mathrm{I}_{ij}+\frac{4G}{c^{5}}\bigg{[}\mathrm{W}^{(2)} \mathrm{I}_{ij}-\mathrm{W}^{(1)}\mathrm{I}^{(1)}_{ij}\bigg{]}\] \[\quad+\frac{4G}{c^{7}}\Bigg{\{}\frac{4}{7}\mathrm{W}^{(1)}_{a\langle i}\mathrm{I}^{(3)}_{j\rangle a}+\frac{6}{7}\mathrm{W}_{a\langle i}\mathrm{I}^{(4)}_{j\rangle a}-\frac{1}{7}\mathrm{Y}^{(3)}_{a\langle i}\mathrm{I}_{j\rangle a}-\mathrm{Y}_{a\langle i}\mathrm{I}^{(3)}_{j\rangle a}-2\mathrm{X}\,\mathrm{I}^{(3)}_{ij}\] \[\qquad\qquad-\frac{5}{21}\mathrm{W}^{(4)}_{a}\mathrm{I}_{ija}+ \frac{1}{63}\mathrm{W}^{(3)}_{a}\mathrm{I}^{(1)}_{ija}-\frac{25}{21}\mathrm{Y }^{(3)}_{a}\mathrm{I}_{ija}-\frac{22}{63}\mathrm{Y}^{(2)}_{a}\mathrm{I}^{(1) }_{ija}+\frac{5}{63}\mathrm{Y}^{(1)}_{a}\mathrm{I}^{(2)}_{ija}\] \[\qquad\qquad+2\mathrm{W}^{(3)}\mathrm{W}_{ij}+2\mathrm{W}^{(2)} \mathrm{W}^{(1)}_{ij}-\frac{4}{3}\mathrm{W}_{\langle i}\mathrm{W}^{(3)}_{j\rangle}+2 \mathrm{W}^{(2)}\mathrm{Y}_{ij}-4\mathrm{W}_{\langle i}\mathrm{Y}^{(2)}_{j\rangle}\] \[\qquad\qquad+\epsilon_{ab\langle i}\bigg{[}\frac{1}{3}\mathrm{I} _{j\rangle a}\mathrm{Z}^{(3)}_{b}-\mathrm{I}^{(3)}_{j\rangle a}\mathrm{Z}_{b}+\frac{4}{9} \mathrm{J}_{j\rangle a}\mathrm{W}^{(3)}_{b}-\frac{4}{9}\mathrm{J}_{j\rangle a}\mathrm{Y}^{(2 )}_{b}+\frac{8}{9}\mathrm{J}^{(1)}_{j\rangle a}\mathrm{Y}^{(1)}_{b}\bigg{]}\Bigg{\}}+ \mathcal{O}\left(\frac{1}{c^{9}}\right)\,. \tag{2.39}\]
For the other multipole moments (essentially the mass octupole and the current quadrupole), such relations are purely quadratic and consist of odd parity terms, and so are not affected by dimensional-regularization corrections. We do not report them here, as they can be found in Sec. III.B of [95].
Since we have absorbed the correction \(\delta_{\mathrm{indirect}}\mathrm{M}_{ij}\) into the redefinition of the canonical moment, the relation (2.39), derived in the context of dimensional regularization, now holds with exactly the same definition for the source quadrupole moment as proposed in [93]. Namely, \(\mathrm{I}_{ij}\) is the "renormalized" source quadrupole moment (in the sense of [93]) which is given for quasi-circular orbits in Sec. IV.1, and which contains crucial contributions due to the (IR) dimensional regularization. Adding up the non-linear contributions described in Sec. II, we obtain the radiative moment measured at infinity. As we will see, the above procedure yields the correct perturbative limit for compact binaries on circular orbits, in agreement with first-order black hole perturbation theory.
Interestingly, we found that the perturbative limit turns out to be correct also when using a simpler treatment of the IR divergences of the source quadrupole moment, based on the Hadamard regularization in ordinary three dimensions, rather than the dimensional regularization. In this case, the correct relation between canonical and source moments is given by Eq. (1.1) of [97], instead of the \(d\)-dimensional result (2.39). We find that the cubic interactions \(\mathrm{M}\times\mathrm{M}\times\mathrm{W}_{ij}\) and \(\mathrm{M}\times\mathrm{M}\times\mathrm{Y}_{ij}\) present in three dimensions do contribute to the perturbative limit for circular orbits (note that W is zero in this case), but in such a way as to cancel the contribution due to the difference between the Hadamard and dimensional regularization schemes for the mass quadrupole source moment, so that the correct perturbative limit is again recovered. However, in contrast to the dimensional regularization, we do not expect the Hadamard regularization in three dimensions to lead to a well-defined and fully unambiguous result beyond this limit, so the above observation should be considered only as an interesting consistency check.
## III Corrections to the source moments due to infrared commutators
In this section, we discuss the appearance of infrared (IR) "d'Alembertian commutators" in the expression of the PN-expanded near-zone metric, and their effects on the expression of the source moments. This feature starts at the 4PN order and does not affect the 4PN equations of motion, but should have been considered in previous works [91, 92, 93]. We shall show that the IR commutators enter the flux and the \((2,2)\) mode for generic orbits; however, they do not contribute in the quasi-circular limit. Since they are zero for quasi-circular orbits, all the results explicitly presented in the previous works [91, 92, 93] are correct. Throughout this section, we rely on the construction and computation of the source mass quadrupole as described in [91].
### Appearance of the commutators in the PN metric
Crucial to the derivation of the source moments at a given PN order is the expression of the near-zone metric with the same precision. Hereafter, we will present the case of the source mass quadrupole at 4PN order, but the following discussion is easily generalized to any source moment. The gothic metric perturbation \(h^{\mu\nu}\equiv\sqrt{-g}g^{\mu\nu}-\eta^{\mu\nu}\) obeys the gauge-fixed Einstein field equations
\[\Box h^{\mu\nu}=\frac{16\pi G}{c^{4}}\tau^{\mu\nu}\,, \tag{3.1}\]
where the pseudo stress-energy tensor \(\tau^{\mu\nu}\) is defined _e.g._ in Eq. (2.4) of [91]. As explained in [115], the PN metric is obtained by iteratively solving the wave equation (3.1) in the near zone, in such a way as to match the metric in the far zone. The expansion of the gravitational field constructed in the near zone, say \(\bar{h}^{\mu\nu}\), is then given formally, to arbitrarily high orders, by [117, 81]
\[\bar{h}^{\mu\nu}=\frac{16\pi G}{c^{4}}\bigg{[}\mathrm{FP}\,\overline{\Box}_{ \mathrm{ret}}^{-1}\,[\bar{\tau}^{\mu\nu}]\bigg{]}+\sum_{\ell=0}^{+\infty}\sum _{j=0}^{+\infty}\Delta^{-j}\hat{x}_{L}\,\frac{1}{c^{2j}}\,f_{L}^{(2j)\mu\nu}( t)\,. \tag{3.2}\]
The second term on the right-hand side of (3.2) is a homogeneous solution that is regular at the coordinate origin \(r=0\). The explicit \(d\)-dimensional expressions of the time-dependent functions \(f_{L}^{\mu\nu}(t)\), as functionals of the MPM expansion of \(\tau^{\mu\nu}\) in the exterior zone, can be found in [117].
Of interest to us here is the first term in (3.2). It is built from the PN-expanded retarded integral \(\mathrm{FP}\,\overline{\Box}_{\mathrm{ret}}^{-1}\), _i.e._, the inverse d'Alembertian operator in which all retardations are formally expanded in a PN fashion, and regularized by means of the finite part procedure, \(\mathrm{FP}_{B=0}\int\mathrm{d}^{d}\mathbf{x}^{\prime}(|\mathbf{x}^{\prime}|/ r_{0})^{B}[\cdots]\). Furthermore, the operator \(\mathrm{FP}\,\overline{\Box}_{\mathrm{ret}}^{-1}\) acts on the near-zone expansion of the pseudo stress-energy tensor \(\bar{\tau}^{\mu\nu}\), which can be derived at any PN order from the lower-order expressions of \(\bar{h}^{\mu\nu}\). The IR behavior of the integrals entering the propagator is naturally affected by the choice of regularization. In general, the PN-expanded propagator in (3.2) has no reason to commute with the d'Alembertian operator \(\Box\). This is the root of the appearance of the IR commutators in the PN metric, which are defined as
\[\mathcal{C}\big{\{}F\big{\}}\equiv\Big{[}\mathrm{FP}\,\overline{\Box}_{ \mathrm{ret}}^{-1},\Box\Big{]}F=\mathrm{FP}\,\overline{\Box}_{\mathrm{ret}}^ {-1}\bigg{[}\bigg{(}\frac{r}{r_{0}}\bigg{)}^{B}\Box F\bigg{]}-F\,, \tag{3.3}\]
where \(F\) is a regular (smooth) function in the near zone but typically diverges at spatial infinity, when \(r\to+\infty\); see (3.11) below.
In order to simplify the resolution of the Einstein field equations (3.1), we parametrize the PN metric by means of some PN "potentials", obeying flat space-time wave equations in \(d\) dimensions
\[\Box P=S\,, \tag{3.4}\]
where \(S\) is composed of a compact-supported sector (proportional to Dirac distributions in the case of point particles), and a non-compact one (composed of products of derivatives of lower-order PN potentials). We refer to App. A of [91] for complete definitions of the PN potentials. Now those potentials are constructed by means of the operator \(\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}\), _i.e._, they are given, by definition, as
\[P\equiv\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}S\,, \tag{3.5}\]
which accounts for the first term on the right-hand side of (3.2). The second term in (3.2) is non-local in time and corresponds to radiation modes that are accounted for separately [81].
When parametrizing the PN metric in this manner, it is very handy to recognize products of lower-order potentials. For instance, if some relevant combination of the field equations (3.1) contains terms of the form \(\partial_{i}P\partial_{i}P\) at some \(n\)PN order,4
Footnote 4: For convenience we define \(h^{00ii}\equiv\frac{2}{(d-1)}[(d-2)h^{00}+h^{ii}]\) in \(d\) dimensions.
\[\Box\bar{h}_{(n)}^{00ii} =\cdots+\partial_{i}P\partial_{i}P+\cdots\,, \tag{3.6}\]
then it can be solved using the PN-expanded retarded propagator \(\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}\), as
\[\bar{h}_{(n)}^{00ii} \equiv\cdots+\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}\big{\{} \partial_{i}P\partial_{i}P\big{\}}+\cdots=\cdots+\text{FP}\,\overline{\Box}_{ \text{ret}}^{-1}\,\Big{\{}\frac{1}{2}\Box\left(P^{2}\right)-P\Box P\Big{\}}+\cdots\] \[=\cdots+\frac{P^{2}}{2}-\text{FP}\,\overline{\Box}_{\text{ret}}^ {-1}\big{\{}PS\big{\}}+\frac{1}{2}\mathcal{C}\{P^{2}\}+\cdots\,, \tag{3.7}\]
where we have used Eq. (3.4) and neglected the higher PN order term \((\partial_{t}P)^{2}/c^{2}\) in the second curly brackets of the first line. Because solving \(\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}\{PS\}\) is much more tractable than the original \(\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}\{\partial_{i}P\partial_{i}P\}\), the operation described by Eq. (3.7) constitutes a real improvement. As shown later on, the commutator left in Eq. (3.7) does not vanish in general. The expression of the 4PN metric in terms of those commutators is displayed in App. B.
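The manipulation in (3.7) rests on the elementary identity \(\partial_{i}P\,\partial_{i}P=\tfrac{1}{2}\Box(P^{2})-P\,\Box P+c^{-2}(\partial_{t}P)^{2}\), whose last term is precisely the 1PN-suppressed piece neglected above. A minimal symbolic check (our sketch, not the paper's code):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
P = sp.Function('P')(t, x, y, z)

def box(f):
    # flat d'Alembertian: □ = Δ - c^{-2} ∂_t²
    return sum(sp.diff(f, s, 2) for s in (x, y, z)) - sp.diff(f, t, 2)/c**2

grad_sq = sum(sp.diff(P, s)**2 for s in (x, y, z))
rhs = sp.Rational(1, 2)*box(P**2) - P*box(P) + sp.diff(P, t)**2/c**2
assert sp.simplify(rhs - grad_sq) == 0
```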
While, formally, the commutators appear at a relative 1PN order in the metric [see Eqs. (17)], they effectively only contribute starting at 4PN order, as shown in Sec. III.3. Therefore, the only source multipole moment to be affected is the 4PN source mass quadrupole. Moreover, the 4PN contribution in the metric arises as a mere function of time in \(\bar{g}_{00}\) only, through the combination \(\bar{h}^{00ii}\). Since it is the spatial gradient \(\partial_{i}\bar{g}_{00}\) that contributes to the equations of motion, the corresponding contribution vanishes. More generally, following the "\(n+2\) method" [79] to determine the PN order of the metric needed for insertion in the Fokker Lagrangian, we see that the commutators play no role in the derivation of the conservative equations of motion at 4PN order.
### Expression of the commutators
We immediately see from (3.3) that the commutator must be a homogeneous solution of the wave equation. Furthermore, for the IR commutator, which depends on the far-zone behavior \(r\to+\infty\), that homogeneous solution must be regular in the near zone, when \(r\to 0\). These facts constrain its form: given the function \(F\), there exists a set \(\{\hat{g}_{L}\}\) of (yet unspecified) functions of time only, such that
\[\mathcal{C}\big{\{}F\big{\}}=\sum_{\ell=0}^{+\infty}\frac{(-)^{\ell}}{\ell!} \sum_{j=0}^{+\infty}\frac{1}{c^{2j}}\Delta^{-j}\hat{x}_{L}\,\hat{g}_{L}^{(2j) }(t)\,, \tag{3.8}\]
where we recall that the inverse Laplacians are defined in Eq. (33). In order to explicitly determine the functions \(\hat{g}_{L}\), we work out the propagator applied to a generic source \(S(\mathbf{x},t)\):
\[\text{FP}\,\overline{\Box}_{\text{ret}}^{-1}S=-\frac{\tilde{k}}{4\pi}\,\underset {B=0}{\mathrm{FP}}\sum_{k\geqslant 0}\int\!\mathrm{d}^{d}\mathbf{x}^{ \prime}\left(\frac{r^{\prime}}{r_{0}}\right)^{B}\frac{|\mathbf{x}-\mathbf{x}^ {\prime}|^{2k}}{2^{2k}k!\,c^{2k}}\bigg{[}\frac{\Gamma\left(2-\frac{d}{2} \right)}{\Gamma\left(k+2-\frac{d}{2}\right)}\frac{S^{(2k)}(\mathbf{x}^{ \prime},t)}{|\mathbf{x}-\mathbf{x}^{\prime}|^{d-2}}\]
\[-\frac{\sqrt{\pi}}{2\Gamma\left(\frac{5-d}{2}\right)\Gamma\left(k+\frac{d}{2} \right)}\int_{0}^{+\infty}\!\!\mathrm{d}\tau\,\frac{\tau^{3-d}}{c^{d-2}}S^{(2k +2)}(\mathbf{x}^{\prime},t-\tau)\bigg{]}\,. \tag{3.9}\]
This expression comes from the near-zone expansion of the homogeneous solution of the wave equation in \(d\) dimensions. The two terms in (3.9) represent respectively the even and odd parity parts of this expansion, as given by Eqs. (A7) and (A8) in [81]. Note that the odd parity part involves a non-local in time integral, shown in the second term of (3.9). The PN propagator (3.9) corresponds exactly to the prescription which is required for the first term in Eq. (3.2), _i.e._, where all retardations of the inverse d'Alembertian operator are PN expanded. In three dimensions, this prescription recovers Eqs. (3.4) and (3.7) of [116]. In turn, the prescription ensures the correct matching of the PN metric to the MPM metric in the exterior zone.
When computing the commutator \(\mathcal{C}\{F\}\) with Eqs. (3.3) and (3.9), we may transform the d'Alembertian operator inside the source term by means of appropriate integrations by parts, which yields for the commutator a finite part integral with a global factor \(B\) appearing in the source:
\[\mathcal{C}\big{\{}F\big{\}}=\underset{B=0}{\mathrm{FP}}\,\overline{\Box}_{ \mathrm{ret}}^{-1}\!\bigg{[}B\left(\frac{r}{r_{0}}\right)^{B}\left(-\frac{B-d }{r^{2}}F-\frac{2}{r}\partial_{r}F\right)\bigg{]}\,. \tag{3.10}\]
Note the similarity with the expression of the quantities \(X_{n}^{\mu\nu}\) and \(Y_{n}^{\mu\nu}\) in (2.24). Indeed, the latter have been formally defined as commutators (see Eq. (3.20) of [97]), but, in contrast to the "IR" commutators (3.3) or (3.10), \(X_{n}^{\mu\nu}\) and \(Y_{n}^{\mu\nu}\) are ultraviolet ("UV") commutators since they depend on the near zone behavior of the source term, when \(r\to 0\).
We now use the expression (3.10) of the commutator. As we said, because of the explicit factor \(B\) in the source, the commutator depends only on the IR behavior of the function \(F\), _i.e._, when \(r\to+\infty\) with \(t=\mathrm{const}\) (spatial infinity). We expand the function in \(d=3+\varepsilon\) dimensions as
\[F(\mathbf{x},t)=\sum_{\begin{subarray}{c}p_{0}<p\\ q_{0}\leqslant q\leqslant q_{1}\end{subarray}}\frac{f_{p,q}\left(\mathbf{n},t \right)}{r^{p+q\varepsilon}}\,, \tag{3.11}\]
where the sums extend over all multipolarities \(\ell\), and formally over all values of \(p\) larger than some (generally negative) integer \(p_{0}\), and over a finite set of values of \(q\) for each given \(p\). We inject this expansion into the commutator (3.10) and employ the explicit expression (3.9) for the PN-expanded propagator. Consistently with the IR limit, we must expand the factors \(|\mathbf{x}-\mathbf{x}^{\prime}|^{\alpha}\) in (3.9) (where \(\alpha\) depends on the parity) when \(|\mathbf{x}^{\prime}|\to+\infty\). Actually this expansion is valid as soon as \(r^{\prime}=|\mathbf{x}^{\prime}|>r=|\mathbf{x}|\) and reads, using the STF decomposition and the notation (2.33),5
Footnote 5: Eq. (3.12) is equivalent to the more familiar (but less convenient for our purpose) expansion in terms of Gegenbauer polynomials \(C_{\ell}^{\gamma}(x)\),
\[|\mathbf{x}^{\prime}-\mathbf{x}|^{\alpha}=r^{\prime\alpha}\sum_{\ell=0}^{+ \infty}C_{\ell}^{-\frac{\alpha}{2}}(\mathbf{n}\cdot\mathbf{n}^{\prime})\left( \frac{r}{r^{\prime}}\right)^{\ell}\,.\]
\[|\mathbf{x}^{\prime}-\mathbf{x}|^{\alpha}=\sum_{\ell=0}^{+\infty}\frac{(-)^{ \ell}}{\ell!}\sum_{j=0}^{+\infty}\Delta^{-j}\hat{x}_{L}\,\Delta^{\prime j} \hat{\partial}_{L}^{\prime}r^{\prime\alpha}\,, \tag{3.12}\]
the object \(\Delta^{\prime j}\hat{\partial}_{L}^{\prime}r^{\prime\alpha}\) being explicitly known as
\[\Delta^{\prime j}\hat{\partial}_{L}^{\prime}r^{\prime\alpha}=2^{\ell+2j}\frac {\Gamma\big{(}\frac{\alpha}{2}+1\big{)}\Gamma\big{(}\frac{\alpha+d}{2}\big{)} }{\Gamma\big{(}\frac{\alpha}{2}+1-\ell-j\big{)}\Gamma\big{(}\frac{\alpha+d}{ 2}-j\big{)}}\hat{x}_{L}^{\prime}r^{\prime\alpha-2\ell-2j}\,. \tag{3.13}\]
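For instance, the case \(\ell=1\), \(j=1\), \(d=3\) of (3.13) can be verified symbolically (a minimal sketch of ours):

```python
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha')
r2 = x**2 + y**2 + z**2          # r², with r^α written as (r²)^{α/2}

def lap(f):
    return sum(sp.diff(f, s, 2) for s in (x, y, z))

# LHS of Eq. (3.13) for l = 1, j = 1, d = 3:  Δ ∂_x r^α
lhs = lap(sp.diff(r2**(a/2), x))

# RHS of Eq. (3.13): 2^{l+2j} Γ(α/2+1)Γ((α+d)/2)/[Γ(α/2+1-l-j)Γ((α+d)/2-j)] x r^{α-2l-2j}
l, j, d = 1, 1, 3
coeff = sp.gammasimp(2**(l + 2*j) * sp.gamma(a/2 + 1) * sp.gamma(sp.Rational(d, 2) + a/2)
                     / (sp.gamma(a/2 + 1 - l - j) * sp.gamma(sp.Rational(d, 2) + a/2 - j)))
rhs = coeff * x * r2**((a - 2*l - 2*j)/2)

assert sp.simplify(lhs - rhs) == 0   # coeff reduces to α(α+1)(α-2)
```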
With the latter formulas at hand, it is straightforward to obtain the functions \(\hat{g}_{L}(t)\) parametrizing the commutator in (3.8) in terms of the expansion coefficients in (3.11). We find
\[\hat{g}_{L}(t)=\sum_{k=0}^{+\infty}\frac{4k-2\ell-d+2}{2^{2k}k!\, c^{2k}(d-2)}\Bigg{[}\frac{2^{\ell}\,\Gamma\left(2-\frac{d}{2}\right)}{\Gamma\left(k+2- \ell-\frac{d}{2}\right)}\,\hat{f}_{2k-\ell,0}^{L\left(2k\right)}(t) \tag{3.14}\] \[-\frac{\sqrt{\pi}}{2^{\ell+1}\Gamma\left(\frac{5-d}{2}\right) \Gamma\left(k+\ell+\frac{d}{2}\right)}\frac{4k+2\ell+d-2}{4k-2\ell-d+2}\int_{ 0}^{+\infty}\!\!\mathrm{d}\tau\,\frac{(c\,\tau)^{3-d}}{c^{2\ell+1}}\,\hat{f}_ {2k+\ell+1,1}^{L\left(2k+2\ell+2\right)}(t-\tau)\Bigg{]}\,,\]
where we define the angular average over the \((d-1)\)-dimensional sphere
\[\hat{f}_{p,q}^{L}(t)\equiv\int\frac{\mathrm{d}\Omega_{d-1}}{\Omega_{d-1}}\, \hat{n}^{L}f_{p,q}(\mathbf{n},t)\quad\text{ with }\quad\Omega_{d-1}=\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}\,. \tag{3.15}\]
Note the non-local character of the relation (3.14) in \(d\) dimensions, coming from the odd parity part of the PN retarded integral (3.9). Next, we see that only specific values of \(q\) contribute: the \(q=0\) terms induce an even sector, and the \(q=1\) terms, an odd one. Finally, it is important to remark that the formula (3.14) cannot induce poles \(\propto(d-3)^{-1}\) by itself. Therefore, we can safely work in the \(d=3\) limit, as long as the coefficients \(\hat{f}^{L}_{p,q}\) do not have poles.
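As a simple illustration of (3.8) and (3.14) (a toy example of ours, not taken from the paper), let \(F=H(t)\) be a mere function of time. Its only non-zero coefficient in the expansion (3.11) is \(f_{0,0}=H\), so that (3.14) reduces to \(\hat{g}=-H\) in the \(d\to 3\) limit, and (3.8) yields
\[\mathcal{C}\big{\{}H(t)\big{\}}=-H(t)-\frac{r^{2}}{6c^{2}}\ddot{H}(t)+\mathcal{O}\left(\frac{1}{c^{4}}\right)\,,\]
in agreement with the direct evaluation of the definition (3.3), since \(\mathrm{FP}\,\Delta^{-1}\big{[}(r/r_{0})^{B}(-\ddot{H}/c^{2})\big{]}=-r^{2}\ddot{H}/(6c^{2})\) at \(B=0\).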
### Application to the source mass quadrupole
Let us apply the formulas (3.8) and (3.14) to the case of the source mass quadrupole at 4PN order (the IR commutators do not affect the computation of the other required moments). The fact that Eq. (3.14) selects only terms with \(q=0\) or \(q=1\) drastically reduces the number of interactions that can effectively play a role at 4PN. To lowest orders at least, one can show that the even PN sector of the potentials bears \(q\geqslant 1\), and the odd one, \(q=0\). Therefore, the only possible interactions at lowest orders are either even-odd or odd-odd couplings, as even-even ones will have \(q\geqslant 2\). In addition, the odd part of both the lowest-order potentials \(V\) and \(V_{i}\) (defined in App. A of [91]) starts at 1.5PN (more precisely, the 0.5PN term of those two potentials vanishes in the \(d\to 3\) limit). Investigating the metric (14), one can see that only two interactions can lead to corrections with 4PN accuracy: \(V^{2}\) and \(V\hat{W}\), where we denote \(\hat{W}\equiv\hat{W}_{kk}\).6 The commutators affect the quantities \(\Sigma\) and \(\tilde{\mu}_{1}\), defined in Eqs. (2.10) and (2.17) of [91], as
Footnote 6: The near-zone potential \(\hat{W}_{ij}\) and its trace \(\hat{W}\) (see App. A of [91]) should not be confused with the \(\ell\)-th gauge moment \(\mathrm{W}_{L}\) and \(\mathrm{W}\) corresponding to \(\ell=0\), as defined in (2.22)–(2.23).
\[\Delta\Sigma=\frac{4(d-1)\sigma}{(d-2)^{2}c^{4}}\!\left(\mathcal{ C}\big{\{}V^{2}\big{\}}+\frac{2(d-2)}{(d-1)c^{2}}\mathcal{C}\big{\{}V\hat{W} \big{\}}\right)+\mathcal{O}\left(\frac{1}{c^{10}}\right)\,, \tag{3.16a}\] \[\Delta\tilde{\mu}_{1}=\frac{2(d-4)}{(d-2)c^{4}}\!\left( \mathcal{C}\big{\{}V^{2}\big{\}}+\frac{2(d-2)}{(d-1)c^{2}}\mathcal{C}\big{\{} V\hat{W}\big{\}}\right)\!_{1}m_{1}+\mathcal{O}\left(\frac{1}{c^{10}}\right)\,, \tag{3.16b}\]
where \(\sigma\) is the compact-supported expression defined in Eq. (2.15) of [91], \(m_{1}\) the mass of particle 1, and where the label 1 indicates that the quantities have to be evaluated and regularized at its location. This induces a correction to the source quadrupole
\[\Delta\mathrm{I}_{ij}=\frac{d(d-1)}{(d-2)^{2}c^{4}}\int\!\mathrm{d}^{d}\mathbf{ x}\!\left(\mathcal{C}\big{\{}V^{2}\big{\}}+\frac{2(d-2)}{(d-1)c^{2}}\mathcal{C} \big{\{}V\hat{W}\big{\}}\right)\delta_{1}m_{1}\hat{y}_{1}^{ij}+(1\leftrightarrow 2 )+\mathcal{O}\left(\frac{1}{c^{10}}\right)\,, \tag{3.17}\]
where \(\delta_{1}=\delta^{(d)}[\mathbf{x}-\mathbf{y}_{1}]\) is the \(d\)-dimensional Dirac distribution at the position \(\mathbf{y}_{1}\) of the particle 1.
The commutator of the interaction \(V^{2}\) is quite easy to compute. Indeed, the potential \(V\) is known in the whole space to a high PN order, well beyond the required precision. One can straightforwardly expand it as in Eq. (3.11) and read off the coefficients \(\hat{f}^{L}_{p,q}\). As discussed previously, the 0.5PN term of \(V\) vanishes in the \(d=3\) limit. Thus, only the (even, 0PN)-(odd, 1.5PN) coupling, having \(q=1\), can enter at 4PN. Working it out explicitly, we find
\[\mathcal{C}\big{\{}V^{2}\big{\}}=\frac{G^{2}m}{c^{4}}\left[2\tilde{ \mu}^{(2)}c^{2}+\frac{1}{3}I^{(4)}\right]+\mathcal{O}\left(\varepsilon\right)\,, \tag{3.18}\]
where \(m=m_{1}+m_{2}\) and \(\tilde{\mu}(t)=\int\mathrm{d}^{3}\mathbf{x}^{\prime}\,\sigma(\mathbf{x}^{ \prime},t)\) reduces to a constant at leading order for arbitrary matter systems, so that \(\tilde{\mu}(t)=m+\mathcal{O}(c^{-2})\) with \(m\) constant, while \(I=\int\mathrm{d}^{3}\mathbf{x}^{\prime}\,\sigma(\mathbf{x}^{\prime},t)\,r^{ \prime 2}\) represents the effective moment of inertia. For compact binaries, those quantities read explicitly
\[\tilde{\mu}(t) =\tilde{\mu}_{1}(t)+\tilde{\mu}_{2}(t)=m_{1}\left[1+\frac{1}{c^{2 }}\left(\frac{3}{2}\mathbf{v}_{1}^{2}-\frac{Gm_{2}}{r_{12}}\right)\right]+(1 \leftrightarrow 2)+\mathcal{O}\left(\frac{1}{c^{4}}\right)\,, \tag{3.19}\] \[I(t) =m_{1}\,\mathbf{y}_{1}^{2}+m_{2}\,\mathbf{y}_{2}^{2}+\mathcal{O}\left( \frac{1}{c^{2}}\right)\,, \tag{3.20}\]
with \(\mathbf{v}_{1}=\mathrm{d}\mathbf{y}_{1}/\mathrm{d}t\) and \(r_{12}=|\mathbf{y}_{1}-\mathbf{y}_{2}|\). On the other hand, the \(d\)-dimensional expression of the potential \(\hat{W}\) is not known in the whole space (see App. C of [113] for more details on the computation). Using an integration by parts, one can rewrite the wave equation it obeys, Eq. (A4d) of [91], as
\[\Box\hat{W}=\frac{8\pi G}{d-2}\,\sigma_{kk}-\frac{d-1}{2(d-2)}\partial_{i}V \partial_{i}V=\frac{8\pi G}{d-2}\,\sigma_{kk}-\frac{d-1}{4(d-2)}\Big{(}\Box V^{ 2}+\frac{2}{c^{2}}(\partial_{t}V)^{2}+8\pi GV\sigma\Big{)}\,, \tag{3.21}\]
where \(\sigma_{kk}\) and \(\sigma\) are the compact-supported expressions defined in Eq. (2.15) of [91]. The compact-supported part of the source is easily integrated over the whole space, using methods described _e.g._ in [91]. It contains an even sector with \(q=1\) that cannot contribute (we recall that, up to 1PN, \(V\) bears \(q=1\)), and an odd, 0.5PN sector with \(q=0\) that will contribute, when coupled to the expansion of \(V\). As for the non-compact sector, it reads
\[\hat{W}^{\rm NC}=-\frac{d-1}{4(d-2)}{\rm FP}\,\overline{\square}_{\rm ret}^{-1} \Big{\{}\square V^{2}+\frac{2}{c^{2}}(\partial_{t}V)^{2}\Big{\}}=-\frac{d-1}{4 (d-2)}\bigg{(}V^{2}+\frac{2}{c^{2}}\,{\rm FP}\,\overline{\square}_{\rm ret}^{-1}\Big{\{}(\partial_{t}V)^{2}\Big{\}}+\mathcal{C}\big{\{}V^{2 }\big{\}}\bigg{)}\,. \tag{3.22}\]
The first two terms will have \(q=2\) up to 1PN and 2PN orders, respectively, and we just proved that the third term is of order 2PN. Therefore, \(\hat{W}^{\rm NC}\) does not contribute to the IR commutator at the required order. Applying the formula (3.14), one finds the nice relation
\[\mathcal{C}\big{\{}V\hat{W}\big{\}}=-\frac{3\,c^{2}}{4}\mathcal{C}\big{\{}V^{ 2}\big{\}}+\mathcal{O}\left(\varepsilon\right)+\mathcal{O}\left(\frac{1}{c^{ 4}}\right)\,. \tag{3.23}\]
With the two explicit expressions (3.18) and (3.23) at hand, one can compute the resulting correction to the source mass quadrupole \(\Delta\mathrm{I}_{ij}\) in (3.17). In the CoM frame, we obtain
\[\Delta\mathrm{I}_{ij}=-\frac{4G^{3}m^{4}\nu^{2}}{r^{3}\,c^{8}}\bigg{[}\mathbf{v}^{ 2}-3\dot{r}^{2}-\frac{Gm}{r}\bigg{]}x_{\langle i}x_{j\rangle}+\mathcal{O} \left(\frac{1}{c^{10}}\right)\,. \tag{3.24}\]
In the case of quasi-circular orbits, \(\mathbf{v}^{2}=Gm/r+\mathcal{O}(c^{-2})\) and \(\dot{r}=\mathcal{O}(c^{-5})\), so this correction plays no role. Note also that the two corrections to the metric (3.18) and (3.23) are functions of time only, and so cannot contribute to the equations of motion.
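Explicitly (a one-line check of ours), inserting these circular-orbit relations into the square bracket of (3.24) gives
\[\mathbf{v}^{2}-3\dot{r}^{2}-\frac{Gm}{r}=\mathcal{O}\left(\frac{1}{c^{2}}\right)\,,\qquad\text{hence}\qquad\Delta\mathrm{I}_{ij}\Big{|}_{\rm circ}=\mathcal{O}\left(\frac{1}{c^{10}}\right)\,,\]
which is beyond the 4PN accuracy of the source quadrupole.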
## IV Source multipole moments for circular orbits
Having now expressed the radiative moments in terms of the canonical moments in Sec. II.1, and the canonical moments in terms of the source and gauge moments in Sec. II.3, we now review the explicit expression of the source moments for compact binary systems. Note that the gauge moments (entering the relation (2.39) as well as in Sec. III.B of [95]) are required only at Newtonian order, except for W, needed at 1PN. They are easily computed using Eq. (125) of [24], and their expressions on quasi-circular orbits are displayed in Eq. (57) of [17]. The most crucial moment is the source quadrupole moment \(\mathrm{I}_{ij}\), needed only at 4PN order. Indeed, its 4.5PN terms, due to 2PN relative radiation-reaction corrections, cannot involve non-local integrals, and will disappear from the flux.
### Source mass quadrupole at the 4PN order
A first computation, using the Hadamard regularization for the IR divergences and dimensional regularization for the UV ones, was performed in [91]. However, in order to be consistent with the equations of motion [79; 80; 81; 82], which used dimensional regularization both in the UV and IR sectors, it should be computed with dimensional regularization also for the IR divergences. This computation was thus completed in [92], and the expression of the source quadrupole moment, obtained with full dimensional regularization, has the expected feature of exhibiting poles in \(\varepsilon=d-3\). As reviewed in Sec. II.2, those poles are crucial to cancel the divergences linked with the \(d\)-dimensional computation of the radiative moments, obtained in [93]. Indeed, the source moments are not observables _per se_, but the radiative moments are. Therefore, only the latter have to be finite in the \(\varepsilon\to 0\) limit. However, as already mentioned, for the sake of computational simplicity, it was deemed possible to introduce the notion of a "renormalized" source quadrupole, defined by Eq. (6.2) of [93]. This renormalized quantity is nothing but the sum of the \(d\)-dimensional source quadrupole and the corrections (2.18a), arising from the radiative/canonical relation (we recall that we consider the corrections (2.37) as being part of the canonical/source relation). The renormalized quadrupole, by construction, has no poles in \(\varepsilon\), and when injected into the three-dimensional MPM algorithm, yields the correct radiative quadrupole by definition. Last, but not least, up to 3.5PN order, the corrections due to the IR dimensional regularization exactly cancel the ones due to the renormalization, even out of the CoM frame, as proven in [93]. Thus, up to 3.5PN order, the renormalized source quadrupole coincides with the one computed with Hadamard regularization in the IR, which is why those subtleties did not affect previous computations of the flux. However, this equivalence is no longer valid at 4PN order, hence the crucial need for the implementation of dimensional regularization and the "renormalization" described above.
As the correction due to the IR commutators discussed in Sec. III vanishes on quasi-circular orbits, the expression of the renormalized source quadrupole on quasi-circular orbits remains identical to the one displayed in Eq. (6.11) of [93], namely
\[\mathrm{I}_{ij}=m\nu\,\left(A\,x_{\langle i}x_{j\rangle}+B\,\frac{r^{2}}{c^{2}}v _{\langle i}v_{j\rangle}+\frac{G^{2}m^{2}\nu}{c^{5}r}\,C\,x_{\langle i}v_{j \rangle}\right)+\mathcal{O}\left(\frac{1}{c^{9}}\right)\,, \tag{4.1}\]
where, introducing the PN parameter \(\gamma\equiv\frac{Gm}{rc^{2}}\), the coefficients are given by
\[A = 1+\gamma\biggl{(}-\frac{1}{42}-\frac{13}{14}\nu\biggr{)}+\gamma ^{2}\biggl{(}-\frac{461}{1512}-\frac{18395}{1512}\nu-\frac{241}{1512}\nu^{2} \biggr{)} \tag{4.2a}\] \[+\gamma^{3}\biggl{(}\frac{395899}{13200}-\frac{428}{105}\ln \biggl{(}\frac{r}{r_{0}}\biggr{)}+\biggl{[}\frac{3304319}{166320}-\frac{44}{3} \ln\biggl{(}\frac{r}{r_{0}^{\prime}}\biggr{)}\biggr{]}\nu+\frac{162539}{16632} \nu^{2}+\frac{2351}{33264}\nu^{3}\biggr{)}\] \[+\gamma^{4}\biggl{(}-\frac{1067041075909}{12713500800}+\frac{31886 }{2205}\ln\biggl{(}\frac{r}{r_{0}}\biggr{)}\] \[\qquad\qquad+\biggl{[}-\frac{85244498897}{470870400}-\frac{2783}{ 1792}\pi^{2}-\frac{64}{7}\ln\left(16\gamma\mathrm{e}^{2\gamma_{E}}\right)- \frac{10886}{735}\ln\biggl{(}\frac{r}{r_{0}}\biggr{)}+\frac{8495}{63}\ln \biggl{(}\frac{r}{r_{0}^{\prime}}\biggr{)}\biggr{]}\nu\] \[\qquad\qquad+\biggl{[}\frac{171906563}{4484480}+\frac{44909}{2688 }\pi^{2}-\frac{4897}{21}\ln\biggl{(}\frac{r}{r_{0}^{\prime}}\biggr{)}\biggr{]} \nu^{2}-\frac{22063949}{5189184}\nu^{3}+\frac{71131}{314496}\nu^{4}\biggr{)}\,,\] \[B = \frac{11}{21}-\frac{11}{7}\nu+\gamma\biggl{(}\frac{1607}{378}- \frac{1681}{378}\nu+\frac{229}{378}\nu^{2}\biggr{)}\] (4.2b) \[+\gamma^{2}\biggl{(}-\frac{357761}{19800}+\frac{428}{105}\ln \biggl{(}\frac{r}{r_{0}}\biggr{)}-\frac{92339}{5544}\nu+\frac{35759}{924}\nu^{ 2}+\frac{457}{5544}\nu^{3}\biggr{)}\] \[+\gamma^{3}\biggl{(}\frac{23006898527}{1589187600}-\frac{4922}{22 05}\ln\biggl{(}\frac{r}{r_{0}}\biggr{)}\] \[\qquad\qquad+\biggl{[}\frac{8431514969}{529729200}+\frac{143}{192 }\pi^{2}-\frac{32}{7}\ln\left(16\gamma\mathrm{e}^{2\gamma_{E}}\right)-\frac{ 1266}{49}\ln\biggl{(}\frac{r}{r_{0}}\biggr{)}-\frac{968}{63}\ln\biggl{(}\frac{ r}{r_{0}^{\prime}}\biggr{)}\biggr{]}\nu\] \[\qquad\qquad+\biggl{[}\frac{351838141}{5045040}-\frac{41}{24}\pi ^{2}+\frac{968}{21}\ln\biggl{(}\frac{r}{r_{0}^{\prime}}\biggr{)}\biggr{]}\nu^{ 2}-\frac{1774615}{81081}\nu^{3}-\frac{3053}{432432}\nu^{4}\biggr{)}\,,\] \[C = \frac{48}{7}+\gamma\left(-\frac{4096}{315}-\frac{24512}{945}\nu \right)-\frac{32}{7}\pi\,\gamma^{3/2}\,. \tag{4.2c}\]
The coefficients \(A\) and \(B\) represent the conservative part of the quadrupole, while \(C\) is due to the radiation reaction dissipative effects. Note that, in addition to the already-encountered scale \(r_{0}\), the expression of the quadrupole involves a scale \(r_{0}^{\prime}\), associated with the UV regularization (see the footnote 10 of [91] for more details).
As discussed in [92], we found that the source quadrupole moment is not a local quantity at the 4PN order anymore, as it contains a non-local tail integral, given by Eq. (6.5) in [93]. The 4PN logarithms \(\ln(16\gamma\mathrm{e}^{2\gamma_{E}})\) in \(A\) and \(B\) are due to the conservative part of this non-local tail term in the mass quadrupole, and the coefficient \(-\frac{32}{7}\pi\gamma^{3/2}\) in \(C\), to the corresponding dissipative part of the tail term.
Finally, as reported in App. A, the general expression of the source mass quadrupole moment for generic orbits has been conclusively tested in the so-called boosted Schwarzschild black hole limit.
### Higher-order source moments
As expected from the behavior of the associated radiative moment (2.18b), poles arise when performing the IR dimensional regularization of the source mass octupole \(\mathrm{I}_{ijk}\). Nevertheless, the renormalized source mass octupole moment exactly coincides at 3PN order with the one computed with Hadamard regularization in the IR, see [93] for discussion. It reads [95]
\[\mathrm{I}_{ijk}=-\nu\,m\,\Delta\biggl{\{}D\,x_{\langle i}x_{j}x_{k\rangle}+E\, \frac{r}{c}\,v_{\langle i}x_{j}x_{k\rangle}+F\,\frac{r^{2}}{c^{2}}\,v_{\langle i }v_{j}x_{k\rangle}+G\,\frac{r^{3}}{c^{3}}\,v_{\langle i}v_{j}v_{k\rangle} \biggr{\}}+\mathcal{O}\left(\frac{1}{c^{7}}\right)\,, \tag{4.3}\]
where \(\Delta\equiv(m_{1}-m_{2})/m\), and the coefficients are
\[D = 1-\gamma\nu+\gamma^{2}\left(-\frac{139}{330}-\frac{11923}{660}\nu-\frac{29}{110}\nu^{2}\right)\,, \tag{4.4a}\] \[E = \frac{196}{15}\gamma^{2}\nu\,, \tag{4.4b}\] \[F = 1-2\nu+\gamma\left(\frac{1066}{165}-\frac{1433}{330}\nu+\frac{21}{55}\nu^{2}\right)+\gamma^{2}\left(-\frac{1130201}{48510}-\frac{989}{33}\nu+\frac{20359}{330}\nu^{2}-\frac{37}{198}\nu^{3}+\frac{52}{7}\ln\!\left(\frac{r}{r_{0}}\right)\right)\,, \tag{4.4c}\] \[G = 0\,. \tag{4.4d}\]
As for the renormalized current quadrupole \(\mathrm{J}_{ij}\), it also coincides at 3PN order with the one computed with Hadamard regularization in the IR [93]. It comes [94]
\[\mathrm{J}_{ij}=-\nu m\Delta\left[H\,L^{\langle i}x^{j\rangle}+K\,\frac{Gm}{c ^{3}}\,L^{\langle i}v^{j\rangle}\right]+\mathcal{O}\left(\frac{1}{c^{7}}\right)\,, \tag{4.5}\]
where we denote \(L^{i}\equiv\epsilon_{ijk}x^{j}v^{k}\), and where
\[H = 1+\gamma\left(\frac{67}{28}-\frac{2}{7}\nu\right)+\gamma^{2} \left(\frac{13}{9}-\frac{4651}{252}\nu-\frac{\nu^{2}}{168}\right) \tag{4.6a}\] \[+\gamma^{3}\left(\frac{2301023}{415800}-\frac{214}{105}\ln\left( \frac{r}{r_{0}}\right)+\left[-\frac{243853}{9240}+\frac{123}{128}\pi^{2}-22\ln \left(\frac{r}{r_{0}^{\prime}}\right)\right]\nu+\frac{44995}{5544}\nu^{2}+ \frac{599}{16632}\nu^{3}\right)\,,\] \[K = \frac{188}{35}\nu\,\gamma\,. \tag{4.6b}\]
The remaining required source moments (mass hexadecapole at 2PN, current octupole at 2PN, _etc._) are easy to compute as no regularization subtleties arise. Their expressions on quasi-circular orbits are displayed _e.g._ in Sec. 9.1 of [24]. Note that, as the first non-local feature cannot appear at a lower order than 4PN, all those higher-order source moments are instantaneous up to 3.5PN order. Therefore, they cannot contribute to the 4.5PN term of the quasi-circular flux.
## V Toolbox of integration formulas
This technical section presents some miscellaneous material that was used to perform the explicit computations of the gravitational wave flux to 4.5PN and the \((2,2)\) mode to 4PN, which will be reported in Sec. VI.
### 4PN equations of motion for circular orbits
First, the relation between the radiative and canonical moments, as well as the expression of the flux, involve temporal derivatives. We thus need the expression of the equations of motion for quasi-circular orbits at the 4PN order, including both conservative and dissipative terms. Note that the 4.5PN piece of the equations of motion [118; 119] is local and so, as already discussed, will not contribute to the 4.5PN flux. Recalling that \(\gamma=\frac{Gm}{rc^{2}}\), the acceleration for quasi-circular orbits reads [82]
\[\frac{\mathrm{d}v^{i}}{\mathrm{d}t}=-\omega^{2}x^{i}-\frac{32}{5}\frac{G^{3}m^ {3}\nu}{r^{4}c^{5}}\left[1+\left(-\frac{743}{336}-\frac{11}{4}\nu\right)\gamma +4\pi\gamma^{3/2}+\mathcal{O}\!\left(\gamma^{2}\right)\right]v^{i}\,. \tag{5.1}\]
The radiation-reaction part (proportional to \(v^{i}\)) includes 1PN and 1.5PN corrections beyond the leading 2.5PN effect. In particular the 1.5PN correction is due to tails. The conservative part of the equations of motion is specified for circular orbits by the orbital frequency, which is a generalization of Kepler's law valid through 4PN:
\[\omega^{2} = \frac{Gm}{r^{3}}\Bigg{\{}1+(-3+\nu)\gamma+\left(6+\frac{41}{4}\nu+\nu^{2}\right)\gamma^{2}+\left(-10+\left[-\frac{75707}{840}+\frac{41}{64}\pi^{2}+22\ln\left(\frac{r}{r_{0}^{\prime}}\right)\right]\nu+\frac{19}{2}\nu^{2}+\nu^{3}\right)\!\gamma^{3}\] \[\qquad+\left(15+\left[\frac{19644217}{33600}+\frac{163}{1024}\pi^{2}+\frac{256}{5}\gamma_{\rm E}+\frac{128}{5}\ln(16\gamma)-290\ln\left(\frac{r}{r_{0}^{\prime}}\right)\right]\nu+\left[\frac{44329}{336}-\frac{1907}{64}\pi^{2}+\frac{1168}{3}\ln\left(\frac{r}{r_{0}^{\prime}}\right)\right]\nu^{2}+\frac{51}{4}\nu^{3}+\nu^{4}\right)\!\gamma^{4}+\mathcal{O}\!\left(\gamma^{5}\right)\Bigg{\}}\,. \tag{5.2}\]
We recall that the scales \(r_{0}\) and \(r_{0}^{\prime}\) are respectively associated with IR and UV regularizations. Using the flux-balance equation as well as the generalized Kepler's law, we find the secular decay of orbital quantities at 1.5PN order
\[\dot{r} \equiv\frac{{\rm d}r}{{\rm d}t}=-\frac{64}{5}\frac{G^{3}m^{3}\nu} {r^{3}\,c^{5}}\left[1+\left(-\frac{1751}{336}-\frac{7}{4}\nu\right)\gamma+4 \pi\gamma^{3/2}+\mathcal{O}\!\left(\gamma^{2}\right)\right]\,, \tag{5.3a}\] \[\dot{\omega} \equiv\frac{{\rm d}\omega}{{\rm d}t}=\frac{96}{5}\frac{Gm\nu}{r^ {3}}\gamma^{5/2}\left[1+\left(-\frac{2591}{336}-\frac{11}{12}\nu\right)\gamma+ 4\pi\gamma^{3/2}+\mathcal{O}\!\left(\gamma^{2}\right)\right]\,. \tag{5.3b}\]
The relative 2PN order is also known [118; 119], but is not required for the present work. In the following, it will be important to distinguish between the orbital frequency \(\omega\) defined from the equations of motion by (5.2), and the GW half-frequency \(\Omega\) which is measured in the waves propagating in the far zone. We pose
\[y\equiv\left(\frac{Gm\omega}{c^{3}}\right)^{2/3}\,, \tag{5.4}\]
and distinguish it later from the PN parameter \(x\) defined from the GW half-frequency \(\Omega\), see Eq. (6.9).
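As an elementary illustration of how the generalized Kepler law is used in practice, the following sympy sketch (a minimal example, truncated at 2PN where no regularization scales appear) inverts the relation between \(\gamma\) and \(y\), a step needed whenever circular-orbit results are re-expressed in terms of the frequency:

```python
# Invert the 2PN truncation of the generalized Kepler law (5.2):
# (Gm*omega/c^3)^2 = gamma^3 * [1 + (-3+nu)*gamma + (6 + 41*nu/4 + nu^2)*gamma^2 + ...],
# hence y = gamma * [...]^(1/3); we revert the series to get gamma(y).
import sympy as sp

gam, y, nu = sp.symbols('gamma y nu', positive=True)

bracket = 1 + (-3 + nu)*gam + (6 + sp.Rational(41, 4)*nu + nu**2)*gam**2
y_of_gam = sp.series(gam*bracket**sp.Rational(1, 3), gam, 0, 4).removeO()

# ansatz for the reverted series: gamma = y + c2*y^2 + c3*y^3 + ...
c2, c3 = sp.symbols('c2 c3')
eq = sp.expand(sp.series(y_of_gam.subs(gam, y + c2*y**2 + c3*y**3), y, 0, 4).removeO() - y)
sol = sp.solve([eq.coeff(y, 2), eq.coeff(y, 3)], [c2, c3])
print(sol[c2], sp.simplify(sol[c3]))   # gamma = y*(1 + (1 - nu/3)*y + ...)
```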
### Post-adiabatic integration of the tail effect
In order to compute the tail integrals, for instance (2.5), one needs to specify the behavior of the compact binary's orbit in the remote past, as the effect is not localized in time, but integrates over the whole past history of the source. In the case of quasi-circular orbits, and up to 3.5PN precision in the multipoles, an adiabatic approximation (considering the orbital elements \(r\) and \(\omega\) to be constant in time) is sufficient, and one can follow the lines of [120], together with the integrals presented in the App. B of [121]. However, Eqs. (5.3) show that this adiabatic approximation is no longer valid at a relative 2.5PN precision. As the tail enters at 1.5PN order in the moments, the first "post-adiabatic" (PA) correction will affect the moments at the 4PN order, thus we need to properly evaluate it in order to consistently derive the 4PN flux and \((2,2)\) mode.
As shown in [120], the tail integrals on quasi-circular orbits reduce to elementary integrals of the type
\[\mathcal{I}_{a,n}(u)=\int_{0}^{+\infty}{\rm d}\tau\left[y(u-\tau)\right]^{a}{ \rm e}^{-{\rm i}n\,\phi(u-\tau)}\ln\left(\frac{\tau}{\tau_{0}}\right)\,, \tag{5.5}\]
where \(n\in\mathbb{N}^{\star}\) (the case \(n=0\) does not appear at 4PN), \(a\) is usually a rational fraction, and the constant \(\tau_{0}\) denotes either \(2r_{0}/c\) or \(2b_{0}/c\). The orbital phase \(\phi\), the orbital frequency \(\omega=\dot{\phi}\) and the parameter \(y=(\frac{Gm\omega}{c^{3}})^{2/3}\) are evaluated at any instant \(u-\tau\) in the past. Our strategy is to notice that the integral \(\mathcal{I}_{a,n}\) involves a fast oscillating exponential, which makes it amenable to a properly defined PA expansion. This section provides a general method valid at any PA order, which we then apply at first (1PA) order.
We first remark that the difference between the current phase \(\phi(u)\) and the phase \(\phi(u-\tau)\) in the past is of the order of the inverse of the radiation reaction scale, _i.e._, \(\phi(u)-\phi(u-\tau)\) scales as \(\mathcal{O}(c^{5})\), which is also obvious from Eq. (317) of [24]. To describe the radiation-reaction scale, we introduce a dimensionless adiabatic parameter at the current time \(u\), denoted by7
Footnote 7: This definition is intended to be valid at any PN order, and is different from the one adopted in previous work; see (4.8) in [120].
\[\xi(u)\equiv\frac{\dot{\omega}(u)}{\omega^{2}(u)}=\mathcal{O}\left(\frac{1}{c^ {5}}\right)\,. \tag{5.6}\]
Using the energy balance equation we have \(\xi=-\mathcal{F}(\omega)/[\omega^{2}{\rm d}E/{\rm d}\omega]\). We want to compute the integral (5.5) in the PA limit where \(\xi(u)\to 0\). To this end, let us pose
\[\phi(u)-\phi(u-\tau)=\frac{v}{\xi(u)}\,, \tag{5.7}\]
and change the integration variable from \(\tau\) to \(v\), so that the integral (5.5) becomes
\[{\cal I}_{a,n}=\frac{Gm}{c^{3}}\,\frac{\mathrm{e}^{-\mathrm{i}n\phi(u)}}{\xi(u)}\,\int_{0}^{+\infty}\mathrm{d}v\,\big{[}y\big{(}u-\tau(v)\big{)}\big{]}^{a-3/2}\,\ln\!\left(\frac{\tau(v)}{\tau_{0}}\right)\mathrm{e}^{\frac{\mathrm{i}n\,v}{\xi(u)}}\,. \tag{5.8}\]
Here \(\tau(v)\) is obtained by inverting Eq. (5.7) for \(\tau\) as a function of \(v\). The main point is that, in the PA limit, \(\xi(u)\to 0\), and so, as the imaginary exponential in the integrand of (5.8) oscillates very rapidly, the integral is a sum of alternately positive and negative contributions which essentially sum up to zero. However, there are no oscillations when \(v=0\), therefore the integral is essentially given by the contribution in the neighborhood of the bound at \(v=0\). In other words we are entitled to compute the integral by replacing the integrand by its formal expansion series when \(v\to 0\). This will give a formal (asymptotic) series in powers of \(\xi(u)\) which is the "PA" expansion of the integral.
In the PA limit we thus have \(\tau\to 0\), and we can expand Eq. (5.7) to any order as
\[v=\xi\sum_{n=0}^{+\infty}\frac{(-)^{n}}{(n+1)!}\,\omega^{(n)}\tau^{n+1}\,. \tag{5.9}\]
From now on, we pose \(\xi=\xi(u)\), \(\phi=\phi(u)\), \(\omega=\omega(u)=\dot{\phi}(u)\), \(\omega^{(n)}=(\mathrm{d}^{n}\omega/\mathrm{d}u^{n})(u)\) for the quantities at the _current_ time \(u\). Since \(\omega^{(1)}\equiv\dot{\omega}=\xi\,\omega^{2}\), the expansion (5.9) clearly represents the PA approximation in powers of \(\xi\) and arbitrarily high time derivatives of \(\xi\), denoted \(\xi^{(n)}\). Note that each time derivative of \(\xi\) adds a radiation reaction scale, hence we have \(\xi^{(n)}={\cal O}(\xi^{n+1})={\cal O}(c^{-5n-5})\).
To obtain the PA expansion of (5.8) we need to invert the series (5.9) and obtain \(\tau\) as a power series in \(v\). This is given by the Lagrange inversion theorem as
\[\tau=\sum_{p=0}^{+\infty}\frac{1}{(p+1)!}\,f_{p}\left(\frac{v}{\xi}\right)^{p +1}\,, \tag{5.10}\]
where the general coefficients \(f_{p}\) read
\[f_{p}=\left(\frac{\mathrm{d}}{\mathrm{d}\tau}\right)^{p}\left[\left(\frac{ \tau}{\phi(u)-\phi(u-\tau)}\right)^{p+1}\right]_{\tau=0}=\left(\frac{\mathrm{ d}}{\mathrm{d}\tau}\right)^{p}\left[\left(\sum_{n=0}^{+\infty}\frac{(-)^{n}}{(n+1)! }\,\omega^{(n)}\tau^{n}\right)^{-p-1}\,\right]_{\tau=0}, \tag{5.11}\]
where \(\tau=0\) is applied after the \(p\) differentiations with respect to \(\tau\). For instance, up to 2PA order, we obtain
\[\tau=\frac{v}{\xi\,\omega}\Bigg{[}1+\frac{v}{2}+\left(1-\frac{\dot{\xi}}{\xi^ {2}\omega}\right)\!\frac{v^{2}}{6}+{\cal O}\left(v^{3}\right)\Bigg{]}\,. \tag{5.12}\]
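The reversion (5.12) can be checked directly; a minimal sympy sketch (not part of the original derivation) reproduces the 1PA and 2PA coefficients:

```python
# Check the 2PA series inversion (5.12). We expand v(tau) from Eq. (5.9) using
# omega^(1) = xi*omega^2 and omega^(2) = d/dt(xi*omega^2) = xidot*omega^2 + 2*xi^2*omega^3,
# then revert the series and compare with (5.12).
import sympy as sp

tau, v, xi, xidot, omega = sp.symbols('tau v xi xidot omega', positive=True)

om1 = xi*omega**2                        # omega-dot, by definition of xi
om2 = xidot*omega**2 + 2*xi**2*omega**3  # omega-ddot, differentiating xi*omega^2 once more

# v = xi*[omega*tau - om1*tau^2/2 + om2*tau^3/6 - ...]   (Eq. (5.9))
v_of_tau = xi*(omega*tau - om1*tau**2/2 + om2*tau**3/6)

# revert: tau = v/(xi*omega) * (1 + a1*v + a2*v^2 + ...)
a1, a2 = sp.symbols('a1 a2')
ansatz = v/(xi*omega)*(1 + a1*v + a2*v**2)
eq = sp.expand(v_of_tau.subs(tau, ansatz)) - v
sol = sp.solve([eq.coeff(v, 2), eq.coeff(v, 3)], [a1, a2])
print(sp.simplify(sol[a1]))   # -> 1/2
print(sp.simplify(sol[a2]))   # -> (1 - xidot/(xi**2*omega))/6, as in Eq. (5.12)
```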
The previous formulas show that the method can be straightforwardly extended to any order. But as we said, to compute the tail term at 4PN order, we need only the correction of order 2.5PN, which corresponds to the 1PA approximation, hence just \(\tau=\frac{v}{\xi\,\omega}[1+\frac{v}{2}+{\cal O}(v^{2})]\). To 1PA order, the integral (5.8) reads
\[{\cal I}_{a,n}=\frac{Gm}{c^{3}}\,\frac{y^{a-3/2}\,\mathrm{e}^{-\mathrm{i}n\phi}}{\xi}\,\int_{0}^{+\infty}\mathrm{d}v\,\Bigg{[}\bigg{(}1+\Big{(}1-\frac{2a}{3}\Big{)}v\bigg{)}\ln\left(\frac{v}{\xi\,\omega\,\tau_{0}}\right)+\frac{v}{2}+{\cal O}\left(v^{2}\right)\Bigg{]}\,\mathrm{e}^{\frac{\mathrm{i}n\,v}{\xi}}\,. \tag{5.13}\]
The remaining integral is computed as follows. We transform the complex exponential into a real one by performing the change of variable \(v=\mathrm{i}\,w\) for \(n>0\), and \(v=-\mathrm{i}\,w\) for \(n<0\), respectively. The integration now takes place along the imaginary axis, which can be remedied by resorting to the Cauchy theorem on the closed contour made of the three following pieces, to be considered in the limit \(R\to+\infty\): (i) the path from \(w=0\) to \(w=R\) on the real axis, (ii) the oriented quarter of circle of radius \(|w|=R\) from \(\arg w=0\) to \(\arg w=-\pi/2\) (or \(\arg w=\pi/2\) if \(n<0\)) and (iii) the segment of the imaginary axis going from \(w=-\mathrm{i}R\) (\(w=\mathrm{i}R\) if \(n<0\)) to \(w=0\). This leads to
\[{\cal I}_{a,n}=\frac{Gm}{c^{3}}\,\frac{y^{a-3/2}\,\mathrm{e}^{-\mathrm{i}n\phi}}{\mathrm{i}\,\xi}\,\int_{0}^{+\infty}\mathrm{d}w\,\Bigg{[}\bigg{(}-\mathrm{s}(n)+\frac{2a-3}{3}\,\mathrm{i}\,w\bigg{)}\left(\ln\left(\frac{w}{\xi\,\omega\,\tau_{0}}\right)+\mathrm{i}\frac{\pi}{2}\,\mathrm{s}(n)\right)+\frac{\mathrm{i}\,w}{2}+{\cal O}\left(w^{2}\right)\Bigg{]}\,\mathrm{e}^{-\frac{|n|\,w}{\xi}}\,, \tag{5.14}\]
where \(\mathrm{s}(n)\) is the sign function. In this form, \({\cal I}_{a,n}\) can be integrated, as it boils down to elementary integrals. For completeness we give the formulas needed to handle any PA approximation:
\[\int_{0}^{+\infty}\mathrm{d}w\,w^{j}\,\mathrm{e}^{-\frac{|n|w}{\xi}}=j!\left( \frac{\xi}{|n|}\right)^{j+1}\,, \tag{5.15a}\]
\[\int_{0}^{+\infty}\mathrm{d}w\,w^{j}\,\ln\left(\frac{w}{\xi\,\omega\,\tau_{0}} \right)\,\mathrm{e}^{-\frac{|n|\,w}{\xi}}=j!\left(\frac{\xi}{|n|}\right)^{j+1} \left[H_{j}-\gamma_{\mathrm{E}}-\ln\!\left(|n|\,\omega\,\tau_{0}\right)\right], \tag{5.15b}\]
where \(H_{j}=\sum_{k=1}^{j}\frac{1}{k}\) denotes the harmonic number and \(\gamma_{\mathrm{E}}\) is the Euler constant. Finally we obtain, at desired 1PA order, the result
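These elementary integrals are straightforward to confirm numerically; a small mpmath sketch (with arbitrary positive test values for \(\xi\), \(\omega\) and \(\tau_{0}\), not tied to any physical system) checks (5.15b):

```python
# Numerical check of the master integral (5.15b) for a few values of (j, n).
from mpmath import mp, mpf, quad, exp, log, euler, factorial, inf

mp.dps = 30
xi, omega, tau0 = mpf('0.01'), mpf('2.0'), mpf('0.5')   # arbitrary positive test values

def lhs(j, n):
    return quad(lambda w: w**j * log(w/(xi*omega*tau0)) * exp(-abs(n)*w/xi), [0, inf])

def rhs(j, n):
    Hj = sum(mpf(1)/k for k in range(1, j + 1))          # harmonic number H_j
    return factorial(j)*(xi/abs(n))**(j + 1)*(Hj - euler - log(abs(n)*omega*tau0))

for j in (0, 1, 2):
    for n in (1, 2):
        assert abs(lhs(j, n) - rhs(j, n)) < mpf('1e-20')
print('Eq. (5.15b) checked numerically')
```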
\[\mathcal{I}_{a,n}=\frac{Gm}{c^{3}}\,\frac{y^{a-3/2}\,\mathrm{e}^{-\mathrm{i}n \phi}}{\mathrm{i}\,n}\Bigg{\{}\bigg{(}1+\frac{2a-3}{3}\,\frac{\xi}{\mathrm{i} \,n}\bigg{)}\bigg{[}\ln\!\left(|n|\,\omega\,\tau_{0}\right)+\gamma_{\mathrm{E} }-\mathrm{i}\,\frac{\pi}{2}\,\mathrm{s}(n)\bigg{]}-\frac{4a-9}{6}\,\frac{\xi }{\mathrm{i}\,n}+\mathcal{O}\left(\xi^{2}\right)\Bigg{\}}\,. \tag{5.16}\]
Recall that all quantities are evaluated at current time \(u\) and that the adiabatic parameter \(\xi=\dot{\omega}/\omega^{2}\) is easily computed with Eqs. (5.3). With this result at hand, we are able to derive the tail integral (2.5) with 2.5PN relative precision, which impacts the computation and final results at the 4PN order.
### Memory effects for circular orbits up to 1.5PN relative order
Memory terms, such as the first term of Eq. (2.6), are hereditary integrals of the form \(\int_{0}^{+\infty}\mathrm{d}\tau F(u-\tau)G(u-\tau)\), where \(F\) and \(G\) represent dynamical multipole moments. Note that they enter only in mass-type moments, as clear from Sec. II.1. In the case of quasi-circular orbits, they reduce to a sum of terms of the form
\[\mathcal{J}_{a,n}(u)=\int_{0}^{+\infty}\mathrm{d}\tau\left[y(u-\tau)\right]^{ a}\mathrm{e}^{-\mathrm{i}n\,\phi(u-\tau)}\,. \tag{5.17}\]
Besides the absence of logarithm, the main difference with the tail integral is the possibility of having \(n=0\), _i.e._ persistent or "DC" terms.
Let us first focus on oscillatory ("AC") memory terms, having \(n\neq 0\). As they only involve a simple, logarithm-free integration, memory terms will enter the flux as instantaneous contributions, and can be computed as such. In particular, since they arise at the odd PN approximations 2.5PN and 3.5PN, they do not contribute to the flux for quasi-circular orbits. The evaluation of the integrals of type (5.17) is thus only required for the derivation of the modes. Concerning the quadrupole moment, since the memory effect enters at 2.5PN order, as is clear from (2.6), it is required at a relative 1.5PN precision for the \((2,2)\) mode, so that one can safely compute the integral in the adiabatic approximation. As discussed in [120], for circular orbits this is equivalent to taking \(y\) to be constant together with a linear phase (appropriate for the exact circular orbit), and keeping only the contribution of the integral due to the bound \(\tau=0\). One finds
\[\mathcal{J}_{a,n}=y^{a}\,\mathrm{e}^{-\mathrm{i}n\,\phi}\int_{0}\mathrm{d} \tau\,\mathrm{e}^{\mathrm{i}n\,\omega\,\tau}+\mathcal{O}\left(\xi\right)=- \frac{y^{a}\,\mathrm{e}^{-\mathrm{i}n\,\phi}}{\mathrm{i}n\,\omega}+\mathcal{O }\left(\xi\right)\,. \tag{5.18}\]
The persistent DC terms obviously do not contribute to the flux, and they only contribute to the modes which have \(\mathrm{m}=0\), for example, the \((\ell,\mathrm{m})=(2,0)\) mode for the mass quadrupole. As is clear in the following, the absence of the fast oscillating exponential in \(\mathcal{J}_{a,0}\) generates an inverse power of the adiabatic parameter \(\xi\), thus degrading the precision by the radiation-reaction scale 2.5PN. This is the well-known memory effect: as it starts at 2.5PN order in the waveform, it finally enters the \((2,0)\) mode at Newtonian order. Several methods are possible to evaluate \(\mathcal{J}_{a,0}\); see _e.g._[120]. In the following, we rely on a change of integration variables from \(\tau\) to \(y^{\prime}=y(u-\tau)\):
\[\mathcal{J}_{a,0}=\int_{0}^{+\infty}\mathrm{d}\tau\left[y(u-\tau)\right]^{a}= \int_{0}^{y(u)}\mathrm{d}y^{\prime}\,\frac{y^{\prime a}}{\dot{y}(u-\tau)}\,, \tag{5.19}\]
reading \(\dot{y}(u-\tau)\) as a function of \(y^{\prime}\) from Eqs. (5.3), and supposing that \(a>4\), as is the case in practical computations. To 1.5PN relative order, we have
\[\mathcal{J}_{a,0}=\frac{5Gm}{64c^{3}\nu}\,\frac{y^{a-4}}{a-4}\left[1+\frac{a- 4}{a-3}\left(\frac{743}{336}+\frac{11}{4}\nu\right)y-8\pi\frac{a-4}{2a-5}y^{3/ 2}+\mathcal{O}\left(y^{2}\right)\right]\,. \tag{5.20}\]
It is interesting to observe that the tail effect in the flux directly influences the DC memory in (5.20).
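The leading (Newtonian) term of (5.20) can be verified in a few lines; a minimal sympy sketch, assuming the standard leading-order chirp \(\dot{y}=\tfrac{64}{5}\,c^{3}\nu\,y^{5}/(Gm)\) (equivalent to Eqs. (5.3) at Newtonian order):

```python
# Leading-order check of the DC memory integral (5.20) via the change of
# variables of Eq. (5.19): J_{a,0} = int_0^y y'^a / ydot(y') dy'.
import sympy as sp

yp, y, G, m, c, nu = sp.symbols('yp y G m c nu', positive=True)
a = sp.Symbol('a', positive=True)                              # assume a > 4 as in the text

ydot = sp.Rational(64, 5)*c**3*nu/(G*m)*yp**5                  # leading-order chirp
J = sp.integrate(yp**a/ydot, (yp, 0, y), conds='none')         # Eq. (5.19), Newtonian order
expected = sp.Rational(5, 64)*G*m/(c**3*nu)*y**(a - 4)/(a - 4)
print(sp.simplify(J - expected))                               # -> 0 (for a > 4)
```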
Last, but not least, we need to look at the interesting case of the tails-of-memory terms (2.9). These terms can be treated as standard tail terms (with relative Newtonian accuracy), except for the "genuine" tail-of-memory given by the first line of (2.9), namely
\[\mathcal{K}_{ij}=\frac{8G^{2}\mathrm{M}}{7c^{8}}\int_{0}^{+\infty}\!\mathrm{d} \rho\,\mathrm{M}_{a(i}^{(4)}(u-\rho)\int_{0}^{+\infty}\!\mathrm{d}\tau\, \mathrm{M}_{j)a}^{(4)}(u-\rho-\tau)\ln\left(\frac{\tau}{\tau_{0}}\right)\,, \tag{5.21}\]
where \(c\,\tau_{0}=2r_{0}\,\mathrm{e}^{\frac{1613}{270}}\). We recall that this expression agrees with the tail-of-memory directly computed from the radiative quadrupole moment at infinity [110; 111]. We first perform the tail-like integral over \(\tau\). As we need to evaluate it at relative Newtonian order only, we can safely use the adiabatic approximation. Next we perform the integral over \(\rho\), which is found to be a simple DC memory integral of the type (5.19) with \(a=13/2\). Hence we find
\[\mathcal{K}_{ij}=\frac{128\pi}{7}\frac{\nu^{2}c^{5}}{G}\,\ell_{\langle i}\ell_ {j\rangle}\int_{0}^{+\infty}\!\mathrm{d}\rho\left[y(u-\rho)\right]^{13/2}=\frac {4\pi}{7}m\nu\,c^{2}\,y^{5/2}\,\ell_{\langle i}\ell_{j\rangle}\,, \tag{5.22}\]
where \(\ell^{i}=L^{i}/|\mathbf{L}|\) is the constant unit vector associated with the Newtonian angular momentum and, thus, orthogonal to the orbital plane. Interestingly, this tail-of-memory result will give a contribution in the \((2,0)\) mode that exactly cancels the one coming from the 1.5PN corrections to the ordinary memory effect, obtained in (5.20).
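The prefactor in (5.22) then follows from simple arithmetic; a minimal sympy check using the leading term of (5.20) with \(a=13/2\):

```python
# Arithmetic check of (5.22): inserting the DC integral J_{13/2,0} at leading
# order into the prefactor of K_ij reproduces (4*pi/7)*m*nu*c^2*y^(5/2).
import sympy as sp

G, m, c, nu, y = sp.symbols('G m c nu y', positive=True)
a = sp.Rational(13, 2)

J = sp.Rational(5, 64)*G*m/(c**3*nu)*y**(a - 4)/(a - 4)   # leading term of (5.20)
K = sp.Rational(128, 7)*sp.pi*nu**2*c**5/G*J              # prefactor of Eq. (5.22)
print(sp.simplify(K))                                     # -> 4*pi*c**2*m*nu*y**(5/2)/7
```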
## VI Results
Collecting all the pieces that were discussed in previous sections, and notably using the integration techniques developed in Sec. V, we obtain our main results, namely the gravitational flux and quadrupole modes. All results displayed hereafter can be found in the ancillary file [102].
### Flux at 4PN order for generic orbits
For generic orbits, and in the CoM frame, we split the gravitational energy flux (2.1) as
\[\mathcal{F}=\mathcal{F}_{\text{can}}+\mathcal{F}_{\text{non-lin}}\,. \tag{6.1}\]
The first piece, \(\mathcal{F}_{\text{can}}\), is the contribution due to the canonical mass and current moments, when they are fully expressed in terms of the compact binary parameters, _i.e._
\[\mathcal{F}_{\text{can}}=\sum_{\ell\geq 2}\frac{G}{c^{2\ell+1}}\!\left[a_{ \ell}\,\mathrm{M}_{L}^{(\ell+1)}\mathrm{M}_{L}^{(\ell+1)}+\frac{b_{\ell}}{c^{ 2}}\,\mathrm{S}_{L}^{(\ell+1)}\mathrm{S}_{L}^{(\ell+1)}\right], \tag{6.2}\]
where we recall that the numerical coefficients \(a_{\ell}\) and \(b_{\ell}\) are given in Eq. (2.2). We further split \(\mathcal{F}_{\text{can}}\) into a local-in-time (or instantaneous) part and a non-local one:
\[\mathcal{F}_{\text{can}}=\mathcal{F}_{\text{can}}^{\text{loc}}+\mathcal{F}_{ \text{can}}^{\text{non-loc}}\,. \tag{6.3}\]
The local piece \(\mathcal{F}_{\text{can}}^{\text{loc}}\) is too long to be displayed here but can be found in the ancillary file [102]. Note the presence of the scales associated with the regularization processes, \(r_{0}\) and \(r_{0}^{\prime}\), which is expected, as the canonical part of the flux is not a gauge invariant quantity. As for the non-local piece, \(\mathcal{F}_{\text{can}}^{\text{non-loc}}\), it comes from two effects: (i) the direct contribution of the non-local part of the mass quadrupole moment, given by Eq. (6.5) in [93]; (ii) the tail term in the 4PN equations of motion, which contributes when time differentiating the 4PN mass quadrupole moment. [The latter tail term is given by the first line of Eq. (4.4) in [82], while the second line is included in the local part.] The non-local piece reads (at the required order, we are dealing with Newtonian quantities, so we can identify source and canonical moments):
\[\mathcal{F}_{\text{can}}^{\text{non-loc}}=\frac{48}{5}\frac{G^{3 }\mathrm{M}}{c^{13}}\,\mathrm{M}_{ij}^{(3)}\Bigg{\{} \frac{1}{7}\,\frac{\mathrm{d}^{3}}{\mathrm{d}u^{3}}\Bigg{[}\mathrm{M}_{ik}(u )\int_{0}^{+\infty}\!\mathrm{d}\tau\ln\left(\frac{c\tau}{2r_{0}}\right)\mathrm{ M}_{jk}^{(5)}(u-\tau)\Bigg{]}-\frac{m\nu}{15}x_{ik}\int_{0}^{+\infty}\!\mathrm{d} \tau\ln\left(\frac{c\tau}{2r}\right)\mathrm{M}_{jk}^{(8)}(u-\tau)\] \[-\frac{m\nu}{15}\big{(}3x_{k}v_{i}+x_{i}v_{k}\big{)}\int_{0}^{+ \infty}\!\mathrm{d}\tau\ln\left(\frac{c\tau}{2r}\right)\mathrm{M}_{jk}^{(7)}(u -\tau)\Bigg{\}}+\mathcal{O}\left(\frac{1}{c^{14}}\right)\,. \tag{6.4}\]
Note the difference between the logarithmic kernel of the first integral, bearing \(r_{0}\), and the two other ones, bearing \(r\). Finally the second piece of the flux (6.1), \(\mathcal{F}_{\text{non-lin}}\), is given by all the non-linearities in the GW propagation discussed in Sec. II.1. It contains all the double products between canonical moments and radiative moments, plus the square of the tail terms and the double product between the 1.5PN and 2.5PN corrections to the quadrupole. We write it as
\[\mathcal{F}_{\text{non-lin}}=\frac{G}{5c^{5}}\bigg{[}2\mathrm{M}_{ ij}^{(3)}\mathrm{U}_{ij}^{1.5\text{PN}\,(1)}+2\mathrm{M}_{ij}^{(3)}\mathrm{U}_{ij}^{2.5 \text{PN}\,(1)}+2\mathrm{M}_{ij}^{(3)}\mathrm{U}_{ij}^{3\text{PN}\,(1)}+\mathrm{ U}_{ij}^{1.5\text{PN}\,(1)}\mathrm{U}_{ij}^{1.5\text{PN}\,(1)}+\dots\bigg{]}\] \[+\frac{G}{189c^{7}}\bigg{[}2\mathrm{M}_{ijk}^{(4)}\mathrm{U}_{ijk }^{1.5\text{PN}\,(1)}+2\mathrm{M}_{ijk}^{(4)}\mathrm{U}_{ijk}^{2.5\text{PN}\,(1)}+ \dots\bigg{]}+\dots+\mathcal{O}\left(\frac{1}{c^{14}}\right)\,, \tag{6.5}\]
where the ellipsis can be completed in a straightforward way from the results in Sec. II.1.
### Flux at 4.5PN order for quasi-circular orbits
Introducing the unit separation \(\mathbf{n}=\mathbf{x}/r\) between the particles, as well as the unit vector \(\mathbf{\lambda}=\mathbf{\ell}\times\mathbf{n}\) such that \((\mathbf{n},\mathbf{\lambda},\mathbf{\ell})\) forms a right-handed orthonormal triad, we may write the relative velocity as \(\mathbf{v}=r\omega\mathbf{\lambda}+\dot{r}\,\mathbf{n}\), where \(\dot{r}\) is given by (5.3). Then, using (5.2), we can express the 4PN flux on quasi-circular orbits as a function of the orbital frequency \(\omega\), or equivalently the parameter \(y\) defined by (5.4).
However we find that the 4PN flux parametrized by \(y\) still contains the unphysical constant \(b_{0}\), although the other arbitrary scales \(r_{0}\) and \(r_{0}^{\prime}\) have properly disappeared. The reason is that, starting at the 4PN order, the frequency is modified due to the propagation of tails in the wave zone. The half-phase \(\psi\) of the reduced GW, _i.e._ half the phase of the (2,2) mode, differs from the orbital phase \(\phi\) (such that the orbital frequency is \(\omega=\dot{\phi}\)) by a logarithmic, tail-induced phase modulation, as
\[\psi=\phi-\frac{2G\mathrm{M}\,\omega}{c^{3}}\ln\!\left(\frac{ \omega}{\omega_{0}}\right), \tag{6.6}\]
where M denotes the constant ADM mass, and \(\omega_{0}\) is related to \(b_{0}\) by \(c\,\omega_{0}^{-1}=4b_{0}\,\mathrm{e}^{\gamma_{\mathrm{E}}-11/12}\). We remind the reader that the constant \(b_{0}\), which appears in the tail terms (2.5), (2.7) and (2.9), is arbitrary and parametrizes the logarithmic deviation of light cones in harmonic coordinates from the flat cones, see _e.g._ (2.10)-(2.11) in [99]. The constant \(r_{0}\) is an IR scale introduced into the definition of the source multipole moments, see Eq. (2.1) in [91], while \(r_{0}^{\prime}\) is a UV scale associated with the regularization of the self-field of point particles.
This phase modulation was determined in [121, 122] and it has been repeatedly argued [120, 123] that it affects the waveform at the 4PN order. Now, the phase modulation (6.6) also shifts the GW half-frequency \(\Omega=\dot{\psi}\) with respect to the orbital one \(\omega=\dot{\phi}\). The GW half-frequency, directly measurable from the waveform at infinity, reads
\[\Omega=\omega-\frac{2G\mathrm{M}\,\dot{\omega}}{c^{3}}\left[ \ln\!\left(\frac{\omega}{\omega_{0}}\right)+1\right]\,. \tag{6.7}\]
Using Eq. (5.3) for the frequency chirp at the dominant order, it explicitly comes, replacing M by \(m\) at that order,
\[\Omega=\omega\left\{1-\frac{192}{5}\,\nu\left(\frac{Gm\omega}{ c^{3}}\right)^{8/3}\left[\ln\!\left(\frac{\omega}{\omega_{0}}\right)\!+\!1 \right]+\mathcal{O}\left(\frac{1}{c^{10}}\right)\right\}, \tag{6.8}\]
where we recall that \(\nu=\frac{m_{1}m_{2}}{m^{2}}\) is the symmetric mass ratio. In the following, the results are expressed for quasi-circular orbits in terms of the measurable GW half-frequency \(\Omega\) through the PN parameter
\[x=\left(\frac{Gm\Omega}{c^{3}}\right)^{2/3}\,. \tag{6.9}\]
Recalling also the definition (5.4) and posing \(y_{0}=(\frac{Gm\omega_{0}}{c^{3}})^{2/3}\) we thus have
\[x=y\left\{1-\frac{192}{5}\,\nu\,y^{4}\!\left[\ln\!\left(\frac{y }{y_{0}}\right)+\frac{2}{3}\right]+\mathcal{O}\!\left(y^{5}\right)\right\}, \tag{6.10}\]
showing that indeed the GW half-frequency \(\Omega\) differs from the orbital one \(\omega\) at the 4PN order only, hence the fact that it did not affect previous computations such as in [120, 123]. Therefore, once the 4PN flux is obtained in terms of \(y\), we replace \(y\) in terms of \(x\) using the inverse of Eq. (6.10) and reexpand to 4PN order. With this procedure, the constant \(b_{0}\) cancels out as expected. Finally, adding also the 4.5PN piece [96], the quasi-circular 4.5PN flux is
\[\mathcal{F}=\frac{32c^{5}}{5G}\nu^{2}x^{5}\Bigg{\{}1+\bigg{(}-\frac{1247}{336}-\frac{35}{12}\nu\bigg{)}x+4\pi x^{3/2}+\bigg{(}-\frac{44711}{9072}+\frac{9271}{504}\nu+\frac{65}{18}\nu^{2}\bigg{)}x^{2}+\bigg{(}-\frac{8191}{672}-\frac{583}{24}\nu\bigg{)}\pi x^{5/2}\] \[+\bigg{[}\frac{6643739519}{69854400}+\frac{16}{3}\pi^{2}-\frac{1712}{105}\gamma_{\mathrm{E}}-\frac{856}{105}\ln(16\,x)+\bigg{(}-\frac{134543}{7776}+\frac{41}{48}\pi^{2}\bigg{)}\nu-\frac{94403}{3024}\nu^{2}-\frac{775}{324}\nu^{3}\bigg{]}x^{3}\] \[+\bigg{(}-\frac{16285}{504}+\frac{214745}{1728}\nu+\frac{193385}{3024}\nu^{2}\bigg{)}\pi x^{7/2}\] \[+\bigg{[}-\frac{323105549467}{3178375200}+\frac{232597}{4410}\gamma_{\mathrm{E}}-\frac{1369}{126}\pi^{2}+\frac{39931}{294}\ln 2-\frac{47385}{1568}\ln 3+\frac{232597}{8820}\ln x+\left(-\frac{1452202403629}{1466942400}+\frac{41478}{245}\gamma_{\rm E}-\frac{267127}{4608}\pi^{2}+\frac{479062}{2205}\ln 2+\frac{47385}{392}\ln 3+\frac{20739}{245}\ln x\right)\!\nu+\left(\frac{1607125}{6804}-\frac{3157}{384}\pi^{2}\right)\!\nu^{2}+\frac{6875}{504}\nu^{3}+\frac{5}{6}\nu^{4}\bigg{]}x^{4}\] \[+\left[\frac{265978667519}{745113600}-\frac{6848}{105}\gamma_{\rm E}-\frac{3424}{105}\ln(16\,x)+\left(\frac{2062241}{22176}+\frac{41}{12}\pi^{2}\right)\!\nu-\frac{133112905}{290304}\nu^{2}-\frac{3719141}{38016}\nu^{3}\right]\!x^{9/2}+{\cal O}\!\left(x^{5}\right)\Bigg{\}}\,. \tag{6.11}\]
A significant check is to observe that the leading-order terms in the test-mass limit \(\nu\to 0\) perfectly agree with the results of linear black-hole perturbation theory [124; 125; 126; 127; 128]. Note that the flux has been confirmed by different groups up to 2PN order [19; 23], and that the 4.5PN piece is in agreement with the independent work [100]. All other terms are new with the present paper. We have also explicitly verified that, at the 4PN order, Eq. (6.11) can be recovered from the gravitational modes given by Eq. (6.17) below and in [18], using
\[{\cal F}=\frac{c^{3}}{16\pi G}\sum_{\ell\geqslant 0}\sum_{{\rm m}=-\ell}^{ \ell}\left|\dot{h}_{\ell m}\right|^{2}. \tag{6.12}\]
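At the Newtonian level this verification reduces to a one-line computation; a minimal sympy sketch, assuming the usual convention that the \(1/R\) carried by the modes in (6.15) is compensated by an overall factor \(R^{2}\) in the mode sum, and using \(\dot{h}_{22}\simeq-2\mathrm{i}\Omega\,h_{22}\) in the adiabatic limit:

```python
# Newtonian sanity check: the (2, +/-2) modes alone reproduce the leading flux.
import sympy as sp

G, m, c, nu, x, R = sp.symbols('G m c nu x R', positive=True)

Omega = c**3/(G*m)*x**sp.Rational(3, 2)             # from the definition (6.9)
amp = 8*G*m*nu*x/(c**2*R)*sp.sqrt(sp.pi/5)          # |h_22| at Newtonian order (H_22 -> 1)
F = c**3*R**2/(16*sp.pi*G)*2*(2*Omega*amp)**2       # |hdot_22|^2 + |hdot_2,-2|^2
print(sp.simplify(F))                               # -> 32*c**5*nu**2*x**5/(5*G)
```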
As usual, the contributions due to the absorption by the black-hole horizons should be added separately from the post-Newtonian result (6.11); see [129; 130; 131; 132; 133; 134].
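The relation (6.10) between the two PN parameters can also be re-derived mechanically; a short sympy sketch (not part of the original computation) expands the definition (6.9) with the frequency shift (6.8), using \(\ln(\omega/\omega_{0})=\tfrac{3}{2}\ln(y/y_{0})\):

```python
# Check of Eq. (6.10): x = y*(Omega/omega)^(2/3) expanded to first order in the
# O(y^4) radiation-reaction correction; eps is a bookkeeping parameter.
import sympy as sp

y, y0, nu, eps = sp.symbols('y y_0 nu epsilon', positive=True)

shift = 1 - sp.Rational(192, 5)*nu*y**4*(sp.Rational(3, 2)*sp.log(y/y0) + 1)*eps
x = sp.series(y*shift**sp.Rational(2, 3), eps, 0, 2).removeO().subs(eps, 1)

target = y*(1 - sp.Rational(192, 5)*nu*y**4*(sp.log(y/y0) + sp.Rational(2, 3)))
print(sp.simplify(sp.expand(x - target)))            # -> 0
```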
In the companion Letter [101] we use the quasi-circular flux \({\cal F}(x)\) to derive the gravitational frequency chirp and phase at the 4.5PN order. Namely, we apply the energy flux-balance law
\[\dot{E}(x)=-{\cal F}(x)\,, \tag{6.13}\]
where \(x\) is directly linked to the observed GW half-frequency through the definition (6.9), and is related to the orbital frequency _via_ the relation (6.10). Here, \(E(x)\) denotes the (Noetherian) binding energy of the compact binary on the quasi-circular orbit, as calculated from the 4PN equations of motion, see _e.g._[82]. The fact that the left-hand side of the balance equation, which concerns the motion, should be expressed in terms of the same observed GW half-frequency \(x\) as the right-hand side, which concerns the radiation, and not, for instance, in terms of the "orbital" frequency \(y\), is worth an explanation: suppose that the compact binary system is actually a binary pulsar system. Hence, in addition to the gravitational waves generated by the orbital motion, the pulsar emits electromagnetic (radio) waves, also received by the far away observer. Now the observer at infinity can measure the orbital frequency of the system from the instants of arrival of the radio pulses -- this is the standard analysis of binary pulsars. Such frequency should be the one to be inserted in the left side of the balance equation. However, far from the system, the space-time curvature \({\cal R}^{-1}\sim\sqrt{{\rm M}/r^{3}}\) tends to zero, and therefore the geometric optics or WKB approximation applies for both the EM and gravitational waves. Thus the EM and gravitational waves follow the same geodesic, independently of their frequency, and in particular are subject to the same tail-induced phase modulation (6.6). We conclude that the distant observer measures the same frequency \(x\) from the EM radio pulses and from the gravitational wave, and this is that frequency that he inserts into both sides of the flux-balance law (6.13).8
Footnote 8: See [135] for a similar argument in the context of self-forces, using an observer sitting on the particle and equipped with a flashlight.
### Gravitational wave modes \(\boldsymbol{(2,2)}\) and \(\boldsymbol{(2,0)}\)
Next, we can extract the \((\ell,{\rm m})=(2,2)\) and \((2,0)\) physical modes from the newly computed 4PN mass quadrupole radiative moment. Projecting the asymptotic metric (2.3) onto the basis of polarizations \(\{+,\times\}\) and the usual basis of spin-weighted spherical harmonics, \(Y_{-2}^{\ell{\rm m}}\) (following the conventions of [16; 17]), one can define the observable gravitational modes \(h_{\ell{\rm m}}\) as
\[h_{+}-{\rm i}h_{\times}=\sum_{\ell=2}^{+\infty}\sum_{{\rm m}=-\ell}^{\ell}h_{ \ell m}\,Y_{-2}^{\ell{\rm m}}\,. \tag{6.14}\]
For the sake of simplicity, we single out the (observable) GW half-phase \(\psi\), and rescale the modes by the dominant contribution, defining \(H_{\ell{\rm m}}\) as
\[h_{\ell{\rm m}}=\frac{8Gm\nu x}{c^{2}R}\,\sqrt{\frac{\pi}{5}}\,H_{\ell{\rm m}}\,\mathrm{e}^{-{\rm i}{\rm m}\psi}\,. \tag{6.15}\]
We have \(H_{\ell,-{\rm m}}=(-)^{\ell}\,\overline{H}_{\ell{\rm m}}\), where the overbar denotes the complex conjugate. Using the PN-MPM framework, the current state-of-the-art is the 3.5PN accuracy for all the modes [136, 17, 17]. From the present completion of the 4PN mass quadrupole \({\rm U}_{ij}\), one can extract the 4PN correction to the \((\ell,{\rm m})=(2,2)\) mode as
\[H_{22}=-\frac{\mathrm{e}^{2{\rm i}\psi}}{2c^{2}m\nu x}\,\overline{m}^{i}\,\overline{m}^{j}\,{\rm U}_{ij}\,, \tag{6.16}\]
where \(\mathbf{m}=(\mathbf{n}+{\rm i}\mathbf{\lambda})/\sqrt{2}\), with \(\overline{\mathbf{m}}\) its complex conjugate, and the radiative moment \({\rm U}_{ij}\) is evaluated at the retarded time \(u\). Explicitly, it comes
\[H_{22}=1 +\biggl{(}-\frac{107}{42}+\frac{55}{42}\nu\biggr{)}x+2\pi x^{3/2}+\biggl{(}-\frac{2173}{1512}-\frac{1069}{216}\nu+\frac{2047}{1512}\nu^{2}\biggr{)}x^{2}+\biggl{[}-\frac{107\pi}{21}+\biggl{(}\frac{34\pi}{21}-24\,{\rm i}\biggr{)}\,\nu\biggr{]}\,x^{5/2}\] \[+\biggl{[}\frac{27027409}{646800}-\frac{856}{105}\,\gamma_{\rm E}+\frac{428\,{\rm i}\,\pi}{105}+\frac{2\pi^{2}}{3}+\biggl{(}-\frac{278185}{33264}+\frac{41\pi^{2}}{96}\biggr{)}\nu-\frac{20261}{2772}\nu^{2}+\frac{114635}{99792}\nu^{3}-\frac{428}{105}\ln(16x)\biggr{]}x^{3}\] \[+\biggl{[}-\frac{2173\pi}{756}+\biggl{(}-\frac{2495\pi}{378}+\frac{14333\,{\rm i}}{162}\biggr{)}\nu+\biggl{(}\frac{40\pi}{27}-\frac{4066\,{\rm i}}{945}\biggr{)}\nu^{2}\biggr{]}x^{7/2}\] \[+\biggl{[}-\frac{846557506853}{12713500800}+\frac{45796}{2205}\gamma_{\rm E}-\frac{22898}{2205}{\rm i}\pi-\frac{107}{63}\pi^{2}+\frac{22898}{2205}\ln(16x)\] \[\qquad\quad+\biggl{(}-\frac{336005827477}{4237833600}+\frac{15284}{441}\gamma_{\rm E}-\frac{219314}{2205}{\rm i}\pi-\frac{9755}{32256}\pi^{2}+\frac{7642}{441}\ln(16x)\biggr{)}\nu\] \[\qquad\quad+\biggl{(}\frac{256450291}{7413120}-\frac{1025}{1008}\pi^{2}\biggr{)}\nu^{2}-\frac{81579187}{15567552}\nu^{3}+\frac{26251249}{31135104}\nu^{4}\biggr{]}x^{4}+\mathcal{O}\bigl{(}x^{9/2}\bigr{)}\,. \tag{6.17}\]
In the test-mass limit \(\nu\to 0\), this new result is also nicely in agreement with the prediction of linear black-hole perturbation theory [137].
Note that the two unphysical scales \(r_{0}\) and \(b_{0}\) appearing in the intermediate calculations are absent from the final results. More precisely, these constants appear in the expressions of the source moments and in the relations between radiative and source moments, notably in the tail terms (2.5), (2.7) and (2.9). Although this is expected, the constants in the two contributions cancel each other, and this constitutes a highly non-trivial check of the MPM algorithm and of the robustness of the PN integration of the source moments.
Finally, we have also extracted the zero-frequency quadrupole mode \((\ell,{\rm m})=(2,0)\) at 4PN order:
\[H_{20}=-\sqrt{\frac{3}{8}}\,\frac{\ell^{(i}\ell^{j)}}{c^{2}m\nu x}\,{\rm U}_{ij}\,. \tag{6.18}\]
As discussed in Sec. V.3, this mode arises from integration of the non-linear DC memory terms over the past history of the system, assuming a model for the quasi-circular evolution of the orbit in the past. Since the integration increases the effect by the inverse of the 2.5PN order, with the present 4PN formalism we are able to control the \((2,0)\) mode only with relative 1.5PN precision. We find
\[H_{20}=-\frac{5}{14\sqrt{6}}\,\biggl{[}1+\biggl{(}-\frac{4075}{4032}+\frac{67}{48}\nu\biggr{)}x+\mathcal{O}\bigl{(}x^{2}\bigr{)}\biggr{]}\,. \tag{6.19}\]
Notice that the 1.5PN term of this mode vanishes. This is due to the fact that the 1.5PN correction in the model of evolution of the quasi-circular orbit in the past, which results in the 1.5PN term in Eq. (5.20), exactly cancels the 1.5PN direct contribution of the "tail-of-memory" at 4PN order, given by (5.22). That is,
\[H_{20}^{\rm ToM}=-H_{20}^{\rm mem,1.5PN}=-\frac{2\sqrt{2}}{7\sqrt{3}}\pi\,x^{3/2}\,. \tag{6.20}\]
The result (6.19) is in full agreement with Eq. (4.3a) of [110], obtained from the general expression of non-linear memory terms in terms of radiative moments. Indeed, recall that the tail-of-memory integral (5.21) at 4PN order can be simply obtained from the leading memory integral (2.6) at 2.5PN order by replacing the canonical moment by the corresponding radiative moment including the tail effect (2.5) at relative 1.5PN order, see Eq. (2.10). This confirms that the tail-of-memory is adequately taken into account in the computation of the memory using radiative moments defined at future null infinity [110; 111].
###### Acknowledgements.
It is a great pleasure to thank Laura Bernard, Alessandra Buonanno, Bala Iyer, Tanguy Marchand, Sylvain Marsat, Rafael Porto and Adam Pound for enlightening exchanges during the project. F.L. is grateful to the Institut d'Astrophysique de Paris for its hospitality during this project. He received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 817791). G.F. thanks IIT Madras for a Visiting Faculty Fellow position under the IoE program during the completion of this work.
## Appendix A Test of the boosted Schwarzschild solution
Among all tests that can be performed to check the expression of the mass quadrupole moment, one of the simplest is the boosted Schwarzschild limit. Despite its apparent simplicity, it is quite efficient and was crucially used to fix a remaining ambiguity constant in an early computation of the flux at 3PN order [103]. The principle is quite transparent: if we remove one of the two black holes, then our system reduces to a single Schwarzschild black hole of mass \(m_{1}\), boosted at a (constant) speed \(\mathbf{v}_{1}\). The multipole moments of a boosted Schwarzschild solution (BSS) are straightforward to derive, and have been previously determined at 3PN order in [103]. Extending this work to 4PN order, we find that the quadrupole moment of a BSS of mass M, boosted with a velocity \(\mathbf{V}\), reads
\[\mathrm{I}_{ij}^{\text{BSS}}= \,\mathrm{M}\,t^{2}\,V^{(i}V^{j)}\left[1+\frac{9}{14}\,\frac{V^{2 }}{c^{2}}+\frac{83}{168}\,\frac{V^{4}}{c^{4}}+\frac{507}{1232}\,\frac{V^{6}}{ c^{6}}+\frac{45923}{128128}\,\frac{V^{8}}{c^{8}}\right]\] \[+\frac{4}{7}\,\frac{G^{2}\,\mathrm{M}^{3}}{c^{6}}\,V^{(i}V^{j)}+ \frac{10}{21}\,\frac{G^{2}\,\mathrm{M}^{3}}{c^{8}}\,V^{2}\,V^{(i}V^{j)}+ \mathcal{O}\left(\frac{1}{c^{10}}\right)\,.\] (A1)
On the other hand, taking the BSS limit \((m_{2},\mathbf{v}_{2})\to(0,\mathbf{0})\) of the renormalized mass quadrupole moment defined in [93] (on generic orbits, out of the CoM frame), it comes
\[\lim_{\text{BSS}}\mathrm{I}_{ij}= m_{1}\,t^{2}\,v_{1}^{(i}v_{1}^{j)}\left[1+\frac{9}{14}\,\frac{v_{1}^{2}}{c ^{2}}+\frac{83}{168}\,\frac{v_{1}^{4}}{c^{4}}+\frac{507}{1232}\,\frac{v_{1}^{6 }}{c^{6}}+\frac{45923}{128128}\,\frac{v_{1}^{8}}{c^{8}}\right]\] \[+\frac{4}{7}\frac{G^{2}\,m_{1}^{3}}{c^{6}}\,v_{1}^{(i}v_{1}^{j)}+ \frac{10}{21}\,\frac{G^{2}\,m_{1}^{3}}{c^{8}}\,v_{1}^{2}\,v_{1}^{(i}v_{1}^{j)} +\mathcal{O}\left(\frac{1}{c^{10}}\right)\,.\] (A2)
Both expressions coincide under the identification \((m_{1},\mathbf{v}_{1})=(\mathrm{M},\mathbf{V})\), therefore this test is conclusive.
## Appendix B Corrections to the metric due to infrared commutators
This appendix displays the formal contributions of the infrared commutators to the gothic metric perturbation \(h^{\mu\nu}=\sqrt{-g}g^{\mu\nu}-\eta^{\mu\nu}\), up to 4PN order, as discussed in Sec. III. Those contributions are to be added to Eqs. (A.2) of the work [91] in order to obtain the full metric at 4PN, and the potentials entering it are the ones defined by Eqs. (A.4) of [91]. We thus obtain, at the required order [the commutators being defined by (3.3)]
\[\Delta h^{00}= -\frac{2(d-1)^{2}}{(d-2)^{2}c^{4}}\,\mathcal{C}\big{\{}V^{2}\big{\}}\] \[+\frac{1}{c^{6}}\Bigg{[}\frac{8(d-1)^{2}(d-3)}{(d-2)^{3}}\, \mathcal{C}\big{\{}KV\big{\}}+\frac{8(d-3)}{d-2}\,\mathcal{C}\big{\{}V_{i}V_{i }\big{\}}-\frac{4(d-1)}{d-2}\,\mathcal{C}\big{\{}V\hat{W}\big{\}}\] \[\qquad\qquad-\frac{4(d-1)^{3}}{(d-2)^{3}}\bigg{(}\frac{1}{3} \mathcal{C}\big{\{}V^{3}\big{\}}+V\,\mathcal{C}\big{\{}V^{2}\big{\}}+ \mathcal{C}\big{\{}V\mathcal{C}\big{\{}V^{2}\big{\}}\big{\}}\bigg{)}\Bigg{]}\]
\[+\frac{1}{c^{8}}\Bigg{[}\frac{2(d-1)^{3}(4-3d)}{(d-2)^{4}}\bigg{(}\frac{V}{3}\,\mathcal{C}\big{\{}V^{3}\big{\}}+V\,\mathcal{C}\big{\{}V\,\mathcal{C}\big{\{}V^{2}\big{\}}\big{\}}+\frac{1}{4}\,\big{(}\mathcal{C}\big{\{}V^{2}\big{\}}\big{)}^{2}-\frac{2(d-1)\,V^{2}}{4-3d}\mathcal{C}\big{\{}V^{2}\big{\}}\bigg{)}\] \[\qquad\qquad-\frac{4(d-1)^{2}(d-3)(4-3d)}{(d-2)^{4}}\bigg{(}V\,\mathcal{C}\big{\{}KV\big{\}}+\frac{K}{2}\,\mathcal{C}\big{\{}V^{2}\big{\}}\bigg{)}-\frac{4(d-3)(4-3d)}{(d-2)^{2}}\,V\,\mathcal{C}\big{\{}V_{i}V_{i}\big{\}}\] \[\qquad\qquad+\frac{8(d-1)(d-4)}{(d-2)^{2}}\,V_{i}\,\mathcal{C}\big{\{}VV_{i}\big{\}}+\frac{2(d-1)(4-3d)}{(d-2)^{2}}\,V\,\mathcal{C}\big{\{}V\hat{W}\big{\}}-\frac{4(d-1)^{2}}{(d-2)^{2}}\,\hat{W}\,\mathcal{C}\big{\{}V^{2}\big{\}}\Bigg{]}+\mathcal{O}\left(\frac{1}{c^{10}}\right)\,, \tag{B1a}\] \[\Delta h^{0i}= \,\frac{4(d-1)}{(d-2)c^{5}}\,\mathcal{C}\big{\{}VV_{i}\big{\}}\] \[+\frac{1}{c^{7}}\Bigg{[}\frac{4(d-1)^{2}}{(d-2)^{2}}\bigg{(}\mathcal{C}\big{\{}V^{2}V_{i}\big{\}}+V_{i}\,\mathcal{C}\big{\{}V^{2}\big{\}}+V\,\mathcal{C}\big{\{}VV_{i}\big{\}}+\mathcal{C}\big{\{}V_{i}\,\mathcal{C}\big{\{}V^{2}\big{\}}\big{\}}+\mathcal{C}\big{\{}V\,\mathcal{C}\big{\{}VV_{i}\big{\}}\big{\}}\bigg{)}\] \[\qquad\qquad-\frac{8(d-1)(d-3)}{(d-2)^{2}}\,\mathcal{C}\big{\{}KV_{i}\big{\}}+8\mathcal{C}\big{\{}\hat{W}V_{i}\big{\}}-8\mathcal{C}\big{\{}\hat{W}_{ik}V_{k}\big{\}}+\frac{8(d-1)}{d-2}\,\mathcal{C}\big{\{}V\hat{R}_{i}\big{\}}\Bigg{]}+\mathcal{O}\left(\frac{1}{c^{9}}\right)\,, \tag{B1b}\] \[\Delta h^{ij}= -\frac{8(d-1)}{(d-2)c^{8}}\bigg{(}V_{i}\,\mathcal{C}\big{\{}VV_{j}\big{\}}+V_{j}\,\mathcal{C}\big{\{}VV_{i}\big{\}}\bigg{)}\] \[+\frac{\delta_{ij}}{c^{8}}\Bigg{[}\frac{4(d-1)^{2}(d-3)}{(d-2)^{3}}\bigg{(}V\,\mathcal{C}\big{\{}KV\big{\}}+\frac{K}{2}\,\mathcal{C}\big{\{}V^{2}\big{\}}\bigg{)}-\frac{2(d-1)^{3}}{(d-2)^{3}}\bigg{(}\frac{V}{3}\,\mathcal{C}\big{\{}V^{3}\big{\}}+V\,\mathcal{C}\big{\{}V\mathcal{C}\big{\{}V^{2}\big{\}}\big{\}}+\frac{1}{4}\,\big{(}\mathcal{C}\big{\{}V^{2}\big{\}}\big{)}^{2}\bigg{)}\] \[\qquad\qquad+\frac{4(d-3)}{d-2}\,V\,\mathcal{C}\big{\{}V_{k}V_{k}\big{\}}+\frac{8(d-1)}{d-2}V_{k}\,\mathcal{C}\big{\{}VV_{k}\big{\}}-\frac{2(d-1)}{d-2}\,V\,\mathcal{C}\big{\{}V\hat{W}\big{\}}\Bigg{]}+\mathcal{O}\left(\frac{1}{c^{10}}\right)\,. \tag{B1c}\]
|
2307.08187 | An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration | In out-of-distribution (OOD) generalization tasks, fine-tuning pre-trained models has become a prevalent strategy. Different from most prior work that has focused on advancing learning algorithms, we systematically examined how pre-trained model size, pre-training dataset size, and training strategies impact generalization and uncertainty calibration on downstream tasks. We evaluated 100 models across diverse pre-trained model sizes, five pre-training datasets, and five data augmentations through extensive experiments on four distribution shift datasets totaling over 120,000 GPU hours. Our results demonstrate the significant impact of pre-trained model selection, with optimal choices substantially improving OOD accuracy over algorithm improvement alone. We find larger models and bigger pre-training data improve OOD performance and calibration, in contrast to some prior studies that found modern deep networks to calibrate worse than classical shallow models. Our work underscores the overlooked importance of pre-trained model selection for out-of-distribution generalization and calibration. | Hiroki Naganuma, Ryuichiro Hataya, Ioannis Mitliagkas | 2023-07-17T01:27:10Z | http://arxiv.org/abs/2307.08187v3 | # An Empirical Investigation of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration
###### Abstract
In the realm of out-of-distribution generalization tasks, finetuning has risen as a key strategy. While most of the focus has been on optimizing learning algorithms, our research highlights the influence of pre-trained model selection in finetuning on out-of-distribution performance and inference uncertainty. Working within the model size constraints of a single GPU, we examined the impact of varying pre-trained datasets and model parameters on performance metrics like accuracy and expected calibration error. Our findings underscore the significant influence of pre-trained model selection, showing marked performance improvements over algorithm choice. Larger models outperformed others, though the balance between memorization and true generalization merits further investigation. Ultimately, our research emphasizes the importance of pre-trained model selection for enhancing out-of-distribution generalization.
## 1 Introduction
The issue of out-of-distribution (OOD) generalization has garnered significant attention in recent years, given its crucial role in ensuring the robustness and reliability of artificial intelligence systems across a wide array of tasks and contexts [7, 20, 22, 31]. The strategy of finetuning, which fine-tunes the parameters of pre-trained models to boost performance on specified tasks, has been extensively employed in addressing this challenge [9]. While current research largely emphasizes studying learning algorithms to enhance OOD generalization, the influence of the choice of pre-trained models remains underexplored and forms the crux of our investigation. Our study endeavors to shed light on the impact of pre-trained model selection on OOD performance and its inference uncertainty. Recognizing the importance of this issue is fundamental, as it could potentially reshape how we approach OOD generalization and redefine the performance metrics for models.
Given computational constraints and the risk of memorization, we restricted our study to pre-trained models that can be managed within a single GPU. The risk with exceedingly large models is that they may merely memorize OOD datasets, rather than truly generalize from them [5]. In light of this, our research focuses on moderately sized pre-trained models, seeking a balance between model capacity and generalization capability. Our research holds particular relevance to practical contexts requiring OOD generalization in computer vision, such as domain generalization tasks [21, 2, 6, 8, 16, 2]. Although domains such as large language model learning present compelling challenges, they fall outside our study's scope, further narrowing our concentration on computer vision applications. To empirically examine the role of pre-trained models in OOD performance, we conducted a series of experiments involving variations in pre-trained models' datasets and architectural parameters. We utilized two common performance metrics: accuracy (ACC) and expected calibration error (ECE) [18].
Our results offer a significant revelation: the choice of pre-trained models contributes to more pronounced improvements in OOD performance compared to the selection of algorithms. As shown in Tab. 1, this conclusion stems from our systematic experiments, where we observed performance enhancements of up to 22.8% with optimal pre-trained model choices, a figure markedly higher than the average 2% increase observed with algorithm alterations in DomainBed tasks [9]. This finding underscores the imperative role of careful pre-trained model selection in the task of finetuning for OOD generalization.
## 2 Preliminary
Before delving into our experiments and findings, it's crucial to lay the groundwork by defining key concepts that underpin our research. The related work is shown in Appendix A.
**OOD Generalization:** This term refers to a model's ability to accurately interpret data not represented in the training set, a vital determinant of real-world model performance.
Our study probes the impact of pre-trained model selection on this aspect.
**Uncertainty Quantification:** This metric gauges the confidence that a model has in its predictions, gaining particular relevance in the context of OOD generalization. In certain instances, models expressing uncertainty about OOD data predictions could be more reliable than overly confident ones. For this study, we employ ECE [18] to quantify uncertainty.
**Model Parameter Size and Dataset Size:** These dimensions significantly shape the effectiveness of finetuning. Generally, larger models and datasets yield superior results. However, extremely large models can pose challenges due to increased computational requirements and potential overfitting. The pre-trained models examined in our study are listed in Table 2.
The following sections will explore how these factors interact within our experimental setup, and their subsequent influence on model performance with OOD data. The overarching aim is to guide the selection of pre-trained models for finetuning tasks where OOD generalization is of concern.
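For concreteness, the ECE metric defined above can be sketched in a few lines (a minimal NumPy version; the 15 equal-width confidence bins are an assumption reflecting the common default, not necessarily the exact evaluation setting used here):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE = sum_b (|B_b|/N) * |acc(B_b) - conf(B_b)| over confidence bins B_b."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()   # bin accuracy
            conf = confidences[in_bin].mean()                      # bin confidence
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```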
## 3 Experiments
**Setup:** To evaluate our approach, we selected the PACS, VLCS, OfficeHome, and DomainNet datasets from DomainBed [9]. Our basic learning approach followed the standard procedure of the DomainBed benchmark, with the key difference being our use of a variety of pre-trained models as opposed to the previous reports, which predominantly employed ResNet-50 pre-trained on ImageNet-1k. Specifically, we used pre-trained models from the PyTorch Image Models (timm) library [28] listed in Tab. 2. The remaining experimental settings are explained in Appendix B.
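For reference, swapping the pre-trained backbone in such a pipeline is essentially a one-line change with timm; a minimal sketch (the registry name and class count below are illustrative, and the DomainBed integration is omitted):

```python
import timm

# e.g. replace the usual ImageNet-1k ResNet-50 with ViT-Large pre-trained on ImageNet-21k
model = timm.create_model(
    'vit_large_patch16_224.augreg_in21k_ft_in1k',  # a timm registry name for ViT-Large (IN21k)
    pretrained=True,
    num_classes=65,  # e.g. OfficeHome has 65 classes
)
```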
**Comparison:** Our comparison strategy was centered on evaluating performance metrics like ECE and ACC. Additionally, we deemed it important to understand the models' performance on the original IID ImageNet-1K; hence we built a comparison table to assess the correlations between these metrics and the models used.
From Fig. 1, it's evident that when we switched from using the ResNet-50 model, which was predominantly used in the DomainBed benchmarks, to ViT-Large (IN21k), we saw an improvement of 24.8% in the OOD test ACC on the OfficeHome dataset. This improvement far outweighs the 2% performance enhancement obtained when changing the algorithm from ERM to the best-performing algorithm Mixup while keeping the pre-trained model fixed to ResNet-50.
These comparisons were drawn across several dimensions:
**Model Size:** Figure 2 shows how the number of model parameters influences the ECE and ACC. We can observe clear trends of improvement in both ECE and ACC as the number of parameters increases, with few exceptions. Except for VLCS, OOD test ACC tends to increase with the number of parameters, regardless of whether the architecture is ResNet, ViT, or ConvNeXt.
|  | ERM | Best Alg. | Best Model | \(\Delta\) |
| --- | --- | --- | --- | --- |
| PACS | 88.1 | 89.1 | 96.8 | +7.72 |
| VLCS | 63.3 | 65.2 | 67.1 | +1.88 |
| OfficeHome | 62.7 | 64.7 | 87.5 | +22.8 |
| DomainNet | 58.4 | 59.5 | 78.2 | +18.7 |

Table 1: Comparison of OOD test ACC of ResNet-50 trained with ERM, ResNet-50 with the best algorithms reported in [9], and a larger model (ViT-Large pre-trained on ImageNet-21k) with ERM. \(\Delta\) indicates the improvement from results with the best algorithms to results with the best models. Baseline results are adopted from [9].
Figure 1: OOD Test ACC of models trained with ERM. In many cases, simply using larger models drastically improves OOD performance.
| Model | IN-1k val. ACC | # of Param. |
| --- | --- | --- |
| ResNet-50 | 76.13 | 25M |
| ResNet-101 | 77.38 | 44M |
| ResNet-152 | 78.32 | 60M |
| ConvNeXt-tiny | 82.70 | 28M |
| ConvNeXt-small | 83.70 | 50M |
| ConvNeXt-base | 84.43 | 88M |
| ConvNeXt-large | 84.85 | 197M |
| ConvNeXt-tiny (IN12k) | 84.45 | 28M |
| ConvNeXt-tiny (IN22k) | 78.90 | 28M |
| ViT-small | 78.85 | 22M |
| ViT-base | 79.15 | 86M |
| ViT-tiny (IN21k) | 75.45 | 5M |
| ViT-small (IN21k) | 81.38 | 22M |
| ViT-base (IN21k) | 84.53 | 86M |
| ViT-large (IN21k) | 85.83 | 304M |

Table 2: Pre-trained models used in the experiments with ImageNet-1k validation ACC and the number of parameters from timm. Models pre-trained on ImageNet-12k (IN12k), ImageNet-21k (IN21k), and ImageNet-22k (IN22k) were further fine-tuned on ImageNet-1k. See Tab. A2 for their names in the timm library.
OOD test ACC as the number of parameters increases, regardless of whether the architecture is ResNet, ViT, or ConvNeXt.
**Model Architecture:** Performance was compared across various model architectures, helping us understand the role that architectural choices play in the final results. The models in Fig. 2 were pre-trained on ImageNet-1k. As can be seen, ConvNeXt generally outperforms the other models, while ResNets and ViTs are difficult to rank. It is also noteworthy that there is a significant disparity in performance between ViT-base (86M) and ConvNeXt-base (88M), even though they have roughly the same number of parameters, as is evident from Fig. 2.
**Pre-training Dataset Size:** Figure 3 explores how the size of the pre-training dataset affects out-of-distribution generalization. Comparing the same model architectures connected by dashed lines, we can observe that pre-training dataset size contributes positively to OOD generalization, with a few exceptions. We can also observe that ViT-Large pre-trained on ImageNet-21k shows superior performance in many cases. For instance, on OfficeHome, it achieves 87.5% OOD ACC, surpassing the result of ConvNeXt-Large pre-trained on ImageNet-1k (66.1%) by more than 20 percentage points.
**Correlation between IID and OOD Performance:** In line with the studies conducted by [19] and [25], we also explored the relationship between IID and OOD performance. Understanding this correlation is instrumental in informing model selection and shaping learning strategies. A key differentiator of our approach lies in our extensive use of multiple pre-trained models, unlike the limited scope of models used in the aforementioned studies. Our empirical results, presented in Fig. 4, reveal a complex picture. While there appears to be a general trend, the correlation between IID and OOD performance does not always follow a linearly increasing pattern, as seen in the case of the VLCS dataset. This highlights the nuanced influence of dataset characteristics on the performance correlation and underscores the need for further exploration in this regard.
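The correlation analysis behind Fig. 4 amounts to the following computation; the accuracy pairs below are illustrative placeholders for the measured values.

```python
from scipy.stats import pearsonr, spearmanr

# (ImageNet-1k val. ACC, OOD test ACC) per model -- placeholder values only.
iid_acc = [76.1, 78.9, 82.7, 84.5, 85.8]
ood_acc = [62.7, 65.3, 74.0, 80.2, 87.5]

r, _ = pearsonr(iid_acc, ood_acc)
rho, _ = spearmanr(iid_acc, ood_acc)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```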
## 4 Discussion and Conclusion
**Discussion:** While our study focuses on models that fit within the capacity of a single GPU, the broader question of memorization versus generalization becomes even more significant in the context of larger models. How to discern whether a model is generalizing or simply memorizing data poses an intriguing challenge for future exploration and indicates the need to redefine what out-of-distribution generalization means in the realm of large models.
Figure 2: The relationship between the number of model parameters and the OOD test ACC (top; higher is better) and expected calibration error (bottom; lower is better). Except for VLCS, we can observe a clear trend that larger models generalize better.
**Conclusion:** Our study highlights the superior OOD performance of larger models, corroborating previous findings [4]. In light of our results, we advocate for the use of state-of-the-art models rather than persisting with older ones, given the substantial improvements they can bring to out-of-distribution generalization capabilities. However, we emphasize that the development of algorithms for OOD generalization remains crucial for understanding these mechanisms.
**Future Work:** A promising avenue for future research would be to propose a benchmark or guideline indicating the optimal choice of a pre-trained model for a given downstream task. We conjecture that improvements in OOD performance relative to an increase in the number of parameters may reach a saturation point. Establishing a predictive curve fitting for this phenomenon could provide valuable insights for the selection of optimal models.
**Limitations:** Several limitations to our study warrant mention. First, the number of validation datasets for downstream tasks was not extensive, potentially restricting the generalizability of our findings. Second, we have not fully explored the influence of algorithm selection, since our experiments were primarily conducted with Empirical Risk Minimization (ERM). Lastly, optimal tuning of hyperparameters such as batch size and optimizer selection remains an area that requires further investigation.
Figure 4: The relationship between ImageNet-1k ACC and OOD test ACC. For each model architecture, an almost linear relationship can be observed.
Figure 3: The relationship between the size of pre-training datasets and the OOD test ACC (top; higher is better) and expected calibration error (bottom; lower is better). The same model architectures with different pre-training datasets are connected by dashed lines. Larger pre-training datasets generally enhance the OOD robustness.
## Acknowledgments
We acknowledge the generous allocation of computational resources from the TSUBAME3.0 supercomputer facilitated by Tokyo Institute of Technology. This assistance came through the Exploratory Joint Research Project Support Program from JHPCN (EX23401) and the TSUBAME Encouragement Program for Young / Female Users.
|
2303.12428 | Quantitative non-classicality of mediated interactions | In a plethora of physical situations one can distinguish a mediator -- a system
that couples other, non-interacting systems. Often the mediator itself is not
directly accessible to experimentation, yet it is interesting and sometimes
crucial to understand if it admits non-classical properties. An example of this
sort that recently enjoys considerable attention is two quantum masses coupled
via a gravitational field. It has been argued that the gain of quantum
entanglement between the masses indicates non-classicality of the states of the
whole tripartite system. Here, we focus on non-classical properties of the
involved interactions rather than the states. We derive inequalities whose
violation indicates non-commutativity and non-decomposability (open system
generalisation of non-commuting unitaries) of interactions through the
mediators. The derivations are based on properties of general quantum formalism
and make minimalistic assumptions about the studied systems, in particular the
interactions can remain uncharacterised throughout the assessment. Furthermore,
we also present conditions that solely use correlations between the coupled
systems, excluding the need to measure the mediator. Next, we show that the
amount of violation places a lower bound on suitably defined degree of
non-decomposability. This makes the methods quantitative and at the same time
experiment ready. We give applications of these techniques in two different
fields: for detecting non-classicality of gravitational interaction and in
bounding the Trotter error in quantum simulations. | Ray Ganardi, Ekta Panwar, Mahasweta Pandit, Bianka WoÅoncewicz, Tomasz Paterek | 2023-03-22T09:58:26Z | http://arxiv.org/abs/2303.12428v2 | # Quantitative non-classicality of mediated interactions
###### Abstract
In a plethora of physical situations one can distinguish a mediator -- a system that couples other, non-interacting systems. Often the mediator itself is not directly accessible to experimentation, yet it is interesting and sometimes crucial to understand if it admits non-classical properties. An example of this sort that recently enjoys considerable attention is two quantum masses coupled via a gravitational field. It has been argued that the gain of quantum entanglement between the masses indicates non-classical states of the whole tripartite system. Here, we focus on non-classical properties of the involved interactions rather than the involved states. We derive inequalities whose violation indicates non-commutativity and non-decomposability (open system generalisation of non-commuting unitaries) of interactions through the mediators. The derivations are based on properties of the general quantum formalism and make minimalistic assumptions about the studied systems; in particular, the interactions can remain uncharacterised throughout the assessment. Furthermore, we also present conditions that solely use correlations between the coupled systems, excluding the need to measure the mediator. Next, we show that the amount of violation places a lower bound on a suitably defined degree of non-decomposability. This makes the methods quantitative and at the same time experiment ready. We give applications of these techniques in two different fields: for detecting non-classicality of the gravitational interaction and in bounding the Trotter error in quantum simulations.
Mediated interactions are very common. For example, unpaired spins interact via spin chains in solids [1], light modes interact via mechanical membranes [2], and, fundamentally, electric charges are coupled via the electromagnetic field. All these scenarios share a common structure where systems \(A\) and \(B\) do not interact directly, but are solely coupled via a mediator system \(M\), see Fig. 1. Already at this general level, one can ask which properties of the mediator can be deduced from the dynamics of the coupled systems.
Along this line, methods have been proposed to witness non-classicality of the mediator's state from the correlation dynamics of the coupled probes. In particular, conditions were derived under which a gain of quantum entanglement implies that the mediator must have explored non-orthogonal states during the dynamics [3; 4]. Similar ideas applied to more general models than the canonical quantum formalism were used to argue that the entanglement gain between quantum masses witnesses non-classical gravity [5; 6], and motivated a number of concrete proposals aimed at experimental demonstration of gravity-induced entanglement, see e.g. [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. A considerable advantage of these methods lies in the minimalistic assumptions they make about the physical systems involved. They are independent of the initial state, the dimensions of the involved systems, or the explicit form of the interactions, and also work in the presence of local environments. Accordingly, they are applicable in a variety of fields, see e.g. [18] for an example in quantum biology and [19] in solid state physics.
Here we move on from the non-classicality of states and develop tools to quantify the amount of non-classicality of mediated _interactions_, while keeping minimalistic assumptions about the considered physical systems. The notion of classicality we employ is commutativity of the interaction Hamiltonians in the case of closed dynamics, which generalises to decomposability of dynamical maps in the case of open systems. Arguments supporting this choice are given in the next section. A method to detect the presence of such non-classicality was first presented in Ref. [20], but it was only qualitative, i.e. it can only witness the presence of non-commutativity. It is intriguing that the methods mentioned earlier, aimed at the non-classicality of states, are also at this qualitative level at present. Our main contribution here is the development of quantitative methods. We derive conditions which lower bound the norm of the commutator as well as a suitably defined distance to decomposable maps. These conditions are of two types and the structure of the paper reflects this division. In the first part we assume that the mediator is accessible to experimentation, and in the second part the derived conditions use only data measured on the probes. Non-trivial bounds are derived for any continuous correlation measure, and the only additional assumption we had to add to the minimalistic list mentioned above is the dimension of the mediator. Hence, it is again expected that the presented methods are applicable in a variety of fields. We provide two examples.
The first one is in the field of quantum simulations.
The Suzuki-Trotter expansion is a common way to simulate arbitrary sums of local Hamiltonians, see e.g. [21; 22]. It has recently been shown that the number of Trotter steps needed to obtain a required simulation error scales with the spectral norm of the commutator [23]. We link this norm to the correlations in the system, showing a quantitative relation between the complexity of simulation and the amount of correlations.
As the second example, the methods detect and measure non-commutativity of the gravitational interaction coupling two quantum masses. The idea of detecting non-classicality of the gravitational interaction has been discussed very recently in Ref. [24], but there the notion of non-classicality is different, based on the impossibility of simulating the dynamics via local operations and classical communication. For other ways of revealing that the evolution cannot be understood in terms of a classical (gravitational) field see also Refs. [25; 26]. Our tools show that correlations between the masses exclude gravity as a classical interaction (commuting particle-field couplings) via any finite-dimensional mediator.
## I Classicality and decomposability
Let us start with closed systems and explain our choice of the notion of classicality and its relation to the properties of dynamical maps. In this work, classical mediated interactions are defined by commuting Hamiltonians \(H_{AM}\) and \(H_{BM}\), see Fig. 1. A high-level motivation for this choice comes from the fact that in classical mechanics all observables commute; hence a classical mediator would have all its couplings to other systems commuting. The commutativity can also be motivated starting with the notion of classical states as those admitting vanishing quantum discord [27], or vanishing coherence in the case of a single system [28], and asking for the evolution that preserves this form of classicality. The vanishing discord means that the whole tripartite state can be measured on the mediator without disturbing the total state. Mathematically, the state has a block diagonal form and we assume that at all times there exists a single 'preferred' basis of the mediator. We show in Appendix A that such dynamics is generated if and only if the Hamiltonian has a block diagonal form too, with the same basis on the mediator. Since we consider here systems with global Hamiltonian \(H=H_{AM}+H_{BM}\), the state classicality is preserved when both \(H_{AM}\) and \(H_{BM}\) are block diagonal with the same basis on system \(M\), i.e. when the Hamiltonians commute, \([H_{AM},H_{BM}]=0\).
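As a quick numerical illustration of this criterion (a sketch of our own, using the qubit couplings that appear later in Appendix A.3), one can check the commutator of two mediated interaction Hamiltonians directly:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

# Hilbert space ordered A (x) M (x) B, all qubits
H_AM = kron(Z, X, I2)   # Z_A X_M: couples A to the mediator
H_BM = kron(I2, Z, Z)   # Z_M Z_B: couples B to the mediator

comm = H_AM @ H_BM - H_BM @ H_AM
print(np.linalg.norm(comm, 2))   # 2.0: a non-classical mediated interaction
```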
A closely related notion is that of decomposability. A tripartite unitary \(U\) is decomposable if there exist unitaries \(U_{AM}\) and \(U_{BM}\) such that
\[U=U_{BM}U_{AM}. \tag{1}\]
Intuitively, decomposable unitaries are those that can be simulated by coupling system \(A\) to the mediator \(M\), and _then_ coupling \(B\) to the mediator. One can think of the mediator particle as being transmitted between \(A\) and \(B\), which are in separate laboratories, making this setting similar to that in Refs. [29; 30; 31; 32; 33; 34]. Although the Suzuki-Trotter formula shows that any unitary can be approximated by a sequence of Trotter steps, decomposable unitaries are special because the exact unitary can be implemented with only a single Trotter step.
Clearly, for classical interactions \([H_{AM},H_{BM}]=0\), the unitary operator \(U=e^{-itH}\) is decomposable. But there exist unitaries that are decomposable and yet are not generated by a classical interaction. A concrete example is given in Appendix A.3 and relies on the fact that the unitary can be written as \(U=U_{BM}U_{AM}\), but there exist no unitaries \(V_{AM}\) and \(V_{BM}\) which in the sequence \(V_{AM}V_{BM}\) would be equal to \(U\). This example already suggests that decomposability has to be augmented with commutativity of the decompositions to be equivalent to classicality of the interactions, a fact that we prove in Appendix A.4. Therefore, the unitary generated by classical interactions is decomposable, with the added property that the decomposition must commute, i.e. \([U_{AM},U_{BM}]=0\). Accordingly, it is irrelevant
Figure 1: Mediated interactions. (a): Systems \(A\) and \(B\) are coupled via a mediator \(M\), i.e. the underlying Hamiltonian is \(H_{AM}+H_{BM}\), which explicitly excludes a direct coupling between the systems, i.e. \(H_{AB}\). We present methods based on correlations showing that the interaction Hamiltonians do not commute, i.e. the tripartite dynamics cannot be understood as a sequence of interactions via \(H_{AM}\) and then \(H_{BM}\), or in reverse order. We also quantify this non-commutativity by providing a lower bound on a suitable norm of the commutator \([H_{AM},H_{BM}]\). These notions are generalised to open systems and we emphasise that the tools make minimalistic assumptions about the whole setup: we only need to know the dimension of the mediator, and even the Hamiltonians being quantified can remain unknown. (b): We extend these techniques to cases where the mediator is not accessible. They are based on correlations in system \(AB\) only and show that the tripartite dynamics cannot be understood as a sequence of interactions described by dynamical maps \(\Lambda_{AM}\) and \(\Lambda_{BM}\), or in reverse order. We also quantify this form of non-decomposability.
whether we define the decomposition order as \(U_{BM}U_{AM}\) or \(U_{AM}U_{BM}\).
Decomposability naturally extends to open systems. In this case, the evolution is described by a map \(\Lambda\) giving the state of the system at time \(t\), i.e. \(\rho=\Lambda_{t}(\rho_{0})\). We say that a tripartite map \(\Lambda\) is decomposable if there exist maps \(\Lambda_{AM}\) and \(\Lambda_{BM}\) such that
\[\Lambda(\rho)=\Lambda_{BM}\Lambda_{AM}(\rho), \tag{2}\]
for every \(\rho\). In Appendix A.5, we discuss the consistency of this definition with the one based on unitaries. As expected, a unitary operator is decomposable if and only if the corresponding unitary map is decomposable (general maps are not required).
It is this general notion of decomposability that we will exclude and measure the degree of its exclusion in the coming sections. A number of similar concepts have been introduced before and it is instructive to compare decomposability with them and note where the novelty lies. So-called divisibility asks whether a map \(\Lambda\) can be written as \(\Lambda_{1}\Lambda_{2}\) where both \(\Lambda_{1}\) and \(\Lambda_{2}\) are not unitaries [35]. A stronger notion of cp-divisibility, studied in the context of Markovian dynamics [36; 37], asks whether a map \(\Lambda_{t}\) can be written as a sequence of completely positive maps \(\Lambda_{t}=V_{t,s}\Lambda_{s}\). Interestingly, the set of cp-divisible maps is not convex [35]. The decomposability we study here has a specific multipartite structure that was considered only in [20; 38], and that is clearly significant from a physics perspective.
## II Accessible mediator
We first present methods that utilise correlations measured on all three subsystems, and devote the next section to eliminating measurements on the mediator. The basic idea is that the correlations between subsystem \(A\) and subsystems \(MB\) together should be bounded in the case of decomposable dynamics, because they are effectively established via a process where the mediator is transmitted from \(A\) to \(B\) only _once_. It is therefore expected that the correlations are bounded by the 'correlation capacity' of the mediator, i.e. the maximal correlations to the mediator alone. Such inequalities for distance-based correlation measures have been derived in Ref. [20] and could also be obtained by manipulating the results of Refs. [3; 38]. Our contribution in this section is a generalisation to any continuous correlation measure and then a quantification of non-decomposability based on the amount of violation of the derived criterion.
### Detecting non-decomposability
Let us take a correlation quantifier \(Q\) that is monotonic under local operations. In Appendix B we show that the bound in terms of correlation capacity holds when we additionally assume the initial state is of the form \(\rho_{0}=\rho_{AM}\otimes\rho_{B}\). For such an initial state, the correlations generated by a decomposable map \(\lambda\) admit
\[Q_{A:MB}(\lambda(\rho_{0}))\leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM}), \tag{3}\]
where the bound is derived for any correlation measure \(Q\) that is monotonic under local processing. This bound is already non-trivial, as we now demonstrate by showing that a maximally entangling map cannot be decomposable. Consider the initial product state \(|000\rangle\) and assume systems \(A\) and \(B\) are of higher dimension than the mediator, i.e. \(d_{A}=d_{B}>d_{M}\). As an exemplary entanglement measure, take the relative entropy of entanglement, \(E\). It is known that its maximum depends on the dimension of the smaller Hilbert space, i.e. \(\sup_{\sigma_{AM}}E_{A:M}(\sigma_{AM})=\log d_{M}\). According to Eq. (3), any decomposable evolution cannot produce more entanglement than \(\log d_{M}\). This holds for the entanglement \(E_{A:MB}\) as well as for \(E_{A:B}\), due to the monotonicity of relative entropy under partial trace. Since the dimensions of \(A\) and \(B\) are larger than the dimension of the mediator, a maximally entangled state between \(A\) and \(B\) cannot be produced.
Of course, we are interested in extending Eq. (3) to an arbitrary initial state and in this way making the method independent of it. To achieve this aim, we use continuity arguments. Many correlation measures, including relative entropy based quantifiers [39], all distance-based measures [40], and convex roof extensions of asymptotically continuous functions [41], admit a version of continuity where there exists an invertible, monotonically non-decreasing function \(g\) such that \(|Q(x)-Q(y)|\leq g(d(x,y))\), where \(d\) is a contractive distance and \(\lim_{s\to 0}g(s)=0\). For simplicity, we shall call such functions gd-continuous. We prove in Appendix B that correlation quantifiers which are gd-continuous are bounded under decomposable dynamics as follows:
\[Q_{A:MB}(\lambda(\rho_{0}))\leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{AM: B}(\rho_{0}), \tag{4}\]
where \(I_{AM:B}(\rho)=\inf_{\sigma_{AM}\otimes\sigma_{B}}g(d(\rho,\sigma_{AM}\otimes \sigma_{B}))\) is a measure of total correlations in the state \(\rho\) across the partition \(AM:B\). Indeed, from the properties of \(g\) and \(d\) it is easy to verify that this quantity is monotonic under local operations and that it is zero if and only if \(\rho\) is a product state across the \(AM:B\) partition.
This bound is also non-trivial and its violation has been demonstrated in Ref. [42], which focused on negativity as a concrete correlation (entanglement) measure. The system under consideration involved two cavity modes \(A\) and \(B\) coupled via a two-level atom \(M\). This scenario is particularly good for demonstrating the violation because the dimension of the mediator is as small as it can be, whereas the dimensions of the probes are in principle unbounded.
### Measuring non-decomposability
Having established witnesses of non-decomposability, we now argue that the amount of violation of Eq. (4) quantifies the non-decomposability. As a measure of non-decomposability we propose the minimal operator distance from an arbitrary map \(\Lambda\) to the set of decomposable maps, which we denote DEC:
\[\mathrm{ND}(\Lambda)=\inf_{\lambda\in\texttt{DEC}}D(\Lambda,\lambda). \tag{5}\]
We shall refer to this quantity as the 'degree of non-decomposability'. The operator distance \(D\) in its definition could be chosen as the one induced by the distance on states
\[D(\Lambda_{1},\Lambda_{2})=\sup_{\sigma}d(\Lambda_{1}(\sigma),\Lambda_{2}( \sigma)), \tag{6}\]
where \(\Lambda_{1}\) and \(\Lambda_{2}\) are arbitrary maps and \(\sigma\) is any state from the domain of the map. In Appendix B we demonstrate that violation of Eq. (4) lower bounds the degree of non-decomposability as follows
\[\mathrm{ND}(\Lambda)\geq g^{-1}(Q_{A:MB}(\Lambda(\rho))-B(\rho_{0})), \tag{7}\]
where \(B(\rho_{0})\) is the right-hand side of Eq. (4). Accordingly, any violation of the decomposability criterion in terms of correlations sets a non-trivial lower bound on the distance between the dynamical map and the set of decomposable maps.
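In the simplest case of a correlation measure that is Lipschitz continuous with respect to the chosen distance, i.e. \(g(s)=Ls\), evaluating this bound is a one-line computation. The sketch below assumes such a \(g\); the constant \(L\) depends on the measure and distance at hand.

```python
def nd_lower_bound(Q_measured, B_bound, L=1.0):
    """Eq. (7) with g(s) = L*s: ND(Lambda) >= (Q - B)/L whenever the violation is positive."""
    return max(Q_measured - B_bound, 0.0) / L

# Example: measured correlations exceed the decomposability bound by 0.3
print(nd_lower_bound(Q_measured=1.3, B_bound=1.0))   # 0.3
```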
### Quantum simulations
As the first application of the introduced measure, suppose we would like to simulate the dynamics generated by the Hamiltonian \(H=H_{AM}+H_{BM}\). (In fact, this analysis can be generalised to any 2-local Hamiltonian.) Quantum simulators implement dynamics close to the desired one by truncating the Suzuki-Trotter formula to \(r\) Trotter steps
\[e^{-itH}\approx\left(e^{-i\frac{t}{r}H_{AM}}e^{-i\frac{t}{r}H_{BM}}\right)^{r}. \tag{8}\]
The error of this approximation can be quantified by the spectral norm (the largest singular value)
\[||e^{-itH}-\left(e^{-i\frac{t}{r}H_{AM}}e^{-i\frac{t}{r}H_{BM}}\right)^{r}||_{\infty} \tag{9}\]
and it was shown in Ref. [23] that in order to make this error smaller than \(\varepsilon\), the number of Trotter steps has to scale with the norm of the commutator
\[r=O\left(\frac{t^{2}}{\varepsilon}||\left[H_{AM},H_{BM}\right]||_{\infty} \right). \tag{10}\]
Our aim is to provide a lower bound on the commutator norm in terms of correlations, and in this way bound the number of required Trotter steps. Recall after Ref. [23] that for a single Trotter step we have
\[||U-U_{AM}U_{BM}||_{\infty}\leq\frac{t^{2}}{2}||\left[H_{AM},H_{BM}\right]||_{ \infty},\]
where \(U=e^{-itH}\) and, e.g., \(U_{AM}=e^{-itH_{AM}}\). We need to link our methods to the spectral norm. For finite-dimensional systems, all metrics generate the same topology [43], i.e. for any two distances \(d_{1}\) and \(d_{2}\) there exists a constant \(C\) such that
\[\frac{1}{C}\,d_{2}(\rho,\sigma)\leq d_{1}(\rho,\sigma)\leq C\,d_{2}(\rho, \sigma). \tag{11}\]
In particular, there exists a constant that relates any distance to the trace distance \(d_{\mathrm{tr}}(\rho,\sigma)=\frac{1}{2}||\rho-\sigma||_{1}\). Therefore, if a correlation quantifier on finite-dimensional systems is gd-continuous with respect to the trace distance, it is also gd-continuous with respect to any other distance \(d\). Furthermore, since the trace distance is contractive, Eq. (4) holds for any distance on finite-dimensional systems, at the cost of constants in the function \(g\). Accordingly, let us consider the distance induced by the spectral norm \(d_{\infty}(\rho,\sigma)=||\rho-\sigma||_{\infty}\). We call the corresponding operator distance \(D_{\infty}(\Lambda_{1},\Lambda_{2})\), and the degree of non-decomposability \(\mathrm{ND}_{\infty}(\Lambda)\). For the connection to the Trotter error, we note the following
\[\mathrm{ND}_{\infty}(U)\leq D_{\infty}(U,U_{AM}U_{BM})\leq 2||U-U_{AM}U_{BM}||_{ \infty}, \tag{12}\]
where the first inequality follows from the fact that \(\mathrm{ND}_{\infty}(U)\) is the shortest distance to the set of decomposable maps and \(U_{AM}U_{BM}\) is a particular decomposable map. The second inequality is proven in Appendix B.1.
We have therefore shown a direct link between correlations in the system and the number of Trotter steps needed to keep the simulation error small. The amount of violation of Eq. (4) lower bounds the degree of non-decomposability, and hence the spectral norm of the commutator, and accordingly sets the number of required Trotter steps. Conversely, if it is possible to simulate \(U\) with \(r\) Trotter steps to precision \(\varepsilon\), Eq. (10) shows that the commutator norm is bounded and consequently Eq. (12) implies that the correlations \(Q_{A:MB}\) admit an upper bound.
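A small numerical experiment makes this connection concrete; the random qubit couplings below are our own illustration, not a specific physical model.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

I2 = np.eye(2)
# Hilbert space ordered A (x) M (x) B, all qubits
H_AM = np.kron(rand_herm(4), I2)   # random coupling of A to the mediator
H_BM = np.kron(I2, rand_herm(4))   # random coupling of B to the mediator
H = H_AM + H_BM
t, r = 1.0, 10

U = expm(-1j * t * H)
step = expm(-1j * t / r * H_AM) @ expm(-1j * t / r * H_BM)
error = np.linalg.norm(U - np.linalg.matrix_power(step, r), 2)   # spectral norm

comm = np.linalg.norm(H_AM @ H_BM - H_BM @ H_AM, 2)
print(f"Trotter error {error:.4f} vs bound t^2/(2r)*||[H_AM,H_BM]|| = {t**2 / (2 * r) * comm:.4f}")
```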
## III Inaccessible mediator
An interesting opportunity arises when the non-classicality of the evolution through the mediator could be witnessed without measuring the mediator. Here we show that this is indeed possible. We start by introducing the necessary concepts and the related mathematical tools, and then present witnesses of non-decomposable evolution based on measurements on \(AB\) only. Finally, we establish measures of non-decomposability together with their experimentally friendly lower bounds.
### Marginal maps
In order to detect non-classicality of interactions solely through the correlations between the coupled objects, we need the notion of 'marginals' of decomposable maps. We propose to introduce it via the related concept of dilation. A dilation of a map \(\Lambda:X\to X\) is an ancillary state \(\sigma_{R}\) and a map \(\tilde{\Lambda}:XR\to XR\) acting on the system and ancilla, such that
\[\Lambda(\rho)=\operatorname{Tr}_{R}(\tilde{\Lambda}(\rho\otimes\sigma_{R})) \tag{13}\]
for all \(\rho\). Accordingly, our aim is to exclude the existence of a decomposable dilation of the dynamics observed on systems \(AB\). In principle, the existence of dilations may depend on the dimension of the Hilbert space of the mediator, which motivates us to introduce the notion of a _decomposable \(m\)-dilation_ as follows. A map \(\Lambda:AB\to AB\) has a decomposable \(m\)-dilation if there exists a decomposable \(\tilde{\Lambda}:ABM\to ABM\) such that the dimension of the mediator satisfies \(d_{M}\leq m\). We denote the set of all maps with a decomposable \(m\)-dilation as \(\overline{\mathsf{DEC}}(m)\).
With these definitions we can state our goal precisely: we wish to infer whether a map on \(AB\) admits any decomposable \(m\)-dilation, and we wish to do this via measurements of correlations only. If there does not exist any decomposable dilation, we conclude that the interaction generating the map is non-classical.
### Detecting non-decomposability
It turns out that one can obtain an interesting condition that witnesses non-decomposability as a simple corollary to Eq. (4). In Appendix C we prove that any gd-continuous correlation measure \(Q\) admits the following bound under the evolution generated by \(\lambda\in\overline{\mathsf{DEC}}(m)\):
\[Q_{A:B}(\rho_{t})\leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{A:B}(\rho_{0}), \tag{14}\]
where \(\rho_{t}=\lambda(\rho_{0})\) and we emphasise that \(\lambda\in\overline{\mathsf{DEC}}(m)\) acts on \(AB\) only. The supremum on the right-hand side runs over all \(AM\) states with \(d_{M}\leq m\), and \(I_{A:B}(\rho_{AB})=\inf_{\sigma_{A}\otimes\sigma_{B}}g(d(\rho_{AB},\sigma_{A} \otimes\sigma_{B}))\) measures the total correlations across \(A:B\).
As an example of using this criterion, note that the maximally entangling maps we have discussed before cannot have any decomposable \(m\)-dilation for \(m<\min(d_{A},d_{B})\). A question emerges whether there exist evolutions that do not admit a decomposable \(m\)-dilation even when the dimension of the mediator is unbounded. This is indeed the case. We show in Appendix C.1 that a SWAP operation on two objects (even two qubits) has no decomposable \(m\)-dilation for any \(m\). This leads to the peculiar conclusion that classical interactions cannot produce a SWAP. The intuitive reason behind this statement is that it takes at least _two_ transmissions of the mediator to implement swapping with \(d_{A}=d_{B}=d_{M}\): we first exchange \(A\) and \(M\), then we exchange \(B\) and \(M\), and we still must exchange \(A\) and \(M\) again to complete the implementation. In fact, any \(AB\) interaction can be implemented in two such steps by first exchanging \(A\) and \(M\), applying the interaction on \(BM\), and finally swapping \(A\) and \(M\) back.
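This counting can be checked directly for qubits: the sequence exchange \(AM\), exchange \(BM\), exchange \(AM\) (two transmissions of the mediator) reproduces the SWAP of \(A\) and \(B\), whereas a decomposable map allows only a single \(AM\) and a single \(BM\) factor. A sketch of our own:

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
I2 = np.eye(2)

# Hilbert space ordered A (x) M (x) B, all qubits
S_AM = np.kron(SWAP, I2)    # exchange A and M
S_MB = np.kron(I2, SWAP)    # exchange M and B

U = S_AM @ S_MB @ S_AM      # three exchanges, i.e. two mediator transmissions

# SWAP of A and B directly: permute basis |a m b> -> |b m a>
P = np.zeros((8, 8))
for a in range(2):
    for m in range(2):
        for b in range(2):
            P[4 * b + 2 * m + a, 4 * a + 2 * m + b] = 1

print(np.allclose(U, P))    # True
```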
We wish to give one more insight into the structure of maps with decomposable dilations. Clearly, the sets are nested: \(\overline{\mathsf{DEC}}(m)\subseteq\overline{\mathsf{DEC}}(m+1)\). In fact, the inclusions are strict as we show in Appendix C.2.
### Measuring non-decomposability
In the spirit of the previous section, we would like to extend Eq. (7) to bound the distance to \(\overline{\mathsf{DEC}}(m)\) based solely on correlations measured on systems \(AB\). Of course, the \(ABM\) operator distance to \(\mathsf{DEC}\) and the \(AB\) operator distance to \(\overline{\mathsf{DEC}}(m)\) are closely related. For contractive distances \(d\) on states we have \(D(\Lambda_{ABM},\lambda_{ABM})\geq D(\Lambda_{AB},\lambda_{AB})\), which unfortunately is opposite to what we need. To overcome this we use the so-called completely bounded variant of the operator distance [44]:
\[\mathcal{D}(\Lambda_{1},\Lambda_{2})=\sup_{\sigma_{XY}}d((\Lambda_{1}\otimes 1 \hskip-2.5pt1_{Y})(\sigma),(\Lambda_{2}\otimes 1\hskip-2.5pt1_{Y})(\sigma)), \tag{15}\]
where \(\Lambda_{1},\Lambda_{2}:X\to X\) and \(Y\) is a finite dimensional system. The benefit of the completely bounded operator distance is that it behaves nicely on dilations. This makes it easier to jump from the distance to \(\mathsf{DEC}\) to the distance to \(\overline{\mathsf{DEC}}(m)\). Indeed, the completely bounded distance can be written in terms of the dilations as follows:
\[\mathcal{D}(\Lambda_{1},\Lambda_{2})=\inf_{\tilde{\Lambda}_{i}}\mathcal{D}( \tilde{\Lambda}_{1},\tilde{\Lambda}_{2}). \tag{16}\]
On one hand, for contractive distances on states, the left-hand side cannot be larger than the right-hand side. On the other hand, the bound can be achieved by an exemplary dilation \(\tilde{\Lambda}_{i}=\Lambda_{i}\otimes 1\hskip-2.5pt1\).
As a measure of non-decomposability, which we will link to the violation of Eq. (14), we propose the analogue of the degree of non-decomposability written in terms of the completely bounded distance
\[\operatorname{NDm}(\Lambda_{AB})=\inf_{\lambda_{AB}\in\overline{\mathsf{DEC}}(m )}\mathcal{D}(\Lambda_{AB},\lambda_{AB}). \tag{17}\]
With these concepts and tools, it is proven in Appendix C that the amount of violation of Eq. (14) lower bounds the quantity just introduced:
\[\operatorname{NDm}(\Lambda_{AB})\geq g^{-1}(Q_{A:B}(\rho_{t})-\mathcal{B}(\rho _{0})), \tag{18}\]
where \(\mathcal{B}\) is the right-hand side of Eq. (14). Note that all these quantities involve states and maps on \(AB\) only.
### Non-classical gravity
Our second application of these methods is in foundations. A prime example of an inaccessible mediator is a
mediating field. The methods described above allow us to draw conclusions about the field from the behaviour of the objects coupled through it. The gravitational interaction is especially interesting from this perspective, as there is no direct experimental evidence of its quantum properties today. As discussed in the introduction, observation of quantum entanglement between gravitationally coupled masses is a plausible near-future experiment closing this gap [45]. In this section, we show that our methods allow a concise derivation of the non-classicality witnesses presented in the literature [3; 5; 6], and lead to new conclusions about the interactions that can be drawn from the observation of considerable gravitational entanglement.
Assume first a completely classical situation where both states and interactions are classical. Recall that within our framework this means a zero-discord state at all times, \(D_{AB|M}=0\) (with one and the same basis on the mediator at all times), and dynamical maps admitting decomposable dilations. As a correlation measure, consider quantum entanglement, quantified by the relative entropy of entanglement. Then the amount of entanglement \(A:B\) that can be produced via these classical maps is
\[E_{A:B}(\rho_{t})\leq\sup_{\sigma_{AM}}E_{A:M}(\sigma_{A:M})+I_{A:B}(\rho_{0}), \tag{19}\]
where the supremum is over all the states allowed in the theory, here \(d_{M}\leq m\) and \(D_{AB|M}=0\). It is reasonable to assume that the initial state in the laboratory will be close to a product state, and we therefore take \(I_{A:B}(\rho_{0})=0\). Furthermore, all states admitting \(D_{AB|M}=0\) are disentangled across \(A:M\), and hence the supremum runs over states for which \(E_{A:M}(\sigma_{A:M})=0\). We thus arrive at the conclusion that the entanglement \(A:B\) cannot grow, and hence observation of any gain implies non-classical states or non-classical interactions or both.
If we assume that the interactions are classical (decomposable) but the state might have non-zero discord, then the entanglement still satisfies the bound in Eq. (19). Therefore, observation of a non-zero value of \(E_{A:B}\) means that the supremum on the right-hand side is at least this non-zero value, i.e. the mediator must be capable of being entangled to \(A\), and in fact to \(AB\) due to monotonicity, to at least the degree that has been measured. Note that this is stronger than saying that the mediator needs to be discorded.
Finally, by violating the bound in Eq. (19) it is possible to demonstrate in the laboratory that unknown interactions are not decomposable. We stress that it is not sufficient to demonstrate that entanglement grows; we have to demonstrate that the entanglement is above a certain threshold. This threshold depends on the dimension of the mediator and we therefore ask how high an entanglement can be generated by gravity. The answer depends on the concrete setup via which the gravitational interaction is studied. If we take two nearby masses prepared in squeezed states with squeezing parameters \(s_{A}\) and \(s_{B}\), it has been shown that the maximal gravitational entanglement in terms of logarithmic negativity equals \(E^{\rm max}_{A:B}=|s_{A}+s_{B}|/\ln 2\), which holds for large squeezing [8]. Since in principle \(s_{i}\rightarrow\infty\), gravity cannot be understood as a classical interaction with any finite-dimensional mediator. More practically, the highest optical squeezing achieved today is \(s_{A,B}=1.73\)[46], and assuming it can be transferred to mechanical systems gives entanglement \(E^{\rm max}_{A:B}\approx 5\) ebits, which would restrict the still possible decomposable dilations to those using mediators with dimension \(m>2^{5}\)[5]. It is rather unlikely that this amount of entanglement will be observed in the near future, as it would take tens of hours to produce it via gravity in typical setups. Yet, a violation of the unit bound, and hence a disproval of classical interactions via a two-level system, which would already be interesting, could be achieved within a second [5; 8; 11].
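The numbers quoted above follow from a short back-of-the-envelope computation (a sketch):

```python
import numpy as np

s_A = s_B = 1.73                       # highest reported optical squeezing [46]
E_max = (s_A + s_B) / np.log(2)        # log-negativity bound of Ref. [8], in ebits
d_min = int(np.ceil(2 ** E_max))       # smallest mediator dimension compatible with E_max
print(f"E_max ~ {E_max:.2f} ebits -> decomposable dilations require d_M >= {d_min}")
```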
Another route would be to use gravity to execute dynamics which is known by other means to be non-decomposable. For example, we have shown below Eq. (14) that maximally entangling maps do not admit decomposable dilations for \(d_{M}<\min(d_{A},d_{B})\). The schemes in Refs. [5; 6] indeed use gravity to implement maximal entanglement, but only between two-level quantum systems encoded in a path degree of freedom. It would therefore be interesting to determine whether gravity could be used to maximally entangle masses in more paths. Along the same line, we showed that swapping does not admit a decomposable dilation even for an unbounded mediator. Accordingly, we leave it as an open problem to verify whether gravity could implement the SWAP gate.
## IV Conclusions
We have proposed notions of classicality of mediated interactions (commutativity of Hamiltonians and decomposability of dynamical maps) and introduced their mathematical measures. Our main results are inequalities in terms of any continuous correlation quantifier with the property that their violations place lower bounds on the amount of the introduced non-classicality. These quantitative methods are therefore experiment ready and applicable in a variety of physical situations due to the minimalistic assumptions under which they were derived. As examples, we showed that accurate simulations of dynamics with high correlations necessarily require a large number of Trotter steps, and that the gravitational interaction cannot be understood as classical (commuting particle-field couplings) via any finite-dimensional mediator.
## V Acknowledgements
This work is supported by the Polish National Agency for Academic Exchange NAWA Project No. PPN/PPO/2018/1/00007/U/00001 and Xiamen University Malaysia Research Fund (Grant No. XMUMRF/2022-C10/IPHY/0002). RG is supported
by the National Science Centre, Poland, within the QuantERA II Programme (No 2021/03/Y/ST2/00178, acronym ExTRaQT) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733. EP acknowledges funding from QuantERA/2/2020, an ERA-Net cofund in Quantum Technologies, under the project eDICT. This work is supported by Foundation for Polish Science (FNP), IRAP project ICTQT, contract no. 2018/MAB/5, co-financed by EU Smart Growth Operational Programme.
## Appendix A Classicality and decomposability
### Classical states
For completeness let us start with elementary relations. A state is said to be classical (or incoherent) if it is diagonal in a preferred basis \(\{\ket{m}\}\). A multipartite state is called quantum-classical (or admits vanishing discord \(D_{AB|M}=0\)) if it can be written as \(\rho_{\text{qc}}=\sum_{m}\rho_{AB|m}\otimes\Pi_{m}\), where \(\Pi_{m}=\ket{m}\bra{m}\) is the projector on the preferred basis and the systems are enumerated as in Fig. 1. In words, the whole tripartite state explores only one basis in the Hilbert space of the mediator. Let us introduce a measurement map along the preferred basis, \(\Pi\), whose action on an arbitrary input state is to produce the average post-measurement state: \(\Pi(\rho)=\sum_{m}\Pi_{m}\rho\Pi_{m}\). A state \(\rho\) is qc (quantum-classical) if and only if \(\rho=\Pi(\rho)\). Alternatively, the definition of classicality can be phrased in terms of commutation with the basis elements.
**Proposition 1**.: _Let \(\Pi(X)=\sum_{m}\Pi_{m}X\Pi_{m}\) be a projection map, where \(\Pi_{m}\Pi_{m^{\prime}}=\delta_{mm^{\prime}}\Pi_{m}\). Then_
\[X=\Pi(X)\iff\forall m,\,[X,\Pi_{m}]=0. \tag{10}\]
Proof.: The 'only if' direction is trivial. For the 'if' direction, consider the following argument:
\[X\Pi_{m} =\Pi_{m}X, \tag{11}\] \[X\Pi_{m} =\Pi_{m}X\Pi_{m},\] (12) \[X =\sum_{m}\Pi_{m}X\Pi_{m}=\Pi(X), \tag{13}\]
where we multiplied the first equation by \(\Pi_{m}\) from the right and used \(\Pi_{m}^{2}=\Pi_{m}\), and then we summed the second equation over \(m\) and used the completeness relation \(\sum_{m}\Pi_{m}=\openone\).
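The measurement map and the equivalence above are straightforward to realise numerically; the following sketch (our own illustration, with the mediator as the last tensor factor) checks the qc condition \(\rho=\Pi(\rho)\) for an exemplary zero-discord state:

```python
import numpy as np

def dephase_mediator(rho, d_rest, d_M):
    """Pi(rho) = sum_m (I (x) |m><m|) rho (I (x) |m><m|), with the mediator last."""
    out = np.zeros_like(rho, dtype=complex)
    for m in range(d_M):
        proj = np.zeros((d_M, d_M)); proj[m, m] = 1
        Pm = np.kron(np.eye(d_rest), proj)
        out += Pm @ rho @ Pm
    return out

# Quantum-classical state on (AB) (x) M: rho = sum_m p_m rho_{AB|m} (x) |m><m|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| as rho_{AB|0} (qubit stand-in for AB)
mixed = np.eye(2) / 2                       # maximally mixed as rho_{AB|1}
rho = 0.5 * np.kron(plus, np.diag([1., 0.])) + 0.5 * np.kron(mixed, np.diag([0., 1.]))

print(np.allclose(rho, dephase_mediator(rho, 2, 2)))   # True: D_{AB|M} = 0
```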
### Classical interactions
The definition of classicality of interactions in terms of commutativity is justified by the following proposition. It shows that the Hamiltonians preserving classicality of states are invariant under dephasing in the preferred basis. The commutativity is then a corollary.
**Proposition 2**.: _Let \(H\) be a time-independent Hamiltonian. Then \(H\) is classical, i.e. \(H=\Pi(H)\), if and only if for any classical initial state \(\rho_{0}\), \(\rho_{t}=e^{-itH}\rho_{0}e^{itH}\) is also classical._
Proof.: The 'if' direction. Let us write the assumption explicitly:
\[e^{-itH}\rho_{0}e^{itH} =\Pi(e^{-itH}\rho_{0}e^{itH}), \tag{14}\] \[e^{-itH}[\rho_{0},H]e^{itH} =\Pi(e^{-itH}[\rho_{0},H]e^{itH}), \tag{15}\]
where the second line is the time derivative of the first one and \(\rho_{0}\) denotes the initial (classical) state. By evaluating at \(t=0\), we find that the commutator is invariant:
\[[\rho_{0},H]=\Pi([\rho_{0},H]). \tag{16}\]
In particular, taking \(\rho_{0}=\Pi_{m}\) shows that for all the basis states:
\[[\Pi_{m},H]=\Pi([\Pi_{m},H])=0, \tag{17}\]
where the last equation is simple to verify. Applying Proposition 1 proves the claim.
The 'only if' direction. From the assumption, the Hamiltonian has the block form \(H=\sum_{m}h_{m}\otimes\Pi_{m}\), where \(h_{m}\) acts on all the systems other than the mediator. In this case, the orthonormality of the preferred basis implies
\[e^{\pm itH}=\sum_{m}e^{\pm ith_{m}}\otimes\Pi_{m}. \tag{18}\]
Accordingly, the initially classical mediator stays classical at all times, and the remaining systems evolve conditionally depending on the state of the mediator.
In the case of tripartite systems that we consider, where \(H=H_{AM}+H_{BM}\), this shows that classicality is preserved when both \(H_{AM}\) and \(H_{BM}\) are block diagonal with the same basis on system \(M\), i.e. when they commute.
### One-way decomposability
The following proposition gives an example of a decomposable unitary which nevertheless cannot be generated by classical interactions.
**Proposition 3**.: _There are no two-qubit unitaries \(V_{AM},V_{BM}\) such that \(U_{AM}U_{BM}=V_{BM}V_{AM}\), where_
\[U_{AM} =\frac{1}{\sqrt{2}}\left(\openone+iZ_{A}X_{M}\right) \tag{19}\] \[U_{BM} =\frac{1}{\sqrt{2}}\left(\openone+iZ_{B}Z_{M}\right), \tag{20}\]
_and \(Z\) and \(X\) stand for Pauli matrices._
Proof.: By contradiction. Suppose that there exist unitaries \(V_{AM},V_{BM}\) such that \(U_{AM}U_{BM}=V_{BM}V_{AM}\). Note that we can write \(U_{AM},U_{BM}\) as
\[U_{AM}= \left|0\right\rangle\left\langle 0\right|_{A}\otimes\frac{1}{\sqrt{2}} \left(\openone_{M}+iX_{M}\right)\] \[+\left|1\right\rangle\left\langle 1\right|_{A}\otimes\frac{1}{ \sqrt{2}}\left(\openone_{M}-iX_{M}\right), \tag{101}\] \[U_{BM}= \left|0\right\rangle\left\langle 0\right|_{B}\otimes\frac{1}{ \sqrt{2}}\left(\openone_{M}+iZ_{M}\right)\] \[+\left|1\right\rangle\left\langle 1\right|_{B}\otimes\frac{1}{ \sqrt{2}}\left(\openone_{M}-iZ_{M}\right). \tag{102}\]
Therefore, the product \(U_{AM}U_{BM}\) is given by
\[\left|00\right\rangle\left\langle 00\right|_{AB}\otimes\frac{1}{2} \left(\openone+iX_{M}+iY_{M}+iZ_{M}\right)\] \[+\left|01\right\rangle\left\langle 01\right|_{AB}\otimes\frac{1} {2}\left(\openone+iX_{M}-iY_{M}-iZ_{M}\right)\] \[+\left|10\right\rangle\left\langle 10\right|_{AB}\otimes\frac{1} {2}\left(\openone-iX_{M}-iY_{M}+iZ_{M}\right)\] \[+\left|11\right\rangle\left\langle 11\right|_{AB}\otimes\frac{1} {2}\left(\openone-iX_{M}+iY_{M}-iZ_{M}\right). \tag{103}\]
Observe that we can always write \(V_{AM}=\sum_{i,j=0}^{1}\left|i\right\rangle\left\langle j\right|_{A}\otimes V _{M}^{A,ij}\) for some matrices \(V_{M}^{A,ij}\), and similarly for \(V_{BM}\). However, because we assumed \(V_{BM}V_{AM}=U_{AM}U_{BM}\) and the \(AB\) part in Eq. (103) is expressed solely in terms of projectors, we can express \(V_{BM}V_{AM}\) as
\[V_{BM}V_{AM}=\sum_{i,j}\left|ij\right\rangle\left\langle ij\right|_{AB} \otimes V_{M}^{B,jj}V_{M}^{A,ii}, \tag{104}\]
where each product \(V_{M}^{B,jj}V_{M}^{A,ii}\) is a unitary on \(M\). Comparing Eqs. (103) and (104), we find
\[V_{M}^{B,00}V_{M}^{A,00} =\tfrac{1}{2}\left(\openone+iX_{M}+iY_{M}+iZ_{M}\right) \tag{105}\] \[V_{M}^{B,11}V_{M}^{A,00} =\tfrac{1}{2}\left(\openone+iX_{M}-iY_{M}-iZ_{M}\right)\] (106) \[V_{M}^{B,00}V_{M}^{A,11} =\tfrac{1}{2}\left(\openone-iX_{M}-iY_{M}+iZ_{M}\right)\] (107) \[V_{M}^{B,11}V_{M}^{A,11} =\tfrac{1}{2}\left(\openone-iX_{M}+iY_{M}-iZ_{M}\right) \tag{108}\]
However, this leads to the contradiction
\[\begin{pmatrix}0&1\\ -1&0\end{pmatrix} =\left(V_{M}^{B,00}V_{M}^{A,00}\right)\left(V_{M}^{B,11}V_{M}^{A,0 0}\right)^{\dagger} \tag{109}\] \[=\left(V_{M}^{B,00}V_{M}^{A,11}\right)\left(V_{M}^{B,11}V_{M}^{A, 11}\right)^{\dagger}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\]
which completes the proof.
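The single-qubit blocks appearing in this proof can be checked numerically; the sketch below reproduces the contradiction directly.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1. + 0j, -1. + 0j])

def block(sx, sy, sz):
    return (I2 + 1j * (sx * X + sy * Y + sz * Z)) / 2

# Mediator blocks of U_AM U_BM, labelled by |ij>_AB as in the proof above
B00, B01 = block(+1, +1, +1), block(+1, -1, -1)
B10, B11 = block(-1, -1, +1), block(-1, +1, -1)

# Both products should equal V_M^{B,00} (V_M^{B,11})^dagger -- but they differ:
lhs = B00 @ B01.conj().T
rhs = B10 @ B11.conj().T
print(np.allclose(lhs, rhs))                 # False: the contradiction
print(np.round(lhs, 3), np.round(rhs, 3), sep="\n")
```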
### Classicality and commuting decompositions
Here we show the relation between classicality of an interaction and decomposability of the corresponding unitary. In particular, we show the equivalence between classicality \([H_{AM},H_{BM}]=0\) and the existence of a commuting decomposition \(U=U_{AM}U_{BM}=U_{BM}U_{AM}\).
**Proposition 4**.: _A unitary \(U=e^{-itH}\) has a commuting decomposition \(U=U_{BM}U_{AM}=U_{AM}U_{BM}\) if and only if there exist Hamiltonians \(H_{AM}\) and \(H_{BM}\) such that \(H=H_{AM}+H_{BM}\) and \([H_{AM},H_{BM}]=0\)._
Proof.: Using the Baker-Campbell-Hausdorff (BCH) formula [47], one easily sees that if such \(H_{AM},H_{BM}\) exist, then \(U=e^{-itH_{AM}}e^{-itH_{BM}}=e^{-itH_{BM}}e^{-itH_{AM}}\), showing that the unitary has a commuting decomposition.
To show the other direction, suppose the unitary \(e^{-itH}\) has a commuting decomposition. This means there exist \(U_{AM},U_{BM}\) such that \([U_{AM},U_{BM}]=0\) and \(e^{-itH}=U_{AM}U_{BM}\). Let \(H_{AM}=(i/t)\log U_{AM}\) and \(H_{BM}=(i/t)\log U_{BM}\). These interaction Hamiltonians must commute, because we can write the commutator using the series representation \(\log\left(\openone-X\right)=-\sum_{n=1}^{\infty}\frac{1}{n}X^{n}\) to get
\[[H_{AM},H_{BM}] =-\frac{1}{t^{2}}[\log U_{AM},\log U_{BM}]\] \[=-\frac{1}{t^{2}}\left[\sum_{n=1}^{\infty}\frac{\left(\openone-U_ {AM}\right)^{n}}{n},\sum_{m=1}^{\infty}\frac{\left(\openone-U_{BM}\right)^{m }}{m}\right]\] \[=0. \tag{110}\]
Using the BCH formula, we obtain
\[e^{-itH}=e^{-itH_{AM}}e^{-itH_{BM}}=e^{-it(H_{AM}+H_{BM})}. \tag{111}\]
Differentiating the expression above with respect to \(t\) and using the identity \(\frac{d}{dt}e^{tA}\big{|}_{t=0}=A\) shows that \(H=H_{AM}+H_{BM}\), which proves the claim.
### Consistency
Let us start with recalling the two definitions of decomposability given in the main text.
**Definition 1** (unitary).: _Let \(U\) be a unitary acting on a tripartite system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{M}\). \(U\) is decomposable if there exist unitaries \(U_{AM},U_{BM}\) such that_
\[U_{ABM}=U_{BM}U_{AM}. \tag{112}\]
**Definition 2** (map).: _Let \(\Lambda\) be a map acting on a tripartite system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{M}\). \(\Lambda\) is decomposable if there exist maps \(\Lambda_{AM},\Lambda_{BM}\) such that_
\[\Lambda(\rho)=\Lambda_{BM}\Lambda_{AM}(\rho). \tag{113}\]
The following proposition shows that these two definitions are consistent.
**Proposition 5**.: _A unitary \(U\) is decomposable if and only if the map \(\Lambda(\rho)=U\rho U^{\dagger}\) is decomposable._
Proof.: If \(U\) is decomposable, choosing \(\Lambda_{AM}(\rho)=U_{AM}\rho U_{AM}^{\dagger}\) and \(\Lambda_{BM}(\rho)=U_{BM}\rho U_{BM}^{\dagger}\) shows that \(\Lambda\) is also decomposable.
To show the other implication, suppose there exists two maps \(\Lambda_{AM},\Lambda_{BM}\) such that \(U\rho U^{\dagger}=\Lambda_{BM}\Lambda_{AM}(\rho)\). It is enough to show that we can choose the maps \(\Lambda_{AM},\Lambda_{BM}\) to be unitaries. This is indeed possible by the following argument: Since \(U\rho U^{\dagger}=\Lambda_{BM}\Lambda_{AM}(\rho)\), we see that \(\sigma\mapsto U^{\dagger}\Lambda_{BM}(\sigma)U\) is a CPTP-inverse of \(\Lambda_{AM}\). Since the only CPTP-maps that have a CPTP-inverse are unitaries [48], we conclude that \(\Lambda_{AM}\) must be a unitary map. The fact that \(\Lambda_{BM}\) is also unitary follows from \(\Lambda_{BM}(\rho)=U\Lambda_{AM}^{\dagger}(\rho)U^{\dagger}\).
Another question regarding the consistency between the two definitions concerns unitary dilations: is decomposability of a map equivalent to the existence of a decomposable unitary dilation? This would be desirable, since it would imply that any decomposable map is generated by some 'classical' interaction on a larger system. Here we show that the implication holds at least in one direction.
**Proposition 6**.: _Let \(\Lambda\) be a decomposable map. Then there exists a Stinespring dilation of \(\Lambda\)_
\[\Lambda(\rho_{ABM})=\operatorname{Tr}_{R}U_{ABMR}(\rho_{ABM}\otimes\sigma_{R} )U_{ABMR}^{\dagger}, \tag{100}\]
_such that \(U_{ABMR}\) is decomposable._
Proof.: Since \(\Lambda\) is decomposable, there exist maps \(\Lambda_{AM},\Lambda_{BM}\) such that \(\Lambda=\Lambda_{BM}\Lambda_{AM}\). Let us denote a Stinespring dilation of \(\Lambda_{AM}\) as
\[\Lambda_{AM}(\rho_{AM})=\operatorname{Tr}_{R_{A}}U_{AMR_{A}}(\rho_{AM} \otimes\sigma_{R_{A}})U_{AMR_{A}}^{\dagger}, \tag{101}\]
where \(R_{A}\) is the purifying system for \(\Lambda_{AM}\). Similarly, \(\Lambda_{BM}\) must have a dilation with purifying system \(R_{B}\). We prove the claim by identifying \(R=R_{A}R_{B}\), \(U_{ABMR}=U_{BMR_{B}}U_{AMR_{A}}\), and \(\sigma_{R}=\sigma_{R_{A}}\otimes\sigma_{R_{B}}\).
## Appendix B Accessible mediator
The following proposition proves the 'correlation capacity' bound when the initial state is a product \(\rho_{0}=\rho_{AM}\otimes\rho_{B}\).
**Proposition 7**.: _Let \(\Lambda\) be a decomposable map. Any correlation measure that is monotonic under local operations satisfies:_
\[Q_{A:MB}(\Lambda(\rho_{AM}\otimes\rho_{B}))\leq\sup_{\sigma_{AM}}Q_{A:M}( \sigma_{AM}). \tag{102}\]
Proof.: By assumption \(\Lambda=\Lambda_{BM}\Lambda_{AM}\). The bound follows solely from monotonicity of correlations under local operations:
\[Q_{A:MB}(\Lambda(\rho_{AM}\otimes\rho_{B})) =Q_{A:MB}(\Lambda_{BM}\Lambda_{AM}(\rho_{AM}\otimes\rho_{B}))\] \[\leq Q_{A:MB}(\Lambda_{AM}(\rho_{AM}\otimes\rho_{B})). \tag{103}\]
Since \(Q\) is monotonic under local operations, it must be invariant under invertible local operations. In particular, adding or discarding an uncorrelated system does not change the value of \(Q\). In our case, system \(B\) is completely uncorrelated and therefore \(Q_{A:MB}(\Lambda_{AM}(\rho_{AM}\otimes\rho_{B}))=Q_{A:M}(\Lambda_{AM}(\rho_{AM}))\). Of course, the last quantity is upper bounded by the supremum over all states.
For a general initial state, we have the following bound by continuity.
**Proposition 8**.: _Let \(\Lambda\) be a decomposable map and \(\rho\) any tripartite quantum state. Any gd-continuous correlation measure satisfies:_
\[Q_{A:MB}(\Lambda(\rho))\leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{AM:B}(\rho), \tag{104}\]
_where \(I_{AM:B}(\rho)=\inf_{\sigma_{AM}\otimes\sigma_{B}}g(d(\rho,\sigma_{AM}\otimes \sigma_{B}))\) is a measure of total correlations in the state \(\rho\) across the partition \(AM:B\)._
Proof.: We bound the difference in correlations between arbitrary state and the product state using gd-continuity
\[|Q_{A:MB}(\Lambda(\rho))-Q_{A:MB}(\Lambda(\sigma_{AM}\otimes \sigma_{B}))|\] \[\leq g(d(\Lambda(\rho),\Lambda(\sigma_{AM}\otimes\sigma_{B})))\] \[\leq g(d(\rho,\sigma_{AM}\otimes\sigma_{B})), \tag{105}\]
where in the last line we used the fact that \(g\) is monotonic and \(d\) contractive. The derived inequality holds for any \(\sigma_{AM}\otimes\sigma_{B}\), in particular for the one achieving the infimum of \(I_{AM:B}(\rho)\), leading to
\[Q_{A:MB}(\Lambda(\rho))\leq Q_{A:MB}(\Lambda(\sigma_{AM}\otimes\sigma_{B}))+I_{ AM:B}(\rho). \tag{106}\]
In the last step we use Proposition 7 to bound the first term on the right.
In order to simplify the notation, let us denote the bound on correlations due to decomposable dynamics as \(B(\rho)=\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{AM:B}(\rho)\), and the state at time \(t\) as \(\rho_{t}=\Lambda(\rho_{0})\).
**Proposition 9**.: _The degree of non-decomposability \(\operatorname{ND}(\Lambda)\) is lower bounded as follows:_
\[\operatorname{ND}(\Lambda)\geq g^{-1}(Q_{A:MB}(\rho_{t})-B(\rho_{0})) \tag{107}\]
Proof.: We will prove the theorem by combining the continuity bounds with the statement of Proposition 8. Consider a fixed, but arbitrary, decomposable map \(\lambda\). Due to gd-continuity we write
\[Q_{A:MB}(\rho_{t})-Q_{A:MB}(\lambda(\rho_{0}))\] \[\leq|Q_{A:MB}(\rho_{t})-Q_{A:MB}(\lambda(\rho_{0}))|\] \[\leq g(d(\rho_{t},\lambda(\rho_{0}))) \tag{108}\]
We rearrange and use the bound in Proposition 8:
\[Q_{A:MB}(\rho_{t}) \leq Q_{A:MB}(\lambda(\rho_{0}))+g(d(\rho_{t},\lambda(\rho_{0})))\] \[\leq B(\rho_{0})+g(d(\rho_{t},\lambda(\rho_{0}))). \tag{109}\]
The amount of violation is now brought to the left-hand side, and below we use the fact that \(g\) is invertible and
bound the state distance by the operator distance, \(d(\rho_{t},\lambda(\rho_{0}))\leq D(\Lambda,\lambda)\); since \(\lambda\) was an arbitrary decomposable map, the bound carries over to the infimum defining the degree of non-decomposability
\[Q_{A:MB}(\rho_{t})-B(\rho_{0}) \leq g(d(\rho_{t},\lambda(\rho_{0})))\] \[g^{-1}(Q_{A:MB}(\rho_{t})-B(\rho_{0})) \leq d(\rho_{t},\lambda(\rho_{0})),\] \[g^{-1}(Q_{A:MB}(\rho_{t})-B(\rho_{0})) \leq\mathrm{ND}(\Lambda). \tag{100}\]
which proves the claim.
### Spectral norm
We link the operator norm of unitary maps with the spectral distance between them.
**Lemma 1**.: _Let \(U,V\) be unitaries. Then \(D_{\infty}(U,V)\leq 2\|U-V\|_{\infty}\)._
Proof.: By simple algebra, we verify
\[U\rho U^{\dagger}-V\rho V^{\dagger}= \tfrac{1}{2}(U-V)\rho(U+V)^{\dagger} \tag{101}\] \[+\tfrac{1}{2}(U+V)\rho(U-V)^{\dagger}, \tag{102}\]
where \(\rho\) is a density matrix. Taking the spectral norm on both sides, we get
\[\left\|U\rho U^{\dagger}-V\rho V^{\dagger}\right\|_{\infty} \tag{103}\] \[=\left\|\tfrac{1}{2}(U-V)\rho(U+V)^{\dagger}+\tfrac{1}{2}(U+V) \rho(U-V)^{\dagger}\right\|_{\infty}\] (104) \[\leq\left\|U-V\right\|_{\infty}\!\left\|\rho\right\|_{\infty}\! \left\|U+V\right\|_{\infty}\] (105) \[\leq 2\|U-V\|_{\infty} \tag{106}\]
where we used the triangle inequality and the submultiplicativity of the spectral norm, \(\left\|AB\right\|_{\infty}\leq\left\|A\right\|_{\infty}\!\left\|B\right\|_{\infty}\). Using the bounds \(\left\|\rho\right\|_{\infty}\leq\left\|\rho\right\|_{1}=1\) and \(\left\|U+V\right\|_{\infty}\leq\left\|U\right\|_{\infty}+\left\|V\right\|_{\infty}=2\) in the last inequality finishes the proof.
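A randomized sanity check of this lemma (a sketch of our own):

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(7)
d = 4
U = unitary_group.rvs(d, random_state=1)
V = unitary_group.rvs(d, random_state=2)

# Random density matrix
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

lhs = np.linalg.norm(U @ rho @ U.conj().T - V @ rho @ V.conj().T, 2)
rhs = 2 * np.linalg.norm(U - V, 2)
print(lhs <= rhs + 1e-12, round(lhs, 4), round(rhs, 4))
```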
## Appendix C Inaccessible mediator
First, we derive a necessary condition on maps admitting a decomposable \(m\)-dilation.
**Proposition 10**.: _A gd-continuous correlation measure \(Q\) admits the following bound under the evolution generated by \(\lambda\in\overline{\mathsf{DEC}}(m)\):_
\[Q_{A:B}(\lambda(\rho_{AB})) \leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{A:B}(\rho_{AB}), \tag{107}\]
_where the supremum is over all \(AM\) states with the dimension \(d_{M}\leq m\), and \(I_{A:B}(\rho_{AB})=\inf_{\sigma_{A}\otimes\sigma_{B}}g(d(\rho_{AB},\sigma_{A} \otimes\sigma_{B}))\) measures total correlations across \(A:B\)._
Proof.: Consider the following argument:
\[Q_{A:B}(\lambda(\rho_{AB})) \leq Q_{A:MB}(\tilde{\lambda}(\rho_{AB}\otimes\sigma_{M}))\] \[\leq\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{AM:B}(\rho_{AB} \otimes\sigma_{M})\] \[=\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{A:B}(\rho_{AB}), \tag{108}\]
where the first line follows from the monotonicity of \(Q\) and the existence of a decomposable \(m\)-dilation, the second line restates Proposition 8 restricted to an \(m\)-dimensional mediator, and the last line follows from the fact that tracing out an uncorrelated particle is a reversible process, hence an equality.
Note that we have assumed that any map with a decomposable dilation starts with the joint \(ABM\) state of a product form \(\rho_{AB}\otimes\sigma_{M}\). Although this is a restrictive condition, it has been shown that this is essentially the only consistent choice if we require that the dynamics can start from any \(AB\) state and the assignment is linear [49].
Next, we show that the violation of this inequality provides a bound on the degree of non-decomposability:
**Proposition 11**.: _The degree of non-decomposability satisfies the following lower bound:_
\[\mathrm{NDm}(\Lambda_{AB}) \geq g^{-1}(Q_{A:B}(\Lambda_{AB}(\rho_{AB}))-\mathcal{B}(\rho_{AB }))\]
_where \(\mathcal{B}\) is the two-particle version of the bound \(B\):_
\[\mathcal{B}(\rho_{AB}) =\sup_{\sigma_{AM}}Q_{A:M}(\sigma_{AM})+I_{A:B}(\rho_{AB}),\]
_and supremum over \(\sigma_{AM}\) assumes the dimension of mediator satisfies \(d_{M}\leq m\)._
Proof.: Consider a fixed but arbitrary dilation \(\tilde{\Lambda}\) of the map \(\Lambda_{AB}\) and a decomposable map \(\tilde{\lambda}\) (acting on all subsystems) that is a dilation of the map \(\lambda_{AB}\in\overline{\mathsf{DEC}}(m)\). The same steps as in Proposition 9, Eqs. (102) and (103), lead to the following inequality
\[g^{-1}(Q_{A:B}(\Lambda(\rho_{AB}))-\mathcal{B}(\rho_{AB})))\] \[\leq d(\tilde{\Lambda}(\rho_{AB}\otimes\sigma_{M}),\tilde{ \lambda}(\rho_{AB}\otimes\sigma_{M})) \tag{109}\]
where we have used monotonicity and the definition of dilation to write \(Q_{A:B}(\Lambda(\rho_{AB}))\leq Q_{A:MB}(\tilde{\Lambda}(\rho_{AB}\otimes \sigma_{M}))\), and the invariance of total correlations under tracing out an uncorrelated system in the bound \(B\), which therefore becomes \(\mathcal{B}\). The left-hand side is accordingly fully expressed in terms of bipartite quantities, and we now similarly bound the right-hand side.
To show the claim, it is enough to show that the distance on the right-hand side lower bounds the degree of non-decomposability. By taking the supremum over \(\rho_{AB}\), the right-hand side is upper bounded by the operator distance:
\[\sup_{\rho_{AB}}d(\tilde{\Lambda}(\rho_{AB}\otimes\sigma_{M}), \tilde{\lambda}(\rho_{AB}\otimes\sigma_{M}))\leq D(\tilde{\Lambda},\tilde{ \lambda}), \tag{110}\]
where the inequality is due to the optimisation over states of \(AB\) only, not over all three systems. Analogous reasoning shows that the operator distance is upper bounded by the completely bounded distance
\[D(\tilde{\Lambda},\tilde{\lambda})\leq\mathcal{D}(\tilde{\Lambda},\tilde{\lambda }). \tag{100}\]
This time the reason is that the right-hand side involves an additional optimisation over the ancillary states. Finally, note that this reasoning holds for any dilation, and the best bound is obtained by taking the dilations producing the infimum: \(\inf_{\lambda_{AB}\in\overline{\mathsf{DEC}}(m)}\inf_{\tilde{\Lambda},\tilde{ \lambda}}D(\tilde{\Lambda},\tilde{\lambda})=\inf_{\lambda_{AB}\in\overline{ \mathsf{DEC}}(m)}\mathcal{D}(\Lambda_{AB},\lambda_{AB})\).
With these tools, we investigate in the next sections the structure of maps that admit decomposable \(m\)-dilations.
### Non-decomposability of swapping
**Proposition 12**.: _The map \(\mathtt{SWAP}\) on two qubits has no decomposable \(m\)-dilation, for any \(m\)._
Proof.: We will prove this by contradiction. Suppose that \(\mathtt{SWAP}\) has a decomposable \(m\)-dilation. Let us compare the action of \(\mathtt{SWAP}\) on \(\left|00\right\rangle_{AB}\) and on \(\left|01\right\rangle_{AB}\). By definition, there exist two maps \(\Lambda_{AM},\Lambda_{BM}\) and some initial state \(\sigma_{M}\) such that
\[\left|00\right\rangle\left\langle 00\right|_{AB} =\mathtt{SWAP}(\left|00\right\rangle\left\langle 00\right|_{AB})\] \[=\mathtt{Tr}_{M}\,\Lambda_{BM}\Lambda_{AM}\left(\left|00\right\rangle \left\langle 00\right|_{AB}\otimes\sigma_{M}\right), \tag{101}\] \[\left|10\right\rangle\left\langle 10\right|_{AB} =\mathtt{SWAP}(\left|01\right\rangle\left\langle 01\right|_{AB})\] \[=\mathtt{Tr}_{M}\,\Lambda_{BM}\Lambda_{AM}\left(\left|01\right\rangle \left\langle 01\right|_{AB}\otimes\sigma_{M}\right). \tag{102}\]
Let us define \(\sigma_{AM}^{0}=\Lambda_{AM}\left(\left|0\right\rangle\left\langle 0\right|_{A} \otimes\sigma_{M}\right)\). By Eqs. (101) and (102), we have
\[\left|0\right\rangle\left\langle 0\right|_{A} =\mathtt{Tr}_{B}\,\mathtt{SWAP}(\left|00\right\rangle\left\langle 00 \right|_{AB})\] \[=\mathtt{Tr}_{BM}\,\Lambda_{BM}\left(\left|0\right\rangle\left\langle 0 \right|_{B}\otimes\sigma_{AM}^{0}\right), \tag{103}\] \[\left|1\right\rangle\left\langle 1\right|_{A} =\mathtt{Tr}_{B}\,\mathtt{SWAP}(\left|01\right\rangle\left\langle 0 1\right|_{AB})\] \[=\mathtt{Tr}_{BM}\,\Lambda_{BM}\left(\left|1\right\rangle\left\langle 1 \right|_{B}\otimes\sigma_{AM}^{0}\right). \tag{104}\]
But because \(\Lambda_{BM}\) is trace preserving and \(\mathrm{Tr}_{B}\) factors out when applied to product states, we have
\[\mathrm{Tr}_{BM}\,\Lambda_{BM}\left(\left|0\right\rangle\left\langle 0 \right|_{B}\otimes\sigma_{AM}^{0}\right)\] \[=\mathrm{Tr}_{BM}\left(\left|0\right\rangle\left\langle 0\right|_{B} \otimes\sigma_{AM}^{0}\right)\] \[=\mathrm{Tr}_{M}\,\sigma_{AM}^{0}\] \[=\mathrm{Tr}_{BM}\,\Lambda_{BM}\left(\left|1\right\rangle\left\langle 1 \right|_{B}\otimes\sigma_{AM}^{0}\right). \tag{105}\]
Combining this with Eqs. (103) and (104), we obtain
\[\left|0\right\rangle\left\langle 0\right|_{A} =\mathrm{Tr}_{BM}\,\Lambda_{BM}\left(\left|0\right\rangle\left\langle 0 \right|_{B}\otimes\sigma_{AM}^{0}\right) \tag{106}\] \[=\left|1\right\rangle\left\langle 1\right|_{A}, \tag{107}\]
which is clearly a contradiction.
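The key step of the argument, Eq. (105), can be checked numerically for an arbitrary channel on \(BM\): trace preservation alone forces the \(A\)-marginal to be independent of which basis state entered on \(B\). Below is a small NumPy sketch (ours, not from the proof; the dimensions and the random-channel construction are illustrative assumptions):

```
import numpy as np

rng = np.random.default_rng(1)
dA, dM, dB = 2, 3, 2   # assumed dimensions; M is the mediator

def random_density(d):
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def random_kraus(d, n_ops=4):
    # Slicing a random isometry C^d -> C^(n_ops * d) into n_ops blocks
    # yields Kraus operators of a random CPTP map (sum_i K_i^† K_i = I).
    z = rng.standard_normal((n_ops * d, d)) + 1j * rng.standard_normal((n_ops * d, d))
    v, _ = np.linalg.qr(z)
    return [v[i * d:(i + 1) * d, :] for i in range(n_ops)]

sigma_AM = random_density(dA * dM)     # arbitrary state on A (x) M
kraus = random_kraus(dM * dB)          # arbitrary channel acting on M (x) B

def A_marginal(b):
    # Input state sigma_AM (x) |b><b|_B, with system ordering A (x) M (x) B.
    ket = np.zeros((dB, 1)); ket[b] = 1.0
    rho = np.kron(sigma_AM, ket @ ket.T)
    out = sum(np.kron(np.eye(dA), K) @ rho @ np.kron(np.eye(dA), K).conj().T
              for K in kraus)
    # Partial trace over M and B leaves the A marginal.
    return np.trace(out.reshape(dA, dM * dB, dA, dM * dB), axis1=1, axis2=3)

tr_M = np.trace(sigma_AM.reshape(dA, dM, dA, dM), axis1=1, axis2=3)
# Trace preservation on MB makes the A marginal independent of the B input,
# which forces |0><0|_A = |1><1|_A in the proof above -- the contradiction.
assert np.allclose(A_marginal(0), tr_M) and np.allclose(A_marginal(1), tr_M)
```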
### Strict inclusions
**Proposition 13**.: _The inclusion \(\overline{\mathsf{DEC}}(m)\subsetneq\overline{\mathsf{DEC}}(m+1)\) is strict for all \(m\)._
Proof.: Let us fix \(m\) and take \(d_{A}=d_{B}>d_{M}=m\). Let \(\Lambda_{m}(\rho_{AB})=\mathrm{Tr}_{M}\mathtt{SWAP}_{BM}\Lambda_{AM}(\rho_{AB} \otimes\left|0\right\rangle\left\langle 0\right|_{M})\), where \(\Lambda_{AM}\) is a maximally entangling map. By this construction, \(\Lambda_{m}\) has a decomposable \(m\)-dilation, i.e. \(\Lambda_{m}\in\overline{\mathsf{DEC}}(m)\). Choosing \(\rho_{AB}=\left|00\right\rangle\left\langle 00\right|_{AB}\) and \(Q\) to be relative entropy of entanglement, we obtain \(E_{A:B}(\Lambda_{m}(\rho_{AB}))=\log m\), whereas by Proposition 10, for all maps \(\lambda\in\overline{\mathsf{DEC}}(m-1)\) we have (recall that \(\rho_{AB}\) is product)
\[E_{A:B}(\lambda(\rho_{AB})) \leq\sup_{\sigma_{AM}}E_{A:M}(\sigma_{AM})+I_{A:B}(\rho_{AB}) \tag{108}\] \[=\log{(m-1)}, \tag{109}\]
Therefore \(\Lambda_{m}\not\in\overline{\mathsf{DEC}}(m-1)\), and the claim is shown.
|
2308.08702 | Finding a Second Wind: Speeding Up Graph Traversal Queries in RDBMSs
Using Column-Oriented Processing | Recursive queries and recursive derived tables constitute an important part
of the SQL standard. Their efficient processing is important for many real-life
applications that rely on graph or hierarchy traversal. Position-enabled
column-stores offer a novel opportunity to improve run times for this type of
queries. Such systems allow the engine to explicitly use data positions (row
ids) inside its core and thus, enable novel efficient implementations of query
plan operators.
In this paper, we present an approach that significantly speeds up recursive
query processing inside RDBMSes. Its core idea is to employ a particular aspect
of column-store technology (late materialization) which enables the query
engine to manipulate data positions during query execution. Based on it, we
propose two sets of Volcano-style operators intended to process different query
cases.
In order to validate our ideas, we have implemented the proposed approach in
PosDB, an RDBMS column-store with SQL support. We experimentally demonstrate
the viability of our approach by providing a comparison with PostgreSQL.
Experiments show that for breadth-first search: 1) our position-based approach
yields up to 6x better results than PostgreSQL, 2) our tuple-based one results
in only 3x improvement when using a special rewriting technique, but it can
work in a larger number of cases, and 3) both approaches can't be emulated in
row-stores efficiently. | Mikhail Firsov, Michael Polyntsov, Kirill Smirnov, George Chernishev | 2023-08-16T23:17:20Z | http://arxiv.org/abs/2308.08702v1 | Finding a Second Wind: Speeding Up Graph Traversal Queries in RDBMSs Using Column-Oriented Processing
###### Abstract
Recursive queries and recursive derived tables constitute an important part of the SQL standard. Their efficient processing is important for many real-life applications that rely on graph or hierarchy traversal. Position-enabled column-stores offer a novel opportunity to improve run times for this type of queries. Such systems allow the engine to explicitly use data positions (row ids) inside its core and thus, enable novel efficient implementations of query plan operators.
In this paper, we present an approach that significantly speeds up recursive query processing inside RDBMSes. Its core idea is to employ a particular aspect of column-store technology (late materialization) which enables the query engine to manipulate data positions during query execution. Based on it, we propose two sets of Volcano-style operators intended to process different query cases.
In order to validate our ideas, we have implemented the proposed approach in PosDB, an RDBMS column-store with SQL support. We experimentally demonstrate the viability of our approach by providing a comparison with PostgreSQL. Experiments show that for breadth-first search: 1) our position-based approach yields up to 6x better results than PostgreSQL, 2) our tuple-based one results in only 3x improvement when using a special rewriting technique, but it can work in a larger number of cases, and 3) both approaches can't be emulated in row-stores efficiently.
Keywords: Query Processing · Column-stores · Recursive Queries · Late Materialization · Breadth-First Search · PosDB.
## 1 Introduction
The ANSI'99 SQL standard introduced the concept of recursion into SQL with syntactic constructs to define recursive views and recursive derived tables. This allows users to store graph data in a tabular form and to express some graph queries using CTEs and recursive syntax. The admissible subset is rather limited compared to the specialized graph systems, but it is sufficient to solve a number of common tasks. Such tasks originate from many real-life applications and
usually concern some hierarchy traversal which comes as a breadth-first search computation.
In this paper, we present another outlook on RDBMS architecture that significantly improves system performance at least for some types of graph queries expressed by recursive SQL. More specifically, we present a column-oriented approach that will improve run times for queries that perform breadth-first search.
Having emerged about fifteen years ago, column-stores quickly became ubiquitous in analytic processing of relational data. Their idea is simple: store data in a columnar form in order to read only the columns necessary for evaluating the query. Such an approach also provides better data compression rates [1], improves CPU cache utilization, facilitates SIMD-enabled data processing, and offers other benefits. However, some column-stores additionally allow the query engine to explicitly use data positions during query execution. This made way for a number of optimizations and techniques that offered various benefits for query processing. Thus, we differentiate the "position-enabled" column-stores from the rest as the column-stores that are able to reap benefits from explicit position manipulation inside their engine. We believe that such an approach can give RDBMSs a second wind in handling graph queries.
Positions (also called row ids or offsets) are integers that refer to some record or individual attribute value inside a table. Operating on data positions allows the query engine to achieve savings by deferring the switch to data values. This group of techniques is called late materialization, and it was successfully employed for various query plan operators [18, 2, 20, 12].
We employ this technique to design two sets of Volcano-style [7] operators intended to handle different query cases that involve recursive processing. We have implemented them inside a position-enabled column-store, PosDB [4, 3]. Next, in order to evaluate them, we run experiments with queries that perform breadth-first search. We have also performed a comparison of our approach with PostgreSQL.
The overall contribution of this paper is the following:
1. A survey of existing query processing techniques in RDBMSs that concern recursive queries.
2. A design of two query operators for position-enabled column store that speed up recursive query evaluation.
3. An experimental evaluation of the proposed techniques and a comparison with a state-of-the-art row-store RDBMS.
This paper is organized as follows. In Section 2 we survey various aspects of implementation and usage of recursive queries inside relational DBMSs. Then, in Section 3 we present the main features of PosDB and discuss its query processing internals. After this, in Section 4 we describe implementation details of the proposed recursive operators and their use in the existing query plan model of PosDB. Section 5 contains an evaluation that compares PosDB with PostgreSQL using a series of experiments on trees of different size, height and additional payload. Finally, in Section 6 we conclude this paper and discuss future work.
## 2 Related Work and Motivation
### Related Work
In this section we review existing papers that address graph query processing in SQL-supporting systems, paying special attention to recursive evaluation.
One of the earliest papers that addressed the problem of recursive query evaluation was [11]. There, the author introduces several query optimizations for recursive queries with graphs, namely: early evaluation of row selection conditions, elimination of duplicate rows in intermediate tables, and using an index to accelerate join computation.
The authors of the papers [6, 5] describe several issues with query optimization in relational databases when implementing recursive queries. The first approach they mention is the full feedback approach (FFB), which provides the optimizer with the demographics of each recursion iteration so that it can generate a new plan for the subsequent iteration. However, FFB interrupts potential pipelining and cannot take advantage of global query optimizations, making it unsuitable for parallel DBMSs. The next approach, look ahead with feedback (LAWF), generates plans for the subsequent \(k\) iterations in advance, with \(k\) depending on the query planning cost and the propagation of join estimation errors. The authors present a dynamic feedback mechanism based on passive monitoring to collect feedback and to determine when re-planning is necessary. The LAWF method supports both pipelining and global query optimization.
In the paper [14] the authors consider two graph problems: transitive closure computation and adjacency matrix multiplication. In order to solve them, the authors study the optimization of queries that involve recursive joins and recursive aggregations in column- and row-oriented DBMS. They evaluate the impact of several query optimization techniques, such as pushing aggregation through recursion and using ORDER BY with merge joins in column-store instead of hash joins. The authors evaluate effects of indexing columns containing vertices and effects of sorting rows in a row-store to evaluate the iteration of \(k\) joins.
In the paper [13] the author evaluates various recursive query optimizations for the plan generator. The paper considers five techniques: storage and indexing for efficient join computation, early selection, early evaluation of non-recursive joins, pushing duplicate row elimination, and pushing aggregation. The author uses four types of graphs: tree, list, cyclic, and complete graphs. However, similarly to the previous work [14], the author uses a sequence of SQL commands (including INSERTs) to implement the proposed optimizations. Such an approach may suffer from various overheads, as opposed to implementing an operator node in the engine source code. This, in turn, may lead to inaccurate results.
The Recursive-aggregate-SQL (RaSQL) [8] system extends the current SQL standard to support aggregates in recursion. It can express powerful queries and declarative algorithms, such as graph and data mining algorithms. The RaSQL compiler allows mapping declarative queries into one basic fixpoint operator supporting aggregates in recursive queries. The aggregate-in-recursion optimization
brought by the PreM property and other improvements make the RaSQL system more performant than other similar systems.
In the paper [17] the authors address the problem of storing large property data graphs inside relational DBMS. They adapt the SQLGraph [19] approach to reduce the disk volume and increase processing speeds. They evaluate their schema using the PostgreSQL on LDBC-SNB and show that their schema not only performs better on read-only queries but also performs better on workloads that include update operations.
Graph databases are good for storing and querying provenance data. One of the earliest papers that evaluated this possibility was the study [23]. The authors compare relational and graph databases on different types of queries. This study demonstrated that for traversal queries, graph databases were clearly faster, sometimes by a factor of 10. This result was expected since relational databases are not designed to perform traversals such as standard breadth-first-search.
Another paper that concerned graph databases in data provenance domain was the study [16]. The authors propose an improved version of the DPHQ framework for capturing and querying the provenance data. They conclude that graph databases offer significant performance gains over relational databases for executing multi-depth queries on provenance. The performance gains become much more pronounced with the increase in traversal depth and data volumes.
### Motivation
The related works discussed above demonstrate the popularity and relevance of graph queries and graph database systems. However, they also show that there is only a handful of studies that address the processing of graph queries (BFS, transitive closure) using the recursion technique in SQL-supporting systems.
Moreover, despite the existence of studies which touch upon the processing of recursive SQL queries in column-stores (e.g. [14]), there are no studies that propose to leverage data positions. On the other hand, in our paper we propose an in-depth operator redesign, which is based on this idea.
## 3 Background
PosDB is a disk-based distributed column-store which features explicit position manipulation, i.e. it is a "position-enabled" system. In this regard it is close to the ideas of early systems such as the C-Store [18] and the MonetDB [10].
PosDB uses the pull-based Volcano model [7] with block-oriented processing. Its core idea is to employ two types of intermediate representations: tuple- and position-based. In the tuple-based representation, operators exchange blocks of value tuples. This type of representation is similar to most existing DBMSs. On the other hand, the position-based representation is a characteristic feature of PosDB. In the positional form, intermediates are represented by a generalized join index [22], which is presented in Fig. 1(a). The join index stores an array of record indices, i.e., positions, for each table it covers (top of Fig. 1(a)). Tuples are encoded
using rows in the join index. Most operators in PosDB are either positional or tuple-based, with positional ones having specialized Reader entities for reading values of individual table attributes. The query plan in PosDB is divided into positional and tuple parts, and the moment of converting positions into tuples is called materialization. Materialization must be performed at some point in the query plan, since the user ultimately needs tuples, not positions. It can be performed either by a special Materialize operator or by some other operators, such as an aggregation operator.
In the query plan presented in Fig. 1(b), the materialization point is indicated by a brown dotted line. Below the line, the positional representation is used, while above the line the tuple representation is used. In the latest version of PosDB, a query plan may contain several materialization points, in such a manner that every leaf-root path will have one.
Such an architecture leads to several different classes of query plans, which are discussed in reference [4]. Operating on positions instead of tuple values allows the engine to achieve significant cost savings for some queries. For example, in the case of a filtering join, it is possible to reduce the total amount of data read from disk if the join is performed on positions first, with the rest of the necessary columns read afterwards. This is the general idea of late materialization, and it was extensively used for implementing many operators and their combinations [1, 9, 12, 21, 15]. At the same time, in PosDB it is possible to build plans equivalent to naive [1] column-stores, i.e. plans which read only the necessary columns, construct tuples, and continue as if it were a row-store. In this paper we are going to discuss an application of this technique for processing recursive queries.
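As a schematic illustration of this idea (ours, not PosDB's actual interfaces), consider a columnar table stored as one array per attribute; a filtering step can then operate on a single column and produce only positions, deferring reads of the remaining columns:

```
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# A columnar table: each attribute lives in its own array ("column file").
edges = {
    "id":   np.arange(n),
    "from": rng.integers(0, 1000, n),
    "to":   rng.integers(0, 1000, n),
    "name": np.array([f"e{i}" for i in range(n)]),
}

# Late materialization: evaluate the predicate on a single column and keep
# only positions (a one-table join index over `edges`) ...
positions = np.flatnonzero(edges["from"] == 0)

# ... then read the remaining columns only at the qualifying positions,
# instead of materializing full tuples for all n rows up front.
result = {col: edges[col][positions] for col in ("id", "from", "to", "name")}
```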
PosDB is a large project and it has many features and implementation details. However, they are out of the scope of this work and are not necessary for its understanding. A detailed description of baseline architecture can be found in paper [3], and the recent additions are described in [4]. Finally, an interactive demo of PosDB can be seen at the following link1.
Footnote 1: [https://pos-db.com/](https://pos-db.com/)
Figure 1: PosDB internals
## 4 Proposed Approach
In order to implement recursive queries in PosDB, we have introduced two new operator groups into its operator set. These groups share the same use pattern and differ only in the used data representation (rows or positions). Their generalized operation flow is presented in Fig. 2 and is as follows:
* The Recursive operator stores pointers to RecursiveCTE, ChildOperator and RecursiveChildOperator. ChildOperator is used for the non-recursive part of the query; with its help, PosDB gets the initial rows or initial positions. RecursiveChildOperator is a regular operator, but internally it either explicitly or implicitly (via several intermediate operators) receives data from RecursiveCTE.
* RecursiveCTE stores a pointer to the Recursive operator, from which it requests the new records that it then passes, via the Next method, to RecursiveChildOperator.
Recall that there are two types of intermediate data representation in PosDB: tuple-based and positional. This results in two sets of operators:
* TRecursive and TRecursiveCTE that only work with blocks of tuples.
* PRecursive and PRecursiveCTE that only work with position blocks.
We have designed only these two sets, each focusing on one particular data representation, either tuple-based or positional, even though the first thing which comes to mind is to use a combination of tuple-based and positional operators. For example, consider a case when ChildOperator and RecursiveCTE return a position block and RecursiveChildOperator returns a tuple block. In this case, the query engine would have to translate the tuples received from RecursiveChildOperator back into positions in order to use them for the second and subsequent steps of the recursion. However, this may be impossible in certain circumstances, if, for example, a generated attribute (e.g. \(value*2+1\)) is present in the tuple block. In this case, there will be no original column which a position may point to.
Figure 2: Sample query plan representation using Recursive and RecursiveCTE
For the sake of clarity, we are going to work through an example. Consider the following recursive query to find all neighbors of a vertex with id = 0 up to depth 4:
```
WITH RECURSIVE edges_cte(id, from, to) AS
  (SELECT edges.id, edges.from, edges.to
   FROM edges WHERE edges.from = 0
   UNION ALL
   SELECT edges.id, edges.from, edges.to
   FROM edges JOIN edges_cte AS e
   ON edges.from = e.to)
SELECT edges_cte.id, edges_cte.from, edges_cte.to
FROM edges_cte
OPTION (MAXRECURSION 4);
```
Listing 1: Recursive query example
The plan of this query using the introduced structures can be represented via the diagram in Fig. 3. In this figure:
* The left Materialize operator is a ChildOperator: it will be executed once in order to initialize the starting set of tuples.
* The Join is a RecursiveChildOperator.
* The set of tuple blocks of the current recursion step is stored inside TRecursive: we will call it curLevel. In addition, TRecursive stores the position of the block in curLevel which should be passed to TRecursiveCTE next time.
Figure 3: Query tree using TRecursive and TRecursiveCTE
The evaluation itself is as follows (a schematic sketch of this loop is given after the list):
1. TRecursive requests blocks from the left Materialize as long as they are not empty and stores them in curLevel.
2. TRecursive passes all blocks from curLevel up until it reaches the end of curLevel.
3. To get the block of the new recursion step, TRecursive requests the block from Join.
4. Join requests blocks from TRecursiveCTE and from the right Materialize.
5. TRecursiveCTE asks TRecursive for blocks.
6. TRecursive increments the internal counter and passes blocks from curLevel to TRecursiveCTE in response to its requests.
7. After assembling a new block of a certain size, Join passes it to TRecursive. If it is a non-empty block, then TRecursive stores it in the nextLevel temporary storage and proceeds to Step 3. If it is an empty block, then curLevel is replaced with nextLevel. If curLevel is now an empty set of blocks, we say that TRecursive has finished its work; otherwise, TRecursive proceeds to Step 2.
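The following Python sketch condenses Steps 1-7 into a single driver loop; the generator interface is our simplification and does not reflect PosDB's pull-based block API:

```
def t_recursive(child, recursive_step):
    # Schematic TRecursive driver condensing Steps 1-7.
    #   child()               -- yields tuple blocks of the non-recursive part
    #   recursive_step(level) -- evaluates the recursive member (the join
    #                            with edges_cte) against the current level
    cur_level = [b for b in child() if b]                         # Step 1
    while cur_level:
        yield from cur_level                                      # Step 2
        next_level = [b for b in recursive_step(cur_level) if b]  # Steps 3-6
        cur_level = next_level                                    # Step 7

# Toy usage: BFS over an edge list, one block per recursion level.
edges = [(0, 1), (0, 2), (1, 3)]
seed = lambda: [[(f, t) for (f, t) in edges if f == 0]]
step = lambda level: [[(f, t) for (f, t) in edges
                       for block in level for (_, pt) in block if f == pt]]
for block in t_recursive(seed, step):
    print(block)   # [(0, 1), (0, 2)] then [(1, 3)]
```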
The plan of the same query using the PRecursive operator can be represented via the diagram in Fig. 4. Evaluation of this query tree proceeds in a similar way. An important limitation here is that we can only work with positions of the same table, which means that the Join operator must return positions of the same table as the left Filter operator. Note also that curLevel now stores position blocks instead of tuple blocks. In all other respects, the logic of the PRecursive and PRecursiveCTE operators is completely identical to the logic of their tuple-based counterparts.

Figure 4: Query tree using PRecursive and PRecursiveCTE
## 5 Evaluation
### Methodology
We evaluate our implementations using hierarchical recursive queries on a tree graph. We generated the datasets for the experiments with a simple script2. All evaluated queries solve the task of finding all nodes that lie at a distance of \(n\) hops from the root using the BFS algorithm. A test graph was stored in PosDB and PostgreSQL as an edge list. Columns are of the following types: id, from, to are int (4 bytes); name is varchar(15) (32 bytes); each additional column in the second and third test sets is varchar(20) (42 bytes). The number of table rows is indicated above the figures with test results.
Footnote 2: [https://github.com/Firsov62121/tree_generator](https://github.com/Firsov62121/tree_generator)
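For reference, a hypothetical generator of such a dataset could look as follows (this is an illustrative stand-in, not the script from footnote 2):

```
import csv

def generate_tree(height, fanout, path="edges.csv"):
    # Writes a complete tree of the given height and fanout as an
    # (id, from, to, name) edge list matching the schema described above.
    next_id, frontier, rows = 1, [0], []
    for _ in range(height):
        nxt = []
        for parent in frontier:
            for _ in range(fanout):
                rows.append((len(rows), parent, next_id, f"node{next_id}"))
                nxt.append(next_id)
                next_id += 1
        frontier = nxt
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["id", "from", "to", "name"])
        w.writerows(rows)
```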
In order to evaluate our solution, we have selected PostgreSQL as the baseline. Our choice is based on the following considerations. First of all, we needed a classic row-store system in order to demonstrate the advantages of our approach. Second, PostgreSQL meets another important requirement: it is free from the DeWitt clause3.
Footnote 3: [https://www.brentozar.com/archive/2018/05/the-dewitt-clause-why-you-rarely-see-database-benchmarks](https://www.brentozar.com/archive/2018/05/the-dewitt-clause-why-you-rarely-see-database-benchmarks),
PostgreSQL was configured as follows: JIT compilation and parallelism were disabled since JIT compilation is not implemented in PosDB and enabling parallelism would add unnecessary complexity without contributing anything important in the scope of this paper. To ensure that hash join is used in the query plan as in PosDB, merge and nested loop joins were turned off using planner parameters. The temp_buffers and work_mem parameters were set to values that ensure that any table under test fits into memory. To prevent caching from affecting the results, PostgreSQL internal caches were cleared between runs.
PosDB buffer manager was set to 1GB (32K pages of 32KB size).
Each experiment was repeated 10 times, and the average of the results was calculated.
We pose the following research questions:
* Does our position-based approach bring any performance gain in a special case when all attributes of a table are required in the recursive part of a query?
* What performance advantage does our position-based approach offer when introducing additional payload through auxiliary attributes used in projection?
* Is it possible to emulate our approach inside a row-store by rewriting a query in such a way that it will keep a minimum number of columns inside the recursive core and then join the rest?

To answer these questions, we have designed the following experiments, corresponding to each RQ:

1. For the first experiment, we used a BFS query with the table consisting only of attributes required for the traversal, giving no benefits to PosDB (see Listing 1 and the corresponding plans in Figures 3 and 4).
2. For the second experiment, we used the query from the first experiment, modified by adding payload attributes to the input table itself and to all projections. SQL queries of the following type were used for PostgreSQL and PosDB:
```
WITH RECURSIVE edges_cte (id, from, to, column1,
                          ..., columnN, depth) AS
  (SELECT edges.id, edges.from, edges.to,
          edges.column1, ..., edges.columnN, 0
   FROM edges WHERE edges.id = 0
   UNION ALL
   SELECT edges.id, edges.from, edges.to,
          edges.column1, ..., edges.columnN,
          e.depth + 1
   FROM edges JOIN edges_cte AS e
   ON edges.from = e.to AND e.depth < DEPTHVAL)
SELECT edges_cte.id, edges_cte.from, edges_cte.to,
       edges_cte.column1, ..., edges_cte.columnN
FROM edges_cte;
```
3. For the third experiment, we have created a special type of query:
```
WITH RECURSIVE edges_cte(id, to, depth) AS
  (SELECT edges.id, edges.to, 0 FROM edges
   WHERE edges.from = 0
   UNION ALL
   SELECT edges.id, edges.to, e.depth + 1
   FROM edges JOIN edges_cte AS e
   ON edges.from = e.to AND e.depth < DEPTHVAL)
SELECT edges.id, edges.to, edges.from,
       column1, ..., columnN
FROM edges JOIN edges_cte ON edges.id = edges_cte.id;
```
In all plans of all evaluated systems the edges_cte was hashed in the hash join (default PostgreSQL behavior).
### Experimental Setup
Experiments were performed using the following hardware and software configuration. Hardware: Intel(r) Core(tm) i7-8550U CPU @ 1.80GHz (8 cores), 16 GiB RAM, 500GB Samsung PSSD T7. Software: Ubuntu 20.04.5 LTS x86_64, Kernel 5.15.0-60-generic, gcc 9.4.0, PostgreSQL 14.2.
### Experiments & Discussion
**Experiment 1.** The results are presented in Figure 5. The TRecursive approach exhibits performance that is similar to that of PostgreSQL, as expected, since the underlying query engines perform identical operations. Meanwhile, PRecursive outperforms TRecursive significantly, because it uses only two out of the four attributes (from and to) during the search, and materializes values of the third attribute (id) only when the required table rows are known. The number of rows scheduled to be materialized is much smaller than the total number of rows in the table (by roughly 200 times), resulting in operators passing around intermediate results of much smaller size. The index in PostgreSQL was built over from, since it is used to find edges in the join in the recursive part of the query. As we can see, this helps improve PostgreSQL performance, although it is a small improvement. Moreover, with the increase in depth, TRecursive demonstrates slightly better results compared to PostgreSQL with Index.
**Experiment 2.** The queries considered in this experiment were executed on a dataset with an additional parameter, denoted as N, which corresponds to the number of additional columns. The query plans for PosDB and PostgreSQL remain almost identical to those used in the first experiment, with the addition of ancillary columns to the Materialize operators in PosDB and to the projections in PostgreSQL, respectively.
Due to space constraints, we did not include PostgreSQL with INDEX in this and subsequent experiments, as its behavior is equivalent to that of PostgreSQL when changing the parameter N. Furthermore, results of PRecursive were only included for the maximum number of additional columns, as the time taken by PRecursive was predictably found to be almost independent of N.
Figure 5: Experiment 1 results
The run times of the queries depending on the depth of the traversal are presented in Fig. 6. As we can see, with increasing depth, the gap in the runtime on tables of different "widths" grows. PosDB PRecursive outperforms all other approaches. This happens due to late materialization reducing the sizes of intermediate results significantly. In this experiment, it matters even more due to the substantial overhead associated with passing "wide" intermediates (all columns) between operators, even though only two of them are required for the recursive part (from and to). It is important to mention that as the "width" of the passed rows grows, PosDB TRecursive falls behind PostgreSQL. This happens because of the columnar nature of PosDB. With TRecursive, it requires more disk accesses (one for each column) to retrieve a single row from a table. In contrast, PostgreSQL can do this with a single access, since all the data for a table row is stored together.
**Experiment 3.** This experiment is similar to the previous one, but now we are trying to conserve space in edges_cte by reducing the size of the intermediates. We only store the data necessary for reconstructing the original table row and navigating through the tree. This query requires more RAM since a second copy of the edges table is required by the top-level join. Therefore, we had to reduce the dataset due to the memory constraints of the employed hardware.
The run times of the queries depending on the depth of the traversal are presented in Fig. 7. It can be seen that the performance of PRecursive is similar to its performance in the previous experiment; it again has the best performance among all compared approaches. TRecursive, however, beats PostgreSQL in this experiment.
As we can see from the performance improvement of TRecursive, this method helps reduce the disk access overhead of row reconstruction inside the TRecursive operator. In PosDB, unnecessary columns are now materialized only once, at the very end, whereas in PostgreSQL the internal hash join involves inefficient sequential data reads that discard unnecessary columns. Finally, this experiment shows that our approach cannot be emulated inside PostgreSQL via a join.

Figure 6: Experiment 2 results
## 6 Conclusion
In this paper, we proposed two approaches to implementing recursive queries in a position-enabled column-oriented DBMS: TRecursive and PRecursive, with the latter utilizing positions to implement the late materialization approach. We implemented two sets of operators: 1) a tuple-based set which is similar to the operators that can be found in classical row-stores but leveraging columnar data access, and 2) a positional-based set which is the main contribution of the paper.
We conducted experiments to evaluate the performance of the proposed approaches and used PostgreSQL as the baseline. Experiments demonstrated that both approaches offer improvement, with PRecursive offering up to a 6x performance gain over PostgreSQL and 3x over TRecursive. However, TRecursive remains the only option if two (or more) distinct tables are used in the RECURSIVE part, due to implementation-related restrictions. Also, TRecursive yields up to a 3x improvement over PostgreSQL when additional payload columns that are not required in the RECURSIVE part exist and the query can be rewritten to project them only at the top level. Finally, we have shown that it is not possible to emulate our approach inside a row-store efficiently.
|
2305.08926 | Perceptive Locomotion through Whole-Body MPC and Optimal Region
Selection | Real-time synthesis of legged locomotion maneuvers in challenging industrial
settings is still an open problem, requiring simultaneous determination of
footsteps locations several steps ahead while generating whole-body motions
close to the robot's limits. State estimation and perception errors impose the
practical constraint of fast re-planning motions in a model predictive control
(MPC) framework. We first observe that the computational limitation of
perceptive locomotion pipelines lies in the combinatorics of contact surface
selection. Re-planning contact locations on selected surfaces can be
accomplished at MPC frequencies (50-100 Hz). Then, whole-body motion generation
typically follows a reference trajectory for the robot base to facilitate
convergence. We propose removing this constraint to robustly address unforeseen
events such as contact slipping, by leveraging a state-of-the-art whole-body
MPC (Croccodyl). Our contributions are integrated into a complete framework for
perceptive locomotion, validated under diverse terrain conditions, and
demonstrated in challenging trials that push the robot's actuation limits, as
well as in the ICRA 2023 quadruped challenge simulation. | Thomas Corbères, Carlos Mastalli, Wolfgang Merkt, Ioannis Havoutis, Maurice Fallon, Nicolas Mansard, Thomas Flayols, Sethu Vijayakumar, Steve Tonneau | 2023-05-15T18:02:20Z | http://arxiv.org/abs/2305.08926v2 | # Perceptive Locomotion through Whole-Body MPC and Optimal Region Selection
###### Abstract
Real-time synthesis of legged locomotion maneuvers in challenging industrial settings is still an open problem, requiring simultaneous determination of footsteps locations several steps ahead while generating whole-body motions close to the robot's limits. State estimation and perception errors impose the practical constraint of fast re-planning motions in a model predictive control (MPC) framework. We first observe that the computational limitation of perceptive locomotion pipelines lies in the combinatorics of contact surface selection. Re-planning contact locations on selected surfaces can be accomplished at MPC frequencies (50-100 Hz). Then, whole-body motion generation typically follows a reference trajectory for the robot base to facilitate convergence. We propose removing this constraint to robustly address unforeseen events such as contact slipping, by leveraging a state-of-the-art whole-body MPC (Crocoddyl). Our contributions are integrated into a complete framework for perceptive locomotion, validated under diverse terrain conditions, and demonstrated in challenging trials that push the robot's actuation limits, as well as in the ICRA 2023 quadruped challenge simulation.
perceptive locomotion, model predictive control, contact planning, quadruped robots
## I Introduction
Reliable and autonomous locomotion for legged robots in arbitrary environments is a longstanding challenge. The hardware maturity of quadruped robots [1, 2, 3, 4] motivates the development of a motion synthesis framework for applications including inspections in industrial areas [5]. However, synthesising legged motions in this context requires handling the issues of _contact decision_ (where should the robot step?) and _Whole-Body Model Predictive Control_ (WB-MPC) of the robot (what motion creates the contact?).
Each contact decision defines high-dimensional, non-linear geometric and dynamic constraints on the WB-MPC that prevent a trivial decoupling of the two issues: How to prove that a contact plan is valid without finding a feasible whole-body motion to achieve it? Even worse, the environments we consider comprise holes and gaps, introducing a _combinatorics_ problem: On which contact surface(s) should the robot step?
### _State of the art_
The mathematical complexity of the legged locomotion problem in arbitrary environments is such that an undesired decoupling between contact planning and whole-body control is required. Typically, a contact plan describing the contact locations is first computed, assumed to be feasible, and provided as input to a WB-MPC framework to generate whole-body motions along it. As a result, the contact decision must be made using an approximated robot model, under the uncertainty that results from imperfect perception and state estimation. The complexity of the approximated model has, unsurprisingly, a strong correlation with the versatility and computational efficiency of the proposed approach.
#### I-A1 Offline contact planning with full kinematics
Early approaches to contact planning are robust and complete, because they integrate a whole-body kinematic model in the planning phase, with quasi-static feasibility guarantees. These _contact-before-motion_ approaches [6, 7, 8] mix graph-based search with contact-posture sampling to explicitly tackle the problem's combinatorics. Whole-body feasibility is explicitly checked before validating each new contact. The generality of these approaches is unmatched, but the associated computational cost is high (from minutes to hours), which prevents online re-planning. The computation time
Fig. 1: Industrial staircase descent with onboard perception. Video: [https://youtu.be/bNVxTh0eixI](https://youtu.be/bNVxTh0eixI).
can be reduced to a few seconds by constraining the root's path [9], and then by approximating the robot with low-dimensional abstractions [10, 11]. These abstractions use hand-crafted heuristics for collision avoidance and geometry [12], while dynamic feasibility is often asserted using centroidal dynamics (e.g., [13, 14]). Such approximations proved unreliable for the most challenging scenarios (such as car egress motions [10]), mostly because of the difficulty of approximating collision avoidance constraints. Still, most "2.5D" environments (which can be accurately represented with a heightmap) composed of quasi-flat contact surfaces (i.e., whose associated friction cone contains the gravity vector) can be handled with such constraints [10]. Unsurprisingly, these environments, including rubble, stepping stones and staircases, are the application targets for most contributions in the literature.
#### I-A2 Online contact planning with reduced dynamics
The combination of reduced dynamic models and simplified collision constraints makes optimisation-based techniques tractable. Optimal control is attractive as it allows us to find solutions robust to uncertainties through the minimisation of selected criteria. Sampling-based approaches, instead, only look at feasibility. To model the combinatorics, the first approach is to relax the problem by modelling the discrete (boolean) variables that represent the contact decisions with continuous variables, resulting in a formulation that can be readily solved by off-the-shelf nonlinear programming (NLP) solvers [15, 16]. However, there is no guarantee that the system's dynamic constraints will be satisfied even with reduced models, with contacts potentially planned where no surfaces exist [17]. Alternatively, Linear Complementarity Constraints (LCP) can be used to accurately model contact constraints, but they are notoriously difficult to handle by NLP solvers [18]. In both cases learning initial guesses can help the solver converge to a feasible solution [19, 20]. The second approach is to explicitly handle combinatorics using Mixed-Integer Programming (MIP). MIP solvers tackle contact planning problems for bipeds [21] and quadrupeds [22], provided that the underlying optimisation problem is convex, which results in a conservative approximation of the dynamics [23]. Monte Carlo Tree Search (MCTS) has recently been proposed as a promising alternative to MIP that could provide a relevant trade-off between exploration and exploitation [24]. In this work, we choose MIP for planning contacts as it has experimentally led to the most effective results in terms of computation time and reliability [25].
#### I-A3 Perceptive locomotion with instantaneous decisions
Whether the environment is fully known or not impacts the validity of a method. Reactive perceptive pipelines exist on the LittleDog robot [26, 27, 28] and have inspired further works [29]. However, they all require high-precision terrain pre-mapping and an external motion capture system. When the environment is not fully known, it is typically modelled as an elevation map by fusing depth sensor information with proprioceptive information [30, 31]. Recent approaches propose to directly optimise the next contact position, the torso orientation and obstacle avoidance for the foot trajectory based on this input [32, 33, 34]. These approaches share similarities with the framework we propose in terms of the proposed models. However, their main difference is that they focus on planning the immediate contact location and posture, whereas we argue that a preview window of several steps ahead is required for the scenarios we consider. As discussed in Section IX, we believe that combining these approaches is a promising research avenue.
#### I-A4 Whole-body predictive control relying on CoM motions
Because of the nonlinearity induced by any changes to the contact plan, the WB-MPC rarely challenges the step locations, even though the approximations do not guarantee feasibility. The uncertainties resulting from state estimation and environment perception motivate frequent re-computation of the contact plan, which is not possible as the planners' update frequency is limited to \(5\,\mathrm{Hz}\). Furthermore, the WB-MPC is usually additionally constrained to track a reference trajectory for the Centre Of Mass (COM) or the base [35, 36] to facilitate convergence, but we argue that this tracking is problematic when perturbations such as contact slipping occur. Our conclusion is that the use of reduced models for contact planning necessarily leads to errors in the WB-MPC that result in slipping contacts. In the current state of the art, reduced models appear necessary for satisfying real-time constraints. Mitigating this issue thus requires adapting the contact plan at a higher frequency, and it involves writing a WB-MPC that robustly accommodates these errors and gives as much freedom as possible when following a contact plan.
### _Contribution_
This work proposes a novel perceptive locomotion architecture comprised of the following features: terrain segmentation, real-time surface selection and footstep planning, free-collision foot-swing planning, and whole-body MPC which considers torque limits and generates local controllers. It relies on four technical contributions:
1. a theoretical contribution to the decoupling between contact planning and WB-MPC,
2. an empirical demonstration of the added value of WB-MPC in perceptive locomotion,
3. a convex segmentation approach using onboard terrain elevation maps, and
4. exhaustive hardware trials on challenging terrains demonstrating increased capabilities for the ANYmal B robot.
This paper extends our conference paper [25], where we propose a _contact repositioning_ module between the contact surface planner and the MPC to adapt to the robot's estimated state and perceived environment uncertainties. To achieve this, we decouple the contact surface selection from the control, but not the computation of the contact position on that surface, which we instead update synchronously with the MPC output and the updated state of the robot, at \(50\,\mathrm{Hz}\).
Additionally, we have incorporated a feedback WB-MPC [37, 38] into our framework as opposed to the conventional MPC-WBC architecture used in our previous work. Constraining only the end-effector trajectories, instead of the CoM motions, allows the WB-MPC to freely accommodate substantial perturbations and perception errors, and to maximise the robot's capabilities by optimising its posture.
Finally, we extend the plane segmentation algorithm [39], which decomposes potential contact surfaces into a sequence of convex surfaces needed by our contact planner. We use the Visvalingam-Whyatt and Tess2 algorithms to correctly handle overlapping surfaces and conservatively reduce the complexity of the scene, leading to a more versatile and robust environment construction. Our framework results in state-of-the-art locomotion capabilities under a wide range of conditions, robust to strong perturbations including sliding contacts and missed steps during stair climbing.
## II Architecture overview
An overview of the locomotion pipeline is presented in Fig. 2. Walkable surfaces are described via convex planes extracted from the terrain elevation map at \(1\,\mathrm{Hz}\). Given the current state of the robot (position / orientation of the base, position of active contacts) and a desired velocity (joystick input), a mixed-integer program is used to select the convex surfaces on which the next 6 or 8 steps will occur, depending on the gait used. A new surface plan is computed at the beginning of each new phase, which corresponds to approximately 3-5 Hz in our experiments. Given the next contact surfaces, the trajectory of each end-effector is updated at \(50\,\mathrm{Hz}\), before each iteration of the whole-body MPC. The control policy is then sent to the robot with a Riccati gain controller. State estimation is performed onboard at \(400\,\mathrm{Hz}\) by fusing inertial sensors from IMU and odometry provided by the ANYbotics software [40]. Finally, a LIDAR is used to correct the drift of these measurements by analysing fixed points in the environment.
## III Definitions and notations
In line with the notations used in our previous work [25], the _robot state_ is formally described by the:
* Centre Of Mass (COM) position, velocity and acceleration \(\mathbf{c}\), \(\mathbf{\dot{c}}\) and \(\mathbf{\ddot{c}}\), each in \(\mathbb{R}^{3}\);
* base transformation matrix in the world frame;
* 3D position of each end-effector in the world frame;
* gait, i.e., the list of effectors currently in contact, as well as the contacts to be activated and deactivated over the planning horizon.
The _horizon_\(n\) is defined as the number of future contact creations considered. In the case of the trotting gait, a horizon \(n=6\) describes three steps, as at each step two contacts are created simultaneously.
At the _Surface Selection_ planning stage, motion is decomposed into _contact phases_. Each contact phase is associated with a number of feet in contact and one or more contacts are broken/created at each phase. In the case of a trotting gait with a horizon \(n=6\), it corresponds to three contact phases since two feet move at the same time. For a walking gait, the horizon \(n=8\) contains 8 contact phases since only one foot moves at a time.
The _environment_ is the union of \(m+1\) disjoint quasi-flat1 contact surfaces \(\mathcal{S}=\bigcup_{i=0}^{m}\mathcal{S}^{i}\). Each set \(\mathcal{S}^{i}\) is a polygon embedded in a 3D plane, i.e.,
Footnote 1: i.e., such that its associated friction cone contains the gravity vector.
\[\mathcal{S}^{i}:=\left\{\mathbf{p}\in\mathbb{R}^{3}|\mathbf{S}^{i}\mathbf{p} \leq\mathbf{s}^{i}\right\}, \tag{1}\]
where \(\mathbf{S}^{i}\in\mathbb{R}^{h\times 3}\) and \(\mathbf{s}^{i}\in\mathbb{R}^{h}\) are respectively a constant matrix and a vector defining the \(h\) half-spaces bounding the surface.
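As an illustration of this representation (with assumed numbers), a horizontal unit square at height \(z=0.3\) and the corresponding membership test can be written as:

```
import numpy as np

# A horizontal unit square at height z = 0.3, encoded as S p <= s (Eq. (1)).
# Rows: x >= 0, x <= 1, y >= 0, y <= 1, z <= 0.3, z >= 0.3 (the plane).
S = np.array([[-1, 0, 0], [1, 0, 0], [0, -1, 0], [0, 1, 0],
              [0, 0, 1], [0, 0, -1]], dtype=float)
s = np.array([0.0, 1.0, 0.0, 1.0, 0.3, -0.3])

def on_surface(p, tol=1e-9):
    return bool(np.all(S @ p <= s + tol))

print(on_surface(np.array([0.5, 0.5, 0.3])))  # True
print(on_surface(np.array([1.2, 0.5, 0.3])))  # False
```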
The _contact plan_ is described as a list of contact surfaces \(\mathcal{S}^{j}_{k}\in\mathcal{S},1\leq j\leq l\), with \(l\) being the total number of end-effectors and \(k\) indexing the contact phase.
## IV Perception
This section reviews the _Elevation map_ and _Convex Plane Segmentation_ components of the architecture shown in Fig. 2.
### _Sensors_
Regarding the proprioceptive sensors, the state is estimated by fusing leg odometry and the Xsens MTi-100 IMU [40]. Additionally, a rotating Hokuyo UTM-30LX-EW lidar sensor is placed at the back of the robot to correct the drift of the state estimation with an iterative closest point (ICP) method
Fig. 2: Overview of our perceptive locomotion pipeline. Around 1 Hz, the perceptive elements _Elevation map_ and _Convex Plane Segmentation_ extract convex planes from the surrounding environment (Section IV). The _Surface Selection_ block, running around 5 Hz (depending on the gait chosen), chooses between them the next surfaces of contact (Section V). At the frequency of the MPC, \(50\,\mathrm{Hz}\), the **Footstep** and **Collision-free trajectory** elements generate the curve for each moving foot (Section VI). Finally, the _Whole-Body MPC_ and **Riccati gain controller** synthesise the motion (Section VII).
[41]. A single depth camera Intel RealSense D435, mounted at the front of the robot, extracts height information from the surrounding area into a point cloud. High-accuracy presets are used on the camera to prioritize the accuracy of the point cloud over speed. This increases the quality of the elevation map and no filters are needed to post-process the point clouds. However, this preset degrades the frequency in our setup to a range between 2 and \(5\,\mathrm{Hz}\) for point cloud collection.
### _Contact surfaces extraction_
The probabilistic and local mapping method in [30] converts point cloud data into an elevation map locally around the robot's pose. Including proprioceptive localisation from kinematic and inertial measurements produces an estimate of the surroundings as a 2.5D heightmap. Potential surfaces are then extracted with the Plane-Seg algorithm [39] by clustering planar points with similar normals. The planes are extracted without memory and therefore the surfaces can change suddenly with each height map update. The quality and consistency of the surfaces then depend entirely on the quality of the height map, as developed in Sec. IX.
### _Refinement and margin of safety_
Post-processing of the Plane-Seg planes is essential to ensure safe footstep decisions. We have added it for three reasons. First, the complexity of the contact surfaces (i.e., the number of points) has a significant impact on the computation time of the surface selection algorithm. We propose to conservatively approximate surfaces with more than 8 vertices by an 8-vertex polygon. Filtering is also done to remove unreachable surfaces. For example, the plane extraction method could return overlapping surfaces when considering a staircase. Here, the ground surface is often detected below the steps, and planning a footstep inside it will obviously result in failures. Finally, a safety margin, around \(4\,\mathrm{cm}\) on each surface, is applied to avoid putting feet on the edges of the surfaces, which can be harmful in the event of estimation errors. This margin is also useful to avoid collisions of the knees with the environment, which are not explicitly accounted for.
#### IV-C1 Vertices number reduction
The number of vertices is first reduced using the conservative Visvalingam-Whyatt line simplification algorithm [42]. It progressively eliminates the point of the contour that forms the smallest area with its closest neighbours, as described in pseudo-code 1. No hyperparameters other than the final number of points (which is 8) are needed. Fig. 3(a) shows an example of this reduction.
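A naive \(O(n^{2})\) version of this simplification step can be sketched as follows (a production variant would use a priority queue; all names here are ours):

```
import numpy as np

def tri_area(a, b, c):
    # Unsigned area of the triangle (a, b, c) for 2D points.
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def visvalingam_whyatt(polygon, target=8):
    # Repeatedly delete the vertex forming the smallest-area triangle with
    # its two neighbours (wrapping around, since the contour is closed).
    pts = [tuple(p) for p in polygon]
    while len(pts) > target:
        n = len(pts)
        areas = [tri_area(pts[i - 1], pts[i], pts[(i + 1) % n]) for i in range(n)]
        del pts[int(np.argmin(areas))]
    return np.asarray(pts)
```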
#### IV-C2 Safety margins
On this updated contour, an inner and an outer margin are computed parallel to each edge, as shown in Fig. 3(b). The inner margin allows the robot to step into a safe area. The outer margin artificially increases the size of the obstacle to prevent the end-effector from getting too close to the obstacle. This is done to avoid collisions while computing swing-foot trajectories. These two margins are used to avoid stepping on the corner of an obstacle due to state estimation errors and for collision avoidance with the body.
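Using an off-the-shelf geometry library, the two offsets can be sketched as polygon buffers with mitred joins, so that the offsets stay parallel to the edges (Shapely is our illustrative choice; the paper does not prescribe a library):

```
from shapely.geometry import Polygon

def apply_margins(vertices, margin=0.04):
    poly = Polygon(vertices)
    inner = poly.buffer(-margin, join_style=2)  # safe stepping region
    outer = poly.buffer(+margin, join_style=2)  # inflated obstacle footprint
    return inner, outer
```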
#### IV-C3 Convex decomposition
Starting from the lowest surface, the overlapping surfaces are removed and a convex decomposition is performed on the resulting contour using the Tess2 algorithm [43], as shown in Fig. 4(a). Smaller areas under 0.03 m\({}^{2}\) are deleted. Algorithm 2 gives the pseudo-code of this post-processing. In addition, a rectangle is added below the robot's position, at the estimated height of the feet. This is to ensure that there is always a surface under the robot's feet if the elevation map has not been built. This surface is treated differently in the process to avoid overlapping it with real obstacles, but has been removed from the pseudo-code for clarity.
Fig. 4: Example of surface processing and convex decomposition. These two figures represent the same 3D scene, with two stair surfaces overlapping the ground surface. On the right, Fig. 4(b), the scene is viewed from a top perspective.
Fig. 3: Reduction to an 8-vertex polygon using the Visvalingam–Whyatt algorithm on an initial 20-vertex polygon (left, Fig. 3(a)). Applying inner and outer margins to an 8-vertex polygon (right, Fig. 3(b)).
## V Surface Selection
Given the current state of the robot (active contacts and position, COM location), the environment given as a union of non-intersecting surfaces, as well as a desired target velocity for the robot and a desired gait, our _Surface Selection_ (Fig. 2 - yellow) algorithm computes a feasible contact plan, composed of \(n\) contact surfaces that the robot should step on for the planning horizon (Sec. III). We set \(n=8\) for a walking gait and \(n=6\) for a trotting gait in our experiments. The surface selection algorithm is executed between 3 and \(5\,\mathrm{Hz}\). This means that the contact plan is updated before each new step is made by the robot. The frequency depends on the gait and each optimisation starts at the beginning of each step.
The algorithm is implemented as a Mixed-Integer Program (MIP) [44]. We use the SL1M formulation of this algorithm [17], with adaptations to better match the desired robot behaviour. These adaptations were introduced in [25] and are described here for completeness. In that previous work, the number of contacts optimised was set to 4 for the Solo robot [45]. Here, ANYmal's dynamics are slower and the timing between each new contact is longer2. This gives more time for computing the contact plan, allowing us to increase the planning horizon. We refer the reader to [25] for empirical justifications of these choices.
Footnote 2: The duration of a step was set to \(160\,\mathrm{ms}\) on Solo whereas it is set to \(600\,\mathrm{ms}\) for a walking gait and \(300\,\mathrm{ms}\) for a trotting gait on ANYmal.
We first describe how the potential contact surfaces are pre-filtered to improve the algorithm's computational performance without loss of generality. We then provide the mathematical formulation of the Surface Selection algorithm. We conclude this section with the details of the cost function used in this optimisation problem.
### _Pre-selection_
We first pre-filter the number of contact surfaces using the robot's range of motion (ROM) to reduce the combinatorics. To do so, a CoM trajectory is extrapolated from the joystick velocity command over the contact phases (Fig. 5a). The position of each state and the yaw angle (orientation around the z-axis) are integrated from the linear and angular (yaw only) desired velocities. The roll (orientation around the x-axis) and pitch (orientation around the y-axis) angles of the guide are computed as the average slope of the terrain around the robot position, obtained by solving
\[\begin{aligned}\min_{\mathbf{n}}\quad&\|\mathbf{A}\mathbf{n}-\mathbf{B}\|^{2}\\ \text{with}\quad\forall\,i,j\in N_{x}\times N_{y},\quad&\mathbf{A}[i+jN_{x},\,:\,]=[x_{i},\ y_{j},\ 1],\\ &\mathbf{B}[i+jN_{x}]=\mathrm{elevation}(x_{i},y_{j}),\end{aligned}\]
where \(x_{i}\) and \(y_{j}\) are respectively the positions on the x and y axes around the robot's position, with a user-defined resolution of \((N_{x},N_{y})\in\mathbb{N}^{+2}\). The elevation function gives the terrain height at the position \((x_{i},y_{j})\) and can be obtained directly from the heightmap or by evaluating the convex plane corresponding to this 2D position. \(\mathbf{n}=[a,b,c]\in\mathbb{R}^{3}\) is the vector optimised to form the plane of equation \(ax+by-z+c=0\). The experiments use \(10\times 10\) as resolution. For each extrapolated state \(\mathbf{c}_{j}^{*}\) (8 states for a walking gait as \(n=8\), and 3 for a trotting gait as \(n=6\) but two foot contacts are created at each phase), the 6D configuration is given by:
\[\mathbf{c}_{j}^{*}=\begin{bmatrix}x_{0}+\int_{0}^{t_{j}}(v_{x}^{*}\cos(\dot{\psi}^{*}t)+v_{y}^{*}\sin(\dot{\psi}^{*}t))\,dt\\ y_{0}-\int_{0}^{t_{j}}(v_{x}^{*}\sin(\dot{\psi}^{*}t)-v_{y}^{*}\cos(\dot{\psi}^{*}t))\,dt\\ ax_{j}+by_{j}+c+h_{\mathrm{ref}}\\ \arctan(b)\\ -\arctan(a)\\ \psi_{0}+\dot{\psi}^{*}t_{j}\end{bmatrix}, \tag{3}\]
where the state is described using a 3D position \([x_{j},y_{j},z_{j}]\) and 3 Euler angles. The current extrapolated positions \(x_{j}\) and \(y_{j}\) are used to compute the configuration height. \(v_{x}^{*}\), \(v_{y}^{*}\) are the x-y linear velocities and \(\dot{\psi}^{*}\) the yaw angular velocity coming from the joystick input. \(h_{\mathrm{ref}}=0.48\,\mathrm{m}\) is the robot's nominal height. \(t_{j}\) is the step duration, manually defined for each gait.
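The slope fit above is a small linear least-squares problem; as an illustration (a sketch written for this text, assuming an `elevation` array sampled on the \(N_{x}\times N_{y}\) grid):

```python
import numpy as np

def average_slope(xs, ys, elevation):
    """Fit the plane a*x + b*y + c to heightmap samples; the guide's roll
    and pitch in Eq. (3) are then arctan(b) and -arctan(a)."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    A = np.column_stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    B = elevation.ravel()  # elevation[i, j]: terrain height at (xs[i], ys[j])
    (a, b, c), *_ = np.linalg.lstsq(A, B, rcond=None)
    return a, b, c
```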
For each \(\mathbf{c}_{j}^{*}\), the ROM of each moving leg is intersected with the surfaces. The ROM is approximated by a convex set and represented by the green surfaces in Fig. 5b. Only the surfaces that intersect this ROM are selected. The distance between two convex sets is computed efficiently with the GJK algorithm [46] from the Pinocchio and HPP-FCL libraries [47]. In our experiments, this usually reduces the number of potential surfaces from 20 to 3 for each moving foot on average. This significantly reduces combinatorics (from about \(20^{n}\) to \(3^{n}\) possible combinations).
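As a simplified stand-in for this pre-selection (the robot uses GJK on 3D convex sets via Pinocchio/HPP-FCL; here we only sketch the idea with 2D footprints and hypothetical inputs):

```python
from shapely.geometry import Polygon

def preselect(rom_vertices, candidate_surfaces):
    """Keep only the surfaces whose footprint intersects the convex
    footprint of the moving leg's ROM at the extrapolated state."""
    rom = Polygon(rom_vertices).convex_hull
    return [s for s in candidate_surfaces if Polygon(s).intersects(rom)]
```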
### _Surface Selection algorithm as a MIP_
The surface selection module computes a contact plan for the robot that satisfies linearized kinematic constraints [13, 48]. In the following, we recall the mathematical formulation of the problem as a mixed-integer program (MIP). This MIP outputs the contact surfaces selected. It also computes the 3D locations of footsteps as a by-product. However, they are discarded as the target footsteps will be adapted at a higher frequency by the footstep planner.
#### V-B1 Contact constraint representation
The perception pipeline provides a set of \(m+1\in\mathbb{N}^{+}\) disjoint quasi-flat surfaces, which define the environment \(\mathcal{S}\), and the motion is decomposed into \(n\) contact phases (Sec. III), as shown in Fig. 6. To constrain a point \(\mathbf{p}\in\mathbb{R}^{3}\) to be in contact, we must write:
\[\exists\ i,\mathbf{S}^{i}\mathbf{p}\leq\mathbf{s}^{i}\Leftrightarrow\mathbf{ S}^{0}\mathbf{p}\leq\mathbf{s}^{0}\vee\cdots\vee\mathbf{S}^{m}\mathbf{p}\leq \mathbf{s}^{m}. \tag{4}\]
The _or_ constraint is classically expressed as an integer constraint using the Big-M formulation [2] as follows. We introduce a vector of binary variables \(\mathbf{a}=[a_{0},\ldots,a_{m}]\in\{0,1\}^{m+1}\) and a sufficiently large constant \(M\in\mathbb{R}^{+}\), \(M\gg 0\). (4) is equivalently rewritten as:
\[\forall i,\mathbf{S}^{i}\mathbf{p}\leq\mathbf{s}^{i}+M(1-a_{i});\sum_{i=0}^{ m}a_{i}=1. \tag{5}\]
Under this formulation, if \(a_{i}=1\), then \(\mathbf{p}\) belongs to \(\mathcal{S}^{i}\) (and is thus in contact). Instead, if \(a_{i}=0\), then for a sufficiently large \(M\) the constraint \(\mathbf{S}^{i}\mathbf{p}\leq\mathbf{s}^{i}+M(1-a_{i})\) is satisfied for any value of \(\mathbf{p}\); in other words, the constraint is inactive. \(\sum_{i=0}^{m}a_{i}=1\) implies that \(\exists i,a_{i}=1\). The obtained behaviour is thus the desired one: if (5) is true then \(\mathbf{p}\) is in contact with a surface.
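In a modelling language such as `cvxpy`, the Big-M trick of (5) for a single footstep takes only a few lines (an illustrative sketch, not our implementation; the toy surface data and `p_target` are ours, and a mixed-integer-capable solver such as ECOS_BB or SCIP is assumed to be installed):

```python
import numpy as np
import cvxpy as cp

# toy data: two axis-aligned boxes described as half-spaces S_i p <= s_i
surfaces = [(np.vstack([np.eye(3), -np.eye(3)]), np.ones(6)),
            (np.vstack([np.eye(3), -np.eye(3)]), np.r_[4.0, 1, 1, -2, 1, 1])]
p_target = np.array([2.5, 0.0, 0.0])

M = 100.0                                    # big-M constant
p = cp.Variable(3)                           # one footstep position
a = cp.Variable(len(surfaces), boolean=True)
constraints = [cp.sum(a) == 1]               # exactly one surface is active
for i, (S_i, s_i) in enumerate(surfaces):
    # when a[i] = 0, the right-hand side is relaxed by M: constraint inactive
    constraints.append(S_i @ p <= s_i + M * (1 - a[i]))
cp.Problem(cp.Minimize(cp.sum_squares(p - p_target)),
           constraints).solve(solver=cp.ECOS_BB)
```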
#### V-B2 Problem formulation
The final MIP problem is obtained by combining contact and surface constraints as follows. For simplicity, and without loss of generality, we assume that the candidate contact surfaces are the same for each step.
\[\begin{split}\textbf{find}&\quad\mathbf{P},\ \mathbf{A}=[\mathbf{a}^{0},\cdots,\mathbf{a}^{n-1}]\in\{0,1\}^{(m+1)\times n}\\ \textbf{min}&\quad l(\mathbf{P})\\ \text{s.t.}&\quad\mathbf{K}\mathbf{P}\leq\mathbf{k},\\ &\quad\mathbf{P}\in\mathcal{C},\\ &\quad\forall j\in\{0,\ldots,n-1\}:\\ &\quad\forall i,\ \mathbf{S}^{i}\mathbf{p}_{j}\leq\mathbf{s}^{i}+M(1-a_{i}^{j});\quad\sum_{i=0}^{m}a_{i}^{j}=1,\end{split} \tag{6}\]
where \(\mathbf{P}=[\mathbf{p}_{0}\ \ldots\ \mathbf{p}_{n-1}]\in\mathbb{R}^{3\times n}\) comprises the \(n\) next foot positions (6 or 8 in our experiments); \(\mathbf{a}^{j}=[a_{0}^{j},\ldots,a_{m}^{j}]\) is the vector of binary variables associated with the \(j\)-th optimised foot position; \(l(\mathbf{P})\) is an objective function; \(\mathcal{C}\) is a set of user-defined convex constraints (in our case, constraints on initial contact positions); \(\mathbf{K}\) and \(\mathbf{k}\) are a constant matrix and vector representing the linearised kinematic constraints on the position of each effector with respect to the others [48, 13, 16].
#### V-B3 Cost function details
As mentioned in [25], two quadratic costs are used in the problem. The first regularises the footstep locations using Raibert's heuristic. Since the optimisation is triggered at the beginning of each step (between 3 and \(5\,\mathrm{Hz}\)), and not at each time step (\(50\,\mathrm{Hz}\)), an approximation of the Raibert heuristic is applied. The idea is to interpolate the base position given the desired velocity and place the foot according to the estimated hip position. The second term penalises the distance between the hip and the foot location. This aims at penalising solutions close to
Fig. 5: (a) CoM extrapolation along the horizon. (b) ROMs of the 4 effectors for the current state. (c) ROMs along the horizon for the front right foot.
Fig. 6: Environment \(\mathcal{S}\) with 3 contact surfaces for a walking gait (1 foot moving at a time). Circles: initial positions; squares: next step locations.
reaching the robot's kinematic limits.
\[l(\mathbf{P})=\sum_{j=0}^{n-1}w_{1}\|\mathbf{p}_{j}-\mathbf{p}_{j}^{*}\|_{x,y}^{2}+w_{2}\|\mathbf{p}_{j}-\mathbf{p}_{\text{hip},j}^{*}\|^{2}, \tag{7}\]
where \(\mathbf{p}_{j}^{*}\) is the extrapolated foot position taking into account the linear and angular reference velocities at the corresponding contact phase. The weights \(w_{1}\) and \(w_{2}\) are set to 1 and 0.5. This penalisation only considers the \(\ell^{2}\) norm on the x- and y-axes. Similarly, \(\mathbf{p}_{\text{hip},j}^{*}\) is the extrapolated hip position from the desired velocity. The computation of the reference foot location is detailed in Sec. VI.
## VI Foot trajectory generation
Our _Footstep and Collision free-trajectory_ (Fig. 2 - red) algorithms compute the end-effector trajectory as a Bezier curve given the current robot state (COM position/orientation), the next moving foot within the horizon of the MPC, the contact plan obtained by the Surface Planner, the timings of contact depending on the gait chosen, as well as a desired target velocity for the robot. At the MPC frequency, set to \(50\,\mathrm{Hz}\), a first quadratic program (QP) computes the next foot location and a second QP calculates its trajectory during the flight. Their formulation differs from our previous work [25] as the base velocity is not optimised here.
### _Foot position optimisation_
We use an alternative to Raibert's heuristic [50], as is commonly done on quadruped robots [51, 52, 53]. In our case, we are simply interested in coherent foot locations according to the base position, reference velocity and gait period. The measured velocity is not part of this heuristic, making it a feed-forward strategy. Instead, our MPC plans the CoM trajectory (Sec. VII). A target 2D position of the foot \(\mathbf{p}^{*}\) is computed as follows:
\[\mathbf{p}^{*}=\mathbf{p}_{\text{hip}}+\frac{T_{s}}{2}\mathbf{v}_{ref}+\sqrt{ \frac{h}{g}}\mathbf{v}_{ref}\times\omega_{ref}, \tag{8}\]
where \(T_{s}\) is the stance phase time extracted from the phase-based gait pattern and \(\mathbf{v}_{\mathrm{ref}}\) is the reference velocity commanded by the user. Finally, \(h\), \(g\) and \(\omega_{\text{ref}}\) are respectively the nominal height of the robot, the gravity constant and the reference angular velocity around the z-axis. The estimated hip position at contact \(\mathbf{p}_{\text{hip}}\) accounts for the accumulated delay in the estimation of the base position, since a moving average is used to reject oscillations of the same period as the gait, as discussed in section VIII-G. The final position \(\mathbf{p}\) is the closest to \(\mathbf{p}^{*}\) that lies on the selected surface \(\mathcal{S}\) under the locally-approximated kinematic constraints \(\mathcal{K}\):
\[\min_{\mathbf{p}} \|\mathbf{p}-\mathbf{p}^{*}\|^{2}\] (9a) s.t. \[S\mathbf{p}\leq s, \tag{9b}\] \[\mathbf{p}\in\mathcal{K}. \tag{9c}\]
The difference with our MIP optimisation lies in the \(50\,\mathrm{Hz}\) computation frequency. As a result, the footstep is constantly updated relative to the base position.
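For illustration, the heuristic (8) and the projection QP (9) can be sketched as follows (a simplified 2D version written for this text; `S` and `s` jointly describe the selected surface and the linearised kinematic set, and all names are ours):

```python
import numpy as np
import cvxpy as cp

def raibert_target(p_hip, v_ref, yaw_rate_ref, T_s, h=0.48, g=9.81):
    """2D version of Eq. (8); the cross product with the (z-axis) yaw rate
    reduces to a 90-degree rotation of v_ref."""
    cross = yaw_rate_ref * np.array([v_ref[1], -v_ref[0]])  # (v_ref x w_ref)_xy
    return p_hip + 0.5 * T_s * v_ref + np.sqrt(h / g) * cross

def project_on_surface(p_star, S, s):
    """Eq. (9): closest point to p_star satisfying S p <= s."""
    p = cp.Variable(2)
    cp.Problem(cp.Minimize(cp.sum_squares(p - p_star)),
               [S @ p <= s]).solve(solver=cp.OSQP)
    return p.value
```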
### _Collision free trajectory_
A collision-free trajectory \(\mathbf{p}(t):\mathbb{R}\rightarrow\mathbb{R}^{3}\) connects the current foot location to the optimised location. \(\mathbf{p}(t)\) aims at following a reference trajectory while avoiding obstacles. To keep the computational time low, it is parameterised as a Bezier curve of degree \(d\) on the Bernstein basis \((B_{i})_{i\leq d}\):
\[\mathbf{p}(t)=\sum_{i=0}^{d}B_{i}^{d}(\frac{t}{T})\mathbf{P}_{i}, \tag{10}\]
where \(\mathbf{P}=\begin{bmatrix}P_{0}&\dots&P_{d}\end{bmatrix}^{T}\) are the \(d+1\) control points and \(T\) is the total time of the trajectory.
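Evaluating (10) is straightforward; as a minimal sketch (an illustrative helper written for this text):

```python
import numpy as np
from math import comb

def bezier(P, t, T):
    """Evaluate Eq. (10): P has shape (d+1, 3) and t lies in [0, T]."""
    d = len(P) - 1
    u = t / T
    B = np.array([comb(d, i) * u**i * (1.0 - u)**(d - i)
                  for i in range(d + 1)])
    return B @ P
```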
#### VI-B1 Reference trajectory
The trajectory \(\mathbf{p}_{ref}(t):\mathbb{R}\rightarrow\mathbb{R}^{3}\) is composed of a polynomial curve of degree 6 on the z-axis and of degree 5 on the x and y axes, all with the first 3 control points fixed to ensure continuity of the curve in position, velocity and acceleration from the current state. The end velocity and acceleration are set to 0 to avoid slippage at the end position. Additionally, the height at \(\frac{T}{2}\) is fixed on the z-axis, constraining all degrees of freedom.
#### VI-B2 Collision avoidance
The trajectory \(\mathbf{p}(t_{k})\) is a 3D curve of degree \(d=7\) that follows \(\mathbf{p}_{ref}(t)\) while avoiding collisions. To do so, we enforce collision avoidance along the foot trajectory. This is achievable iteratively [54] by solving a sequence of QPs and adding constraints where collisions occur. For computational efficiency, we empirically identify the points likely to be in a collision and add collision constraints to them (Fig. 7 - Yellow dots). These \(n_{c}+1\) points \(\mathbf{p}(t_{k})=\mathbf{A}_{k}\mathbf{P},\forall k\in[0,\dots,n_{c}]\) are linearly defined by Bezier's control points, with \(\mathbf{A}_{k}\in\mathbb{R}^{3\times(d+1)}\). We choose the active constraint to be the half-space traversed by the reference curve (Fig. 7 - pink half-space). We thus write \(\forall k,\mathbf{S}_{m}^{i}\mathbf{p}(t_{k})\geq\mathbf{s}_{m}^{i}\), where \(\mathbf{S}_{m}^{i}\in\mathbb{R}^{3}\) and \(\mathbf{s}_{m}^{i}\in\mathbb{R}\) define what constitutes the current
Fig. 7: Adaptation of the end-effector trajectory to climb a step. The active constraint corresponds to the half-plane crossed by the reference trajectory (red curve). Both curves are discretised to perform the optimisation, and the orange points highlight the ones under constraints.
half-space. All the collected constraints are then stacked into a single matrix and vector \(\mathbf{G}\) and \(\mathbf{g}\), leading to the QP:
\[\begin{aligned}\min_{\mathbf{P}}\quad&\sum_{k=0}^{n_{c}}\|\mathbf{p}(t_{k})-\mathbf{p}_{\text{ref}}(t_{k})\|^{2}&&\text{(11a)}\\ \text{s.t.}\quad&\mathbf{p}(0)=\mathbf{p}_{\text{ref}}(0),\qquad\mathbf{p}(T)=\mathbf{p}_{\text{ref}}(T),&&\text{(11b)}\\ &\dot{\mathbf{p}}(0)=\dot{\mathbf{p}}_{\text{ref}}(0),\qquad\dot{\mathbf{p}}(T)=\mathbf{0}_{\mathbb{R}^{3}},&&\text{(11c)}\\ &\ddot{\mathbf{p}}(0)=\ddot{\mathbf{p}}_{\text{ref}}(0),\qquad\ddot{\mathbf{p}}(T)=\mathbf{0}_{\mathbb{R}^{3}},&&\text{(11d)}\\ &\mathbf{G}\mathbf{P}\leq\mathbf{g}.&&\text{(11e)}\end{aligned}\]
This approach can be expanded to avoid local obstacles along the trajectory by imposing constraints that depend on the environment's heightmap. Currently, our trajectory avoidance is based only on obstacles perceived as walkable surfaces, which has been found to work effectively in many scenarios. Nevertheless, if an obstacle is along the trajectory but not detected as such, it will not be avoided. This highlights the importance of including height-map information when available.
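A condensed sketch of (11) as a QP over the control points (illustrative only: the acceleration constraints (11d) are omitted for brevity, and the toy reference curve, half-space and flagged sample indices are ours):

```python
import numpy as np
import cvxpy as cp
from math import comb

d, T = 7, 0.6
ts = np.linspace(0.0, T, 30)
# Bernstein matrix: row k gives p(t_k) = B[k] @ P
B = np.array([[comb(d, i) * (t / T)**i * (1 - t / T)**(d - i)
               for i in range(d + 1)] for t in ts])

# toy reference: straight line with a parabolic step-over on the z-axis
p0, pT, v0 = np.zeros(3), np.array([0.2, 0.0, 0.0]), np.zeros(3)
u = ts / T
p_ref = np.outer(u, pT - p0) + p0
p_ref[:, 2] = 0.10 * 4 * u * (1 - u)               # apex at T/2

P = cp.Variable((d + 1, 3))                        # control points
cost = cp.sum_squares(B @ P - p_ref)               # track the reference (11a)
cons = [P[0] == p0, P[d] == pT,                    # boundary positions (11b)
        P[1] - P[0] == (T / d) * v0,               # initial velocity (11c)
        P[d] - P[d - 1] == 0]                      # zero final velocity (11c)
normal, offset = np.array([0.0, 0.0, 1.0]), 0.05   # one active half-space
for k in range(10, 20):                            # flagged samples, cf. (11e)
    cons.append(normal @ (B[k] @ P) >= offset)
cp.Problem(cp.Minimize(cost), cons).solve(solver=cp.OSQP)
```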
## VII Motion Generation
Given the current state of the robot (6D position and velocity), the next contact sequence and the end-effector trajectories, our _Whole-Body MPC_ (WB-MPC) and _Riccati gain controller_ elements (Fig. 2 - red and violet) generate the motion of the legged robot. The WB-MPC optimises a trajectory at \(50\,\mathrm{Hz}\) and the Riccati gain controller applies it at \(400\,\mathrm{Hz}\).
### _Optimal control formulation_
The legged robot generates motions through a model predictive controller that relies on the robot's full-body dynamics. The formulation is based on two previous papers [37, 38]. We solve the optimal control (OC) problems using Crocoddyl's advanced solvers [36]. Our pipeline relies on two different OC formulations based on forward and inverse dynamics. Both yield similar performances, demonstrating our method's generalisability. Both formulations can be written as [37, 38]:
\[\begin{aligned}\min_{\{\mathbf{q},\mathbf{v}\},\,\{\boldsymbol{\tau}\}\,\text{or}\,\{\dot{\mathbf{v}},\boldsymbol{\lambda}\}}\quad&\ell_{N}(\mathbf{x}_{N})+\sum_{k=0}^{N-1}\int_{t_{k}}^{t_{k+1}}\ell_{k}(\mathbf{q}_{k},\mathbf{v}_{k},\boldsymbol{\lambda}_{k},\boldsymbol{\tau}_{k})\,dt\\ \text{s.t.}\quad&\mathbf{q}_{k+1}=\mathbf{q}_{k}\oplus\int_{t_{k}}^{t_{k+1}}\mathbf{v}_{k}\,dt,\\ &\mathbf{v}_{k+1}=\mathbf{v}_{k}+\int_{t_{k}}^{t_{k+1}}\dot{\mathbf{v}}_{k}\,dt,\\ &\begin{bmatrix}\dot{\mathbf{v}}_{k}\\ -\boldsymbol{\lambda}_{k}\end{bmatrix}=\begin{bmatrix}\mathbf{M}_{k}&\mathbf{J}_{c}^{\top}\\ \mathbf{J}_{c}&\mathbf{0}\end{bmatrix}^{-1}\begin{bmatrix}\boldsymbol{\tau}_{b}\\ -\mathbf{a}_{c}\end{bmatrix}\quad\text{(forward dyn.)}\\ &\text{or}\quad\mathrm{ID}(\mathbf{q}_{k},\mathbf{v}_{k},\dot{\mathbf{v}}_{k},\boldsymbol{\lambda}_{k})=\mathbf{0},\quad\text{(inverse dyn.)}\end{aligned}\tag{12}\]
where the robot state \(\mathbf{x}=(\mathbf{q},\mathbf{v})\) contains the generalized position and velocity vectors. More precisely, \(\mathbf{q}\in\mathbb{SE}(3)\times\mathbb{R}^{n_{j}}\) with the generalized velocity lying in the tangent space \(\mathbf{v}\in\mathfrak{se}(3)\times\mathbb{R}^{n_{j}}\) where \(n_{j}\) is the number of articulated joints and \(\oplus\) denotes the integration operator in \(\mathbb{SE}(3)\) inspired by [55] and used in [36].
For the forward-dynamics formulation, the input command \(\mathbf{u}=\{\boldsymbol{\tau}\}\) corresponds to the joint torques. \(t_{k}\), \(\mathbf{M}_{k}\) and \(\mathbf{J}_{c}\) denote the discrete time, the generalised mass matrix and the contact Jacobian, respectively. \(\boldsymbol{\tau}_{b}\) includes the joint torque command, Coriolis and gravitational terms, and \(\mathbf{a}_{c}\) is the desired acceleration, which results from the rigid contact constraint (the contact point velocity is null) and includes the Baumgarte stabilisation term [56]. The impulse dynamics described in more detail in [37], which allow velocity changes at impact, have been omitted from the formulation. This contact dynamics formulation comes from the application of the Gauss principle of least constraint [48], as described in detail in [57]. By handling this constraint in the backward pass, the decision variables can be condensed with forward dynamics [37, 57, 36] and the contact forces can be removed. This drastically reduces the number of decision variables.
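The forward contact dynamics above is a symmetric KKT system; a dense `numpy` sketch (for illustration only — the solvers we use exploit sparsity and handle this inside the backward pass):

```python
import numpy as np

def contact_forward_dynamics(M, Jc, tau_b, a_c):
    """Solve the KKT system of Eq. (12): the unknown stacked vector is
    [v_dot; -lambda], with rigid-contact right-hand side [tau_b; -a_c]."""
    nv, nc = M.shape[0], Jc.shape[0]
    K = np.block([[M, Jc.T], [Jc, np.zeros((nc, nc))]])
    sol = np.linalg.solve(K, np.concatenate([tau_b, -a_c]))
    return sol[:nv], -sol[nv:]  # (v_dot, contact forces lambda)
```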
Regarding the inverse-dynamics formulation, where \(\mathbf{u}=\{\dot{\mathbf{v}},\boldsymbol{\lambda}\}\), the contact forces and the acceleration become decision variables of the problem [38, 58]. We can perform efficient factorizations by applying nullspace parametrization as proposed in [38]. Furthermore, an alternative inverse dynamic equation that allows us to remove joint torques from the control vector is possible. Both strategies reduce the size of the problem needed to enable MPC applications.
Regularisation terms included in the cost function \(\ell\) are summarized in Table I. These different formulations are transcribed into quadratic terms, where \(\mathbf{q}_{\mathbf{a}},\dot{\mathbf{q}}_{\mathbf{a}}\in\mathbb{R}^{12}\) are respectively the actuated joint positions and velocities. \(\mathbf{p}_{\text{base}}\in\mathbb{SE}(3)\) describes the base pose in the world frame. \(\mathbf{p}_{g_{k}}\odot\mathbf{p}_{\text{ref},g_{k}}=\mathbf{p}_{g_{k}}^{-1}\mathbf{p}_{\text{ref},g_{k}}\) represents the pose error in \(\mathbb{SE}(3)\) of the feet relative to the reference position. It is penalised by considering it in its tangent space [55], with the log function that maps \(\mathbb{SE}(3)\) to \(\mathfrak{se}(3)\). Feet position and velocity tracking errors are expressed in the robot's inertial frame. The base position is not penalised, whereas its rotation is penalised around the x- and y-axes only, corresponding to the roll and pitch angles. \(\mathbf{C}\) and \(\mathbf{c}\) are the matrix and vector of the linearised friction cone. \(\mathcal{C}_{k}\) represents
the set of feet in contact with the ground and inversely \(\mathcal{G}_{k}\) is the set of feet in the swing phase for the node k. Costs are set to zero if the corresponding feet are in the contact phase.
### _Riccati-gain controller_
OC formulations with reduced-order dynamics require an instantaneous whole-body controller for tracking computed forces and maintaining robot balance [59, 53, 51]. Such instantaneous controllers may compete with the MPC policy and not necessarily generate the motion predicted by it [60]. Including the whole-body dynamics in the OC problem comes at a non-negligible cost in computing time, but brings many benefits, e.g., imposing the joint effort limits. Another benefit is the ability to derive local feedback controllers from optimisation principles. The resulting control policy takes the form:
\[\boldsymbol{\tau}_{d} =\boldsymbol{\tau}_{ff}+\mathbf{K}(\mathbf{x}\ominus\mathbf{x}^ {*}), \tag{13a}\] \[=-\mathbf{Q}_{\mathbf{uu}}{}^{-1}\mathbf{Q}_{\mathbf{u}}-\mathbf{ Q}_{\mathbf{uu}}{}^{-1}\mathbf{Q}_{\mathbf{ux}}(\mathbf{x}\ominus\mathbf{x}^ {*}), \tag{13b}\]
where \(\ominus\) denotes the \(\mathbb{SE}(3)\) error. \(\mathbf{Q}_{\mathbf{uu}}\), \(\mathbf{Q}_{\mathbf{u}}\) and \(\mathbf{Q}_{\mathbf{ux}}\) are respectively the partial derivatives of the value function with respect to the joint-effort command and the state [37]. While the MPC runs every \(0.02\,\mathrm{s}\) (\(50\,\mathrm{Hz}\)), the discretisation is set to \(0.01\,\mathrm{s}\). The low-level Riccati-gain controller instead runs at \(400\,\mathrm{Hz}\), so an interpolation step is necessary. For this, the feedback term is computed using an interpolation of the reference state \(\mathbf{x}^{*}\), obtained by integrating the contact dynamics of (12). The optimal effort command is then provided at \(400\,\mathrm{Hz}\), in addition to a joint impedance controller based on the joint position and velocity commands obtained from \(\mathbf{x}^{*}\).
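At run time, the feedback law (13) amounts to very little code; as a sketch (on the robot the state difference \(\ominus\) is evaluated on \(\mathbb{SE}(3)\), replaced here by a plain subtraction, and linear interpolation stands in for the dynamics integration):

```python
import numpy as np

def riccati_command(x, x_ref_0, x_ref_1, tau_ff, K, alpha):
    """Eq. (13) evaluated at 400 Hz between two 50 Hz MPC nodes; alpha in
    [0, 1] interpolates the reference state."""
    x_ref = (1.0 - alpha) * x_ref_0 + alpha * x_ref_1
    return tau_ff + K @ (x - x_ref)
```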
## VIII Results
### _Implementation_
Communication between each module is achieved via a low-latency ROS communication layer (TCP, no-delay) [61]. Three onboard computers (Intel(R) Core(TM) i7-5600U CPU @ 2.6GHz) share the main tasks of locomotion, perception and estimation. The point-cloud framework, heightmap generator, state estimation and Riccati-gain controller run onboard. We used an additional computer (Intel(R) Core(TM) i5-8365U CPU @ 1.60GHz) to extract the segmented planes from the heightmap, post-process the extracted surfaces and run the mixed-integer program in a different thread. A final computer (Intel(R) Core(TM) i9-9900KF CPU @ 3.60GHz) is used to run the MPC, which sends the plan to the Riccati-gain lower-level controller at \(50\,\mathrm{Hz}\).
### _Experiments_
The pipeline has been tested in various scenarios, first with onboard perception and then with a model of the environment. This is to emphasize the motion generation part and break away from the perception constraints. During all experiments, the user commands the robot's velocity with a joystick. Some of the experiments (_1.3_, _1.4_ and _2.1_) were conducted using the inverse dynamic formulation and presented in the related paper [38] as evidence of the formulation's effectiveness. We present these experiments again in this paper to specifically highlight the planning aspect.
#### VIII-B1 Using onboard perception
We evaluated our complete pipeline on the following scenarios.
* _Experiment 1.1_ (Fig. 1): The first experiment is a 5-minute experiment, representative of the type of environment our perceptive locomotion pipeline enables. The robot starts at the bottom of an industrial staircase with 7 steps, each \(17\,\mathrm{cm}\) high and \(29\,\mathrm{cm}\) deep. The average terrain slope is 30 degrees. Once at the top, the robot makes a U-turn on the industrial platform and performs its descent.
* _Experiment 1.2_ (Fig. 8-a): The second scenario corresponds to two platforms of 1 by 1 meters connected by a piece of wood placed diagonally to the right of the assembly. Once the robot is on the first platform, we manually remove the ground from the contact surface list to prevent the robot from moving forward. A 20x30 cm block is then added to the left of the assembly; after a few seconds, the block is detected and the robot moves forward. This demonstrates the pipeline's reactive capabilities.
* _Experiment 1.3_ (Fig. 8-b): It is the same configuration as the previous experiment with 2 pieces of wood connecting the two platforms. This experiment highlights the accurate execution of footstep plans.
* _Experiment 1.4_ (Fig. 8-c): Using the robot's onboard perception, the final experiment aims to mix various terrain types and heights. Since the metallic plate was not fixed, it slid at the end of the experiment, showing the stability of the controller.
#### VIII-B2 Using a model of the environment
To further evaluate the locomotion capabilities of our pipeline, we conducted additional experiments in which the contact surfaces are known a priori, and not given by Plane-Seg. The robot's pose with respect to the world frame is still estimated using the onboard sensors (LIDAR and proprioceptive sensors), and the rest of the framework remains the same.
* _Experiment 2.1_ (Fig. 8-d): The first experiment was to climb stairs with 2 missing steps. We removed steps 2 and 6 of the stairs. It corresponds to climbing a slope of 30 degrees with two gaps of 34 cm. This pushes the robot to its kinematic and actuation limits and therefore highlights the benefit of taking the entire robot model into account to plan the motion and adapt the posture. This justifies our approach, as discussed in Sec. VIII-E.
* _Experiment 2.2_ (Fig. 8-e): The second experiment is similar and corresponds to descending stairs with 1 missing step (step 6). It corresponds to crossing a slope of 30 degrees with a gap of \(34\,\mathrm{cm}\).
* _Experiment 2.3_ (Fig. 8-f) : The last experiment was conducted on a platform of size \(1\,\mathrm{m}\times 1\,\mathrm{m}\) and \(38\,\mathrm{cm}\) in height. It is higher than the robot's height in its nominal position. It is the only experiment where we had to increase the safety margin around the obstacle (outer
Fig. 8: Screenshots of different experiments highlighting our architecture’s capabilities. The first three rows of results were obtained using the onboard camera and a complete perception pipeline for active obstacle detection. Rows (d), (e), and (f) were performed without perception setup but instead used a pre-computed model of the environment to overcome the perception system limitations and test the controller’s limits. The last two rows were obtained through simulation to further test our pipeline in challenging scenarios. Video accessible at [https://youtu.be/bDWvxTh0eix1](https://youtu.be/bDWvxTh0eix1).
margin in Sec. IV) to \(12\,\mathrm{cm}\), since the robot's shoulders are at the same height as the obstacle and would otherwise collide with it.
#### VIII-B3 Simulation experiments
A final set of experiments was run in the PyBullet simulator [62] to show more dynamic gaits such as trotting. This was done to illustrate the diversity of terrains the framework can target. The setup and communication protocol are identical to those used on the hardware, described previously in Sec. VIII-A.
* _Experiment 3.1_ (Fig. 8-g): The first experiment corresponds to a set of stepping stones of different sizes and heights crossed with a trotting gait.
* _Experiment 3.2_ (Fig. 8-h): The second experiment involves the ICRA 2023 Quadruped Challenge, which comprises a parkour task featuring a range of obstacles, including pallets, inclined ramps, rounded rubber ramps, and a stack of plastic crates. This challenge serves as a valuable benchmark for evaluating our pipeline's capabilities. For most parts, our pipeline navigates the parkour using a dynamic trot. For some obstacles, we had to marginally modify some parameters. The wooden pallets, with gaps between the wooden slats, imply many potential surfaces for each contact, up to 12, even after the pre-selection step. Hence, we had to reduce the surface planner's horizon to optimise the next 4 contact sequences (and not 6 as used previously). The step height needed to be increased up to \(25\,\mathrm{cm}\) on flat terrain with wood panels of \(20\,\mathrm{cm}\). It would be ideal to include the height map directly in the trajectory optimisation, rather than relying solely on surfaces for obstacle avoidance, as discussed previously in Sec. VI. Furthermore, when dealing with a stack of plastic crates, adapting the gait to a walking gait (in which only one foot moves at a time) is more stable and enables safe passage across the terrain, compared to a more dynamic trotting gait.
### _Evaluation of the perception pipeline_
In this section, we evaluate the perception pipeline, from the heightmap to the resulting surfaces filtered and reshaped for security margins. Fig. 9 shows the scene during experiment _1.2_, before the manual removal of the ground. The necessity of the filtering routine described in Sec. IV is highlighted in this example, since the convex planes extracted from the height map overlap. Additionally, we note that the security margin is applied inside the obstacles to avoid walking on the edges. This is due in part to estimation errors, which are evaluated at around 3-4 cm. More height maps and resulting potential surfaces are shown in Fig. 10, taken from experiments _1.1_ and _1.2_. It is interesting to note the evolution of the terrain estimation around the robot during these experiments. When climbing or descending stairs, the robot's camera only identifies the next 1 or 2 steps ahead. During the second experiment _1.2_, where an obstacle was added in front of the robot, it took several iterations of the probabilistic algorithm to update the heightmap with the detected object, and consequently 2 to 3 seconds to obtain a feasible surface to walk on.
### _Computation performance_
Table II presents the computation time statistics for each module of the pipeline during experiment _1.1_, i.e., climbing up and down stairs with active perception.
#### VIII-D1 Surface Processing
It takes \(90\,\mathrm{ms}\) on average to post-process the incoming surfaces from perception with Algorithm 2 presented in Sec. IV. Compared to the planes' update frequency, which is between \(0.5\,\mathrm{Hz}\) and \(1\,\mathrm{Hz}\) with
Fig. 9: Perception output during experiment _1.2_ before ground removal. Black surfaces: initial convex surfaces; coloured surfaces: filtered surfaces.
non-optimised code, this represents roughly a 5-10% overhead.
#### VIII-D2 Surface Selection
Surface selection, as described in Sec. V, encompasses the pre-selection step to reduce the number of potential surfaces considered by the mixed-integer program. It takes around \(120\,\mathrm{ms}\) to find the next surfaces of contact in this experiment. Pre-selection reduces the potential surfaces from 23 to 3 for each foot. During this experiment, we optimise over 8 contact phases, which correspond to 8 foot positions optimised with a walking gait (1 contact is created/broken for each contact phase). The mixed-integer optimisation starts at the beginning of each upcoming foot trajectory, and the next surface information needs to be received before the beginning of the upcoming cycle. In this experiment, the foot trajectory lasts \(600\,\mathrm{ms}\), which is enough for the maximum time taken. However, for a trotting gait, the foot trajectory is set to \(300\,\mathrm{ms}\). We must then reduce the number of foot locations to 6, thus allowing 3 contact phases (2 foot locations optimised for each contact phase), to ensure a safe margin regarding the computing time. We note that the computing time of the mixed-integer optimisation depends heavily on the number of potential surfaces for each contact and the number of contact locations optimised. For a detailed analysis of the MIP computation times, we refer the reader to [17].
#### VIII-D3 Foot trajectory
The foot trajectory module encompasses the optimisation of the footstep within the assigned surface and the end-effector curve optimisation, as described in Sec. VI. It represents only 1% of every MPC step.
#### VIII-D4 Motion generation (MPC)
On average, the MPC step takes around \(13\,\mathrm{ms}\), including the update of the optimal control problem and solving the latter with one iteration, which takes around \(9\,\mathrm{ms}\). This is below the maximum \(20\,\mathrm{ms}\) required for an MPC running at \(50\,\mathrm{Hz}\).
### _Analysis of the whole-body MPC_
In scenarios involving climbing steps, the torque limit is reached on at least one actuator in each of these experiments. It is in this case that the whole-body MPC becomes crucial, as it adjusts the body posture to reduce joint torques. Experiment _2.1_ is a representative case. First, we observe that the COM leans forward and close to the feet, almost in contact with the stairs, at the moment of performing the last motion to cross the gap (Fig. 11). The torque limit is reached on both joints of the hind right leg during this motion, as shown in Fig. 12. We analyse the following quantities: torques, angular positions and angular velocities at the HFE joint (hip flexion/extension) and the KFE joint (knee flexion/extension). The HAA joints (hip abduction/adduction) are less prone to reach torque limits. While crossing the gap, we can observe two torque peaks that correspond exactly to the robot configurations shown in Fig. 12. In this instance, the overshoot of the torque command above the joint limits (a few \(\mathrm{N\,m}\)) can be attributed to two main reasons. First, the constraints on torque limits can be violated (Sec. VII), as the Riccati controller guarantees joint limits only within a neighbourhood.
### _Evaluation of the collision-free foot trajectory_
The end-effector trajectory is re-computed at \(50\,\mathrm{Hz}\) in order to get robust tracking. Some minor modifications were necessary when transitioning from simulation to hardware. An offset of \(-1\,\mathrm{cm}\) has been added on the z-axis to accommodate perception errors, which reach 2-3 \(\mathrm{cm}\) at the end of our longest experiment (_1.1_), and to ensure that contact creation occurs. A finer control of the feet's trajectory according to contact detection was not necessary in our case, as the whole-body MPC is robust to state estimation errors (Sec. VIII-H). Additionally, the swing-foot trajectory and footstep are no longer updated once 70% of the flight phase has passed, in order to avoid a sliding contact after a sudden change in the target position. Finally, the end-effector velocity feedback is not taken into account in the optimisation due to large uncertainties about the end-effector velocities. The foot velocity is therefore assumed to be tracked properly, and the initial velocity while re-computing the curve is taken from the previous control cycle.
Fig. 13 shows a swing-foot trajectory while the robot crosses an obstacle of \(20\,\mathrm{cm}\) and while it climbs the stairs during experiment _1.1_. We observe the state estimation errors on the foot position when landing on the ground, resulting in the foot slightly bouncing.
### _Evaluation of the velocity tracking and design choices_
Not constraining the robot's pose and velocity to follow a reference trajectory has a significant impact on the COM trajectory, especially in a walking gait pattern when only one foot is lifted at a time (Fig. 14). Indeed, Fig. 15-(a) shows that the base trajectory during a forward walk oscillates in a range of \(10\,\mathrm{cm}\) around the middle of the feet positions. However,
Fig. 10: Height map and resulting surfaces during experiment _1.1_ (first row) and experiment _1.2_ (second row). ([https://youtu.be/bDVvxth0eixI](https://youtu.be/bDVvxth0eixI)).
constraining the base position and velocity would result in a fixed base orientation. This can be explained by the motion stability found in the optimisation process. The centre of pressure is well located in the middle of the support polygon, and the COM position is dragged inside this region, resulting in this wave motion. Since it is a periodic motion, the state position has been filtered with a moving average over the walk period (Fig. 15), rejecting all frequencies in synchronisation
Fig. 11: Posture adjustment (experiment _2.1_). The hind right leg reaches the torque limits on both HFE (hip flexion/extension) and KFE (knee flexion/extension) when crossing the gap. This corresponds to the peak torque at \(52\,\mathrm{s}\). The whole-body MPC adjusts the body posture to compensate. The body leans forward and lowers as much as possible, reaching the kinematic limit of the hind leg to reduce the torque on it. ([https://youtu.be/BWvXTh@eixI](https://youtu.be/BWvXTh@eixI)).
Fig. 14: Screenshots of the motion resulting from our OCP regularisation setup for a walking gait pattern. The blue area represents the support polygon and the yellow circle the Centre of Pressure (COP).
Fig. 12: Torques, angular positions and velocities of the joints HAA (hip abduction/adduction), HFE (hip flexion/extension) and KFE (knee flexion/extension) of the hind right leg during experiment _2.1_. Blue quantities are computed by the state feedback controller and orange ones are measured on the robot. The maximum torque limit is approximately \(40\,\mathrm{N\,m}\) and is represented by red dashed curves. Each actuator can reach \(12\,\mathrm{rad\,s^{-1}}\). Two torque peaks reaching the boundaries can be observed around \(40\,\mathrm{s}\) and \(52\,\mathrm{s}\), which correspond to the robot configurations shown above.
Fig. 13: Side view of the front-left foot's trajectory while crossing a \(20\,\mathrm{cm}\) obstacle (left) and the staircase (right) using a walking gait pattern at a reference velocity of \(0.1\,\mathrm{m\,s^{-1}}\). The grey curves on the left graph are the reference Bezier curves computed during each control cycle. Their transparency indicates the prediction horizon: more transparent curves represent predictions further in the future. Grey areas on the right correspond to the cumulative positions of surfaces detected by the camera.
with the gait. This behaviour does not appear with a trotting gait, since two opposite legs are lifted at the same time and the resulting oscillation is much smaller.
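The gait-synchronised filtering used for Fig. 15 is a plain moving average over one gait period; as a sketch (an illustrative helper written for this text):

```python
import numpy as np

def gait_average(signal, dt, gait_period=2.4):
    """Moving average over one gait period (2.4 s for the walk of Fig. 15);
    rejects oscillations synchronised with the gait and its harmonics."""
    w = max(1, int(round(gait_period / dt)))
    return np.convolve(signal, np.ones(w) / w, mode="same")
```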
### _Qualitative analysis: robustness of the pipeline_
A crucial point is to understand the repeatability of our experiments and how the locomotion controller adapts to unforeseen events. Since the latter is challenging to quantify, we discuss it qualitatively. In the absence of the perceptive part (experiments _2.1_, _2.2_ and _2.3_), the locomotion pipeline is stable and the experiments were carried out on the first attempt. Climbing the stairs with onboard perception has an approximate success rate of \(80\%\) over 10 runs. The failures come mostly from joystick manipulation errors while trying to correct orientation drift, or from state estimation errors with one of the feet slipping. An interesting point attributed to the whole-body MPC is the ability to recover from unplanned situations, as in experiment _2.2_ when ANYmal misses a step during the descent. Similarly, at the end of experiment _1.4_, a metal plate slips when walking on it. In both cases, the robot recovered properly (Fig. 16).
## IX Discussion
In this paper, we propose a complete pipeline for perceptive quadrupedal locomotion, from onboard perception to locomotion generation and control. Our experimental results highlighted capabilities such as crossing challenging scenarios like climbing and descending industrial stairs, or an obstacle parkour with moving obstacles. We have thus experimentally validated our architecture, which is based on the strong design assumption that a high-level planner, such as a mixed-integer optimisation, only selects the stepping contact surfaces. The utility of mixed-integer optimisation in this decomposition was partially validated in our previous work [25]. Here, we used a different robot and integrated perception into our approach, along with more complex experiments, which reinforced the interest of our decomposition.
#### IX-1 Perception pipeline
We have demonstrated the efficacy of our perceptive pipeline based on the probabilistic terrain mapping method [30] and the Plane-Seg algorithm [39], which, in particular with our filtering work, allowed us to obtain satisfactory behaviour. Additionally, we demonstrated that our architecture can successfully navigate complex parkour terrains with moving objects or industrial staircases. However, the perception module still remains a weak point in our system. While our architecture is capable of navigating challenging scenarios when perception is removed, improving perception capabilities could enhance the overall robustness and reliability of the system. This would be particularly beneficial in scenarios such as the staircase, where vision detection is restricted to only a few steps ahead due to our single depth camera mounted with a 20° downward angle. Additionally, the algorithm used to extract convex planes is not incremental and does not take into account previously computed planes. It generates a distinct set of surfaces each time an updated heightmap is generated. The stability of the planes would benefit from tracking previously detected planes over time.
#### IX-2 Collision avoidance of the body
The trade-off proposed in our approach is to strongly regularise the OCP around a reference end-effector trajectory while avoiding obstacles, which proved to work in a wide range of environments. This holds even though collisions of the body with the environment are not specifically
Fig. 16: Screenshots of the robot recovering from an unplanned event. Top: the metal plate slips while the robot walks on it. Bottom: the two front feet slip and miss the intended step.
Fig. 15: Base trajectory during a forward walking gait of period \(2.4\,\mathrm{s}\). The joystick command is given in 2 separate steps of \(0.05\,\mathrm{m\,s^{-1}}\) and \(0.1\,\mathrm{m\,s^{-1}}\) along the x-axis. To reject disturbances, the filter is a moving average over the walk period. (a) base position on the ground floor plane. (b) and (c) base velocity along the x and y axes respectively. Small disturbances at first in the filtered quantities are due to the filter initialisation phase.
considered in our approach. This could be considered within the MPC, but would require a dedicated study since it is a challenging nonlinear optimisation problem. The safety margin around obstacles was sufficient in almost all scenarios to avoid collisions, except for the \(40\,\mathrm{cm}\) step in _2.3_, where we had to increase these margins to \(10\,\mathrm{cm}\). As mentioned before, our approach ensured that there was no collision in the end-effector trajectory. It could have been done inside the MPC with additional terms in the objective; however, this would increase complexity and possibly affect convergence rates. Knee collision is implicitly considered by the margin around obstacles. However, this could be addressed explicitly by planning the foot trajectory including the whole leg, as is the case in [26], although re-planning at \(50\,\mathrm{Hz}\) might become unfeasible.
#### IX-3 Comparison with the state of the art
Two recent works [34, 32] present interesting similarities to our architecture. We can first observe the decomposed approach used in both cases with the foot location optimised outside the MPC. As a future work, it might be relevant to "mix-and-match" the different components of the architecture to provide a fair comparison of a single system. In particular, we are interested in comparing contact decision algorithms. We are also interested in testing the perception components proposed in [31].
#### IX-4 Improvement points
As often occurs in model-based approaches to quadruped robots, the main limitation is the gait being fixed beforehand. It would be interesting to replace the high-level planner with an acyclic planner to optimise the timing of contacts and the type of gait. This could lead to computing-time issues, especially due to the increase in problem complexity. To take this further, it would be interesting to integrate this into a global planner to achieve autonomous behaviour. Although we consider a stable walking gait necessary for the most challenging scenarios, we also acknowledge the potential benefits of implementing a trotting gait on the hardware. However, transitioning to a trotting gait, which works well in simulation, proved more challenging to implement on the robot. We believe that this limitation could be overcome with a faster controller and more integration efforts. Finally, our foot placement is based on Raibert's heuristic, which is optimal on flat ground when considering a linear inverted pendulum model for the robot. We extended it to 3D, which produces satisfying foot placements, as demonstrated in our experiments. Nevertheless, this remains a heuristic and does not ensure that the foot positions are feasible in terms of torque and power limits.
## X Conclusion
In this paper, we present a complete methodology for crossing challenging terrains, from terrain perception to locomotion generation and control. We have demonstrated our pipeline on various terrains, such as an industrial staircase or a parkour-type environment with a moving object in the scene. These experiments have allowed us to further validate our approach, based on a sub-division of the global problem. First, a high-level planner formulated as a mixed-integer optimisation selects only the next surfaces of contact, with a horizon of a few contacts (6 to 8 in our experiments). To achieve this, one must adapt the perception to extract relevant potential surfaces as convex planes. For motion generation, we rely on an efficient whole-body MPC and a linear state feedback controller. Collision-free trajectories and footstep adaptation within the high-level plan are optimised separately. The OCP is then strongly regularised around this reference end-effector trajectory, while the robot's posture is adapted by the MPC. This represents a wise design choice to leverage whole-body optimisation capabilities. To move further, we would like to use a more complex high-level planner to optimise contact timings in order to cross even more challenging terrains, for example with dynamic motions that include jumping over gaps or obstacles.
|
2302.12701 | Function spaces for decoupling | We introduce new function spaces
$\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})$ that yield a natural
reformulation of the $\ell^{q}$ decoupling inequalities for the sphere and the
light cone. These spaces are invariant under the Euclidean half-wave
propagators, but not under all Fourier integral operators unless $p=q$, in
which case they coincide with the Hardy spaces for Fourier integral operators.
We use these spaces to obtain improvements of the classical fractional
integration theorem, and local smoothing estimates. | Andrew Hassell, Pierre Portal, Jan Rozendaal, Po-Lam Yung | 2023-02-24T16:00:03Z | http://arxiv.org/abs/2302.12701v1 | # Function spaces for decoupling
###### Abstract.
We introduce new function spaces \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) that yield a natural reformulation of the \(\ell^{q}\) decoupling inequalities for the sphere and the light cone. These spaces are invariant under the Euclidean half-wave propagators, but not under all Fourier integral operators unless \(p=q\), in which case they coincide with the Hardy spaces for Fourier integral operators. We use these spaces to obtain improvements of the classical fractional integration theorem, and local smoothing estimates.
Key words and phrases: Decoupling inequality, Fourier integral operator, wave equation, local smoothing.

2020 Mathematics Subject Classification: Primary 42B35; Secondary 42B37, 35L05, 35S30.

The research leading to these results has received funding from the Norwegian Financial Mechanism 2014-2021, grant 2020/37/K/ST1/02765. This research was funded in part by the National Science Center, Poland, grant 2021/43/D/ST1/00667. Yung was partially supported by Future Fellowship FT200100399 from the Australian Research Council.
Introduction
### Background
\(\mathcal{H}^{p}_{FIO}(\mathbb{R}^{n})\) replaced by \(\mathcal{H}^{p,q;0}_{\rm dec}(\mathbb{R}^{n})\) for more general values of \(p\) and \(q\), and in particular for all \(1<p<\infty\) if \(q=2\) (see Theorem 6.2). In fact, the latter case has additional features beyond those of \(\mathcal{H}^{p}_{FIO}(\mathbb{R}^{n})\), which allow us to improve the classical fractional integration theorem
\[W^{n(\frac{1}{p_{1}}-\frac{1}{p_{2}}),p_{1}}(\mathbb{R}^{n})\subseteq L^{p_{2} }(\mathbb{R}^{n}) \tag{1.3}\]
for \(1<p_{1}\leq 2\leq p_{2}<\infty\) (see Theorem 6.3 and Remark 6.4). Outside of this range, we obtain improved mapping properties for FIOs of negative order, cf. Corollary 6.5 and Remark 6.6. We also note that the spaces \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\) behave in the natural way under complex interpolation and duality, as is shown in Section 4.3.
These are some of the basic properties of the spaces \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\), and how they relate to those of \(\mathcal{H}^{p}_{FIO}(\mathbb{R}^{n})\). On the other hand, it is not immediately clear how these spaces relate to the theory of Fourier decoupling. For example, decoupling inequalities are typically formulated using a discrete decomposition of functions that have Fourier support in a fixed compact set, whereas (1.2) is only of interest at high frequencies. However, after rescaling and observing that the continuous decomposition in (1.2) is equivalent to a discrete one on dyadic frequency annuli, the \(\ell^{2}\) decoupling inequality for the sphere can be reinterpreted as an improvement over the Sobolev embeddings for \(\mathcal{H}^{p,2;s}_{\rm dec}(\mathbb{R}^{n})\), for functions with highly localized frequency support. More precisely, for all \(1<p<\infty\) and \(f\in\mathcal{H}^{p,2;0}_{\rm dec}(\mathbb{R}^{n})\) one has
\[\|f\|_{W^{-s(p),p}(\mathbb{R}^{n})}\lesssim\|f\|_{\mathcal{H}^{p,2;0}_{\rm dec }(\mathbb{R}^{n})}\lesssim\|f\|_{W^{s(p),p}(\mathbb{R}^{n})},\]
and these Sobolev exponents cannot be improved for general \(f\). On the other hand, the \(\ell^{2}\) decoupling inequality for the sphere is equivalent to the inequality
\[\|f\|_{W^{-\varepsilon,p}(\mathbb{R}^{n})}\lesssim_{\varepsilon}\|f\|_{ \mathcal{H}^{p,2;0}_{\rm dec}(\mathbb{R}^{n})}\]
for all \(2<p\leq 2(n+1)/(n-1)\) and \(\varepsilon>0\), if \(\operatorname{supp}(\hat{f})\subseteq\{\xi\in\mathbb{R}^{n}\mid R-1\leq|\xi|\leq R+1\}\) for some \(R\geq 2\) (see Corollary 7.4). In Corollary 7.8 we reinterpret the \(\ell^{q}\) decoupling inequality for the cone in a similar manner, as an improved embedding for functions in \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n+1})\) with frequency support near the light cone.
This reformulation of the decoupling inequality for the light cone seems to indicate that \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n+1})\) is a natural space for the solutions to the Euclidean wave equation. Such a connection was not made in [20, 22, 23], where the relevant function spaces were only used for the initial data. On the other hand, just as in those cases, we obtain local smoothing estimates for the Euclidean wave equation using \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\) as a space of initial data. Moreover, when combined with the fractional integration theorem for \(\mathcal{H}^{p,2;s}_{\rm dec}(\mathbb{R}^{n})\), one simultaneously obtains improved local smoothing estimates, as well as improvements of the classical Strichartz estimates for suitable initial data (see Theorem 8.1 and Remark 8.3). Just as in [20, 23], we proceed to apply these local smoothing estimates to nonlinear wave equations with initial data outside of \(L^{2}\)-based Sobolev spaces, in Section 8.2.
### Techniques
Apart from the specific results and the connection to decoupling theory, a significant difference between the present article and earlier contributions in this direction concerns the techniques that we use.
Namely, in [13, 28] the Hardy spaces for FIOs were defined using conical square functions over the cosphere bundle \(S^{*}\mathbb{R}^{n}=\mathbb{R}^{n}\times S^{n-1}\), which allows one to incorporate the theory of tent spaces to prove various fundamental properties of the spaces. Such a conical square function characterization over the cosphere bundle
is implicitly contained in (1.2) when \(p=q\), due to Fubini's theorem, but when \(p\neq q\) this argument breaks down. More generally, the fact that FIOs are typically not bounded on \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) indicates that one cannot expect to apply the same techniques to \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) when \(p\neq q\).
On the other hand, [23] introduced Besov spaces adapted to the half-wave group using a similar norm as in (1.2), for general \(p\) and \(q\), albeit with Besov spaces replacing Sobolev spaces over \(L^{p}(\mathbb{R}^{n})\). The latter does not make a difference on dyadic frequency annuli, so for our applications to decoupling theory one could also use the spaces in [23]. On the other hand, working with Besov spaces does not allow one to recover the optimal fixed-time \(L^{p}\) regularity for wave equations, and it does not yield improvements of the fractional integration theorem in (1.3). Moreover, on a technical level, for Besov spaces it typically suffices to obtain estimates on dyadic frequency annuli, as opposed to having to deal with all frequency scales simultaneously through a square function, as is necessary in our setting. In fact, [23] does not provide a full function space theory, focusing instead on applications to nonlinear wave equations, but (simpler versions of) the techniques in this article can also be used to derive properties of the adapted Besov spaces from [23].
So, instead of relying on existing techniques, we develop new tools and make connections to other areas to prove our main results. For example, we incorporate the theory of parabolic Hardy spaces from [6, 7], by observing that it is convenient to replace \(W^{s,p}(\mathbb{R}^{n})\) in (1.2) by \((1+\Delta)^{-s/2}H^{p}_{\omega}(\mathbb{R}^{n})\), where \(H^{p}_{\omega}(\mathbb{R}^{n})\) is a parabolic Hardy space associated to a family of dilations which is anisotropic in the direction of \(\omega\). For \(1<p<\infty\) one has \(L^{p}(\mathbb{R}^{n})=H^{p}_{\omega}(\mathbb{R}^{n})\), but in Proposition 2.6 we show that also
\[\|\varphi_{\omega}(D)f\|_{\mathcal{H}^{1}(\mathbb{R}^{n})}\eqsim\|\varphi_{ \omega}(D)f\|_{H^{1}_{\omega}(\mathbb{R}^{n})},\]
due to the fact that \(\varphi_{\omega}(D)\) localizes to a paraboloid in the direction of \(\omega\). One can then use anisotropic Calderón-Zygmund theory to prove embeddings and invariance properties of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\).
As in [13], we rely on wave packet transforms that lift functions on \(\mathbb{R}^{n}\) to phase space \(T^{*}\mathbb{R}^{n}\) minus the zero section, parametrized in spherical coordinates as \(\mathbb{R}^{n}\times S^{n-1}\times(0,\infty)\). This allows us to embed \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) into a larger, but simpler and more established, space of functions on phase space. We can then derive various properties of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) from those of the encompassing space. However, unlike in [13], we do not use a tent space norm on \(T^{*}\mathbb{R}^{n}\), working instead with an \(L^{q}(S^{n-1};L^{p}(\mathbb{R}^{n};L^{2}(0,\infty)))\) norm which arises naturally from (1.2), via the Littlewood-Paley decomposition of \(L^{p}(\mathbb{R}^{n})\). This in turn means that we cannot rely on tools such as the atomic decomposition of tent spaces. Instead, to prove the fundamental Theorem 3.4, we use both the boundedness of the vector-valued Hardy-Littlewood maximal function over \(S^{n-1}\), as well as the boundedness of anisotropic maximal functions in each separate direction \(\omega\in S^{n-1}\). Theorem 3.4 then allows us to deduce properties of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) from those of \(L^{q}(S^{n-1};L^{p}(\mathbb{R}^{n};L^{2}(0,\infty)))\).
### Organization
This article is organized as follows. In Section 2 we collect background on the anisotropic dilations on \(\mathbb{R}^{n}\) and the corresponding norms, on specific parabolic Hardy spaces, and on Fourier integral operators. In Section 3 we then introduce the parabolic frequency localizations and the associated wave packet transforms, and we prove various properties of these transforms. In Section 4 we introduce the spaces \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) and we derive many of their basic properties. The fact that they are invariant under the Euclidean wave propagators, but not
under general FIOs, is shown in Section 5. Section 6 then contains the Sobolev embeddings and fractional integration theorems for \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\), while Section 7 connects these spaces to decoupling inequalities. Finally, in Section 8 we obtain local smoothing estimates using \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\), as well as well-posedness results for nonlinear wave equations.
### Notation and terminology
The natural numbers are \(\mathbb{N}=\{1,2,\ldots\}\), and \(\mathbb{Z}_{+}:=\mathbb{N}\cup\{0\}\). Throughout this article we fix \(n\in\mathbb{N}\) with \(n\geq 2\).
For \(\xi\in\mathbb{R}^{n}\) we write \(\langle\xi\rangle=(1+|\xi|^{2})^{1/2}\), and \(\hat{\xi}=\xi/|\xi|\) if \(\xi\neq 0\). We use multi-index notation, where \(\partial_{\xi}=\nabla_{\xi}=(\partial_{\xi_{1}},\ldots,\partial_{\xi_{n}})\) and \(\partial^{\alpha}_{\xi}=\partial^{\alpha_{1}}_{\xi_{1}}\ldots\partial^{\alpha _{n}}_{\xi_{n}}\) for \(\xi=(\xi_{1},\ldots,\xi_{n})\in\mathbb{R}^{n}\) and \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{Z}_{+}^{n}\). Moreover, \(\partial^{2}_{x\eta}\Phi\) is the mixed Hessian of a function \(\Phi\) of the variables \(x\) and \(\eta\).
The Fourier transform of a tempered distribution \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) is denoted by \(\mathcal{F}f\) or \(\widehat{f}\), and its inverse Fourier transform by \(\mathcal{F}^{-1}f\). If \(f\in L^{1}(\mathbb{R}^{n})\) and \(\xi\in\mathbb{R}^{n}\), then \(\mathcal{F}f(\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}f(x)\mathrm{d}x\). The standard distributional pairing between \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) and \(g\in\mathcal{S}(\mathbb{R}^{n})\) is denoted by \(\langle f,g\rangle_{\mathbb{R}^{n}}\), and we write \(\varphi(D)\) for the Fourier multiplier with symbol \(\varphi\).
The volume of a measurable subset \(B\subseteq\Omega\) of a measure space \(\Omega\) will be denoted by \(|B|\), and its indicator function by \(\mathbf{1}_{B}\). The space of bounded linear operators between Banach spaces \(X\) and \(Y\) is \(\mathcal{L}(X,Y)\), and \(\mathcal{L}(X):=\mathcal{L}(X,X)\).
We write \(f(s)\lesssim g(s)\) to indicate that \(f(s)\leq Cg(s)\) for all \(s\) and a constant \(C>0\) independent of \(s\), and similarly for \(f(s)\gtrsim g(s)\) and \(g(s)\eqsim f(s)\).
## 2. Preliminaries
In this section we first introduce a family of norms, associated with groups of dilations. We then define parabolic Hardy spaces associated with these dilations, and finally we collect some background on a specific class of Fourier integral operators.
### A family of norms
In this subsection, we define a collection of norms on \(\mathbb{R}^{n}\), and the associated metrics. We then derive some of their basic properties.
For \(\omega\in S^{n-1}\), \(x\in\mathbb{R}^{n}\) and \(\sigma>0\), set
\[A_{\omega,\sigma}(x):=\sigma^{2}(\omega\cdot x)\omega+\sigma\Pi_{\omega}^{ \perp}x, \tag{2.1}\]
where \(\Pi_{\omega}^{\perp}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the orthogonal projection onto the complement of the span of \(\omega\). Note that, with \(P_{\omega}(x):=x+(\omega\cdot x)\omega\), one has
\[A_{\omega,\sigma}=\sigma^{P_{\omega}}=\exp(\log(\sigma)P_{\omega}).\]
This implies that \((A_{\omega,\sigma})_{\sigma>0}\) is a group of transformations as in [6], and we will rely on the theory developed there in what follows.
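Explicitly, since \(\omega\cdot A_{\omega,\tau}(x)=\tau^{2}(\omega\cdot x)\) and \(\Pi_{\omega}^{\perp}A_{\omega,\tau}(x)=\tau\,\Pi_{\omega}^{\perp}x\), the group law can be verified directly:

\[A_{\omega,\sigma}(A_{\omega,\tau}(x))=\sigma^{2}\tau^{2}(\omega\cdot x)\omega+\sigma\tau\,\Pi_{\omega}^{\perp}x=A_{\omega,\sigma\tau}(x)\quad(\sigma,\tau>0,\ x\in\mathbb{R}^{n}).\]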
Let \(|x|_{\omega}\) be the unique \(\sigma_{x}>0\) such that \(|A_{\omega,1/\sigma_{x}}(x)|=1\). Then \(|\cdot|_{\omega}\) is a norm on \(\mathbb{R}^{n}\) (see [6, Section 1.4]), and \(|A_{\omega,\sigma}x|_{\omega}=\sigma|x|_{\omega}\) for all \(\sigma>0\). Let \(\mathbb{R}^{n}_{\omega}\) be the metric measure space obtained by endowing \(\mathbb{R}^{n}\) with the associated metric and the standard Lebesgue measure. For \(\tau>0\), we write
\[B^{\omega}_{\tau}(x):=\{y\in\mathbb{R}^{n}\ |\ |x-y|_{\omega}<\tau\}\]
for the ball in \(\mathbb{R}^{n}_{\omega}\) around \(x\in\mathbb{R}^{n}\) with radius \(\tau\), and
\[B_{\tau}(x):=\{y\in\mathbb{R}^{n}\ |\ |x-y|<\tau\}\]
for the corresponding ball in \(\mathbb{R}^{n}\). Note that \(B^{\omega}_{\tau}(x)\) is a convex set with respect to the Euclidean metric, given that \(B^{\omega}_{\tau}(0)\) is the inverse image of \(B_{1}(0)\) under the
linear map \(A_{\omega,1/\tau}\). For the same reason, \(B_{1}^{\omega}(x)=B_{1}(x)\). Moreover, using that \(A_{\omega,\tau}\) has determinant \(\tau^{n+1}\), it follows that
\[|B_{\tau}^{\omega}(x)|=\tau^{n+1}|B_{1}(0)|. \tag{2.2}\]
In particular, \(\mathbb{R}_{\omega}^{n}\) is a doubling metric measure space.
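To picture \(B^{\omega}_{\tau}(0)\), note that by (2.1) one has \(|x|_{\omega}<\tau\) if and only if \(|A_{\omega,1/\tau}(x)|<1\), that is, \(\tau^{-4}|\omega\cdot x|^{2}+\tau^{-2}|\Pi_{\omega}^{\perp}x|^{2}<1\). Hence

\[\{x\in\mathbb{R}^{n}\mid|\omega\cdot x|<\tfrac{1}{\sqrt{2}}\tau^{2},\,|\Pi_{\omega}^{\perp}x|<\tfrac{1}{\sqrt{2}}\tau\}\subseteq B^{\omega}_{\tau}(0)\subseteq\{x\in\mathbb{R}^{n}\mid|\omega\cdot x|<\tau^{2},\,|\Pi_{\omega}^{\perp}x|<\tau\},\]

so that \(B^{\omega}_{\tau}(0)\) is comparable to a box with side length \(\tau^{2}\) in the direction of \(\omega\) and side lengths \(\tau\) in the orthogonal directions. This also makes (2.2) transparent.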
We will often work with the equivalent expression for \(|\cdot|_{\omega}\) provided by the following lemma.
**Lemma 2.1**.: _Let \(\omega\in S^{n-1}\) and \(x\in\mathbb{R}^{n}\). Then_
\[|x|_{\omega}\leq|\omega\cdot x|^{1/2}+|\Pi_{\omega}^{\perp}x|\leq 2|x|_{\omega}, \tag{2.3}\]
_and_
\[|x|\leq|x|_{\omega}\text{ if and only if }x\in B_{1}^{\omega}(0)=B_{1}(0). \tag{2.4}\]
Proof.: Set \(\sigma:=|x|_{\omega}\). Then, by definition and by (2.1),
\[1=|A_{\omega,1/\sigma}(x)|=\sqrt{\sigma^{-4}|\omega\cdot x|^{2}+\sigma^{-2}| \Pi_{\omega}^{\perp}x|^{2}}\]
and
\[|x|_{\omega}=\sigma=\sqrt{\sigma^{-2}|\omega\cdot x|^{2}+|\Pi_{\omega}^{\perp }x|^{2}}. \tag{2.5}\]
This implies (2.4). Moreover, trivial estimates yield \(|x|_{\omega}\geq|x|_{\omega}^{-1}|\omega\cdot x|\) and \(|x|_{\omega}\geq|\Pi_{\omega}^{\perp}x|\), which proves the second inequality in (2.3).
For the other inequality, it follows from (2.5) that, for all \(y\in\mathbb{R}^{n}\), one has \(|y|_{\omega}=|\Pi_{\omega}^{\perp}y|\) if \(\omega\cdot y=0\). Moreover, \(|y|_{\omega}=|y|_{\omega}^{-1}|\omega\cdot y|\) and \(|y|_{\omega}=|\omega\cdot y|^{1/2}\) if \(\Pi_{\omega}^{\perp}y=0\). Hence one may use that \(|\cdot|_{\omega}\) is a norm to write
\[|x|_{\omega}\leq|(\omega\cdot x)\omega|_{\omega}+|\Pi_{\omega}^{\perp}x|_{ \omega}=|\omega\cdot x|^{1/2}+|\Pi_{\omega}^{\perp}x|.\qed\]
It follows from Lemma 2.1 that the metric associated with \(|\cdot|_{\omega}\) is essentially the restriction to the subspace \(\mathbb{R}^{n}\times\{\omega\}\subseteq S^{*}\mathbb{R}^{n}\) of the metric on the cosphere bundle from [13, 28]. Similarly, the square of the metric associated with \(|\cdot|_{\omega}\) essentially coincides with the quasi-distance in [11].
Since \(\mathbb{R}_{\omega}^{n}\) is a doubling metric measure space, it is natural to consider the (centered) Hardy-Littlewood maximal function \(M_{\omega}\) on \(\mathbb{R}_{\omega}^{n}\), given by
\[M_{\omega}f(x):=\sup_{\tau>0}\fint_{B_{\tau}^{\omega}(x)}|f(y)|\mathrm{d}y \tag{2.6}\]
for \(x\in\mathbb{R}^{n}\) and \(f\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\). We record the following standard lemma concerning the boundedness of the vector-valued extension of \(M_{\omega}\) to \(L^{p}(\mathbb{R}^{n};L^{q}(\mathbb{R}_{+}))\), for \(p,q\in(1,\infty)\). Here and throughout, we write \(\mathbb{R}_{+}\) for the measure space \((0,\infty)\), endowed with the Haar measure \(\mathrm{d}\sigma/\sigma\).
**Lemma 2.2**.: _Let \(p,q\in(1,\infty)\). Then there exists a \(C\geq 0\) such that_
\[\Big{(}\int_{\mathbb{R}^{n}}\Big{(}\int_{0}^{\infty}|M_{\omega}F(\cdot,\sigma )(x)|^{q}\frac{\mathrm{d}\sigma}{\sigma}\Big{)}^{\frac{p}{q}}\mathrm{d}x \Big{)}^{\frac{1}{p}}\leq C\Big{(}\int_{\mathbb{R}^{n}}\Big{(}\int_{0}^{\infty }|F(x,\sigma)|^{q}\frac{\mathrm{d}\sigma}{\sigma}\Big{)}^{\frac{p}{q}}\mathrm{d} x\Big{)}^{\frac{1}{p}}\]
_for all \(\omega\in S^{n-1}\) and \(F\in L^{p}(\mathbb{R}^{n};L^{q}(\mathbb{R}_{+}))\)._
Proof.: For a fixed \(\omega\in S^{n-1}\), the statement is a special case of [30, Section II.5.14] or [25, Lemma 2.8]. But by rotation, the resulting constant is independent of \(\omega\).
In later sections, we will combine Lemma 2.2 with the following pointwise variant of [25, Lemma 2.3].
**Lemma 2.3**.: _Let \(r,\kappa>0\). Then there exists a \(C\geq 0\) such that the following holds for all \(\omega\in S^{n-1}\), \(\sigma>0\) and \(f\in L^{\infty}(\mathbb{R}^{n})\) with_
\[\operatorname{supp}(\widehat{f})\subseteq\{\xi\in\mathbb{R}^{n}\mid|\xi|_{ \omega}\leq\kappa\sigma^{-1}\}.\]
_For all \(x,y\in\mathbb{R}^{n}\) one has_
\[|f(x-y)|^{r}\leq C(1+\sigma^{-1}|y|_{\omega})^{n+1}M_{\omega}(|f|^{r})(x). \tag{2.7}\]
Proof.: After replacing \(f(x)\) by \(\sigma^{n+1}f(A_{\omega,\sigma}x)\) and using (2.2), one may suppose that \(\sigma=1\). The proof is then completely analogous to that of the corresponding inequality for the classical Hardy-Littlewood maximal function (see e.g. [32, Theorem 1.3.1]), using the mean value theorem and the fact that balls in \(\mathbb{R}^{n}_{\omega}\) are convex with respect to the Euclidean metric.
**Remark 2.4**.: As was just indicated, Lemma 2.3 is an anisotropic version of a lemma involving the classical (centered) Hardy-Littlewood maximal function \(M\), given by
\[Mf(x):=\sup_{\tau>0}\fint_{B_{\tau}(x)}|f(y)|\mathrm{d}y \tag{2.8}\]
for \(x\in\mathbb{R}^{n}\) and \(f\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\). More precisely, using notation as in Lemma 2.3,
\[|f(x-y)|^{r}\lesssim(1+\sigma^{-1}|y|)^{n}M(|f|^{r})(x) \tag{2.9}\]
for all \(x,y\in\mathbb{R}^{n}\), \(\sigma>0\) and \(f\in L^{\infty}(\mathbb{R}^{n})\) with \(\operatorname{supp}(\widehat{f})\subseteq\{\xi\in\mathbb{R}^{n}\mid|\xi|\leq \kappa\sigma^{-1}\}\). Also recall that Lemma 2.2 holds with \(M_{\omega}\) replaced by \(M\) (see [30, Theorem II.1.1]).
### Hardy spaces
In this subsection we collect some basics on specific parabolic Hardy spaces. We refer to [6, 7] for the general theory of parabolic Hardy spaces.
Throughout this article, fix a real-valued and radial \(\Psi\in C^{\infty}_{c}(\mathbb{R}^{n})\) with \(\operatorname{supp}(\Psi)\subseteq\{\xi\in\mathbb{R}^{n}\mid|\xi|\in[1/2,2]\}\), such that
\[\int_{0}^{\infty}\Psi(\sigma\xi)^{2}\frac{\mathrm{d}\sigma}{\sigma}=1\quad( \xi\neq 0). \tag{2.10}\]
Recall that \(H^{p}(\mathbb{R}^{n})\), for \(p\in[1,\infty)\), consists of those \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) such that
\[\Big{(}\int_{\mathbb{R}^{n}}\Big{(}\int_{0}^{\infty}|\Psi(\sigma D)^{2}f(x)|^{ 2}\frac{\mathrm{d}\sigma}{\sigma}\Big{)}^{p/2}\mathrm{d}x\Big{)}^{1/p}<\infty, \tag{2.11}\]
and this quantity is equivalent to \(\|f\|_{H^{p}(\mathbb{R}^{n})}\). For \(p\in(1,\infty)\) one has \(H^{p}(\mathbb{R}^{n})=L^{p}(\mathbb{R}^{n})\), and \(H^{1}(\mathbb{R}^{n})\) is the classical real Hardy space.
We define the parabolic Hardy spaces in an analogous manner, using the dilations from (2.1). In fact, to compare the classical Hardy spaces and the parabolic Hardy spaces, it will be convenient to make a specific choice of \(\Psi\). Namely, we will assume that \(\Psi(\xi)=\Psi_{0}(|\xi|)\), where \(\Psi_{0}\in C^{\infty}_{c}(\mathbb{R})\) is real-valued, with \(\Psi_{0}(\sigma)=0\) for \(\sigma\notin[1/2,2]\), such that
\[\int_{0}^{\infty}\Psi_{0}(\sigma)^{2}\frac{\mathrm{d}\sigma}{\sigma}=1.\]
Then \(\Psi\) has the required properties.
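Such a \(\Psi_{0}\) exists: given any real-valued \(\chi\in C^{\infty}_{c}(\mathbb{R})\) with \(\operatorname{supp}(\chi)\subseteq[1/2,2]\) and \(\chi\not\equiv 0\), one may set \(\Psi_{0}:=\chi\big{(}\int_{0}^{\infty}\chi(\sigma)^{2}\frac{\mathrm{d}\sigma}{\sigma}\big{)}^{-1/2}\). Note also that, for \(\xi\neq 0\), the substitution \(t=\sigma|\xi|\) and the scale invariance of the measure \(\frac{\mathrm{d}\sigma}{\sigma}\) yield

\[\int_{0}^{\infty}\Psi_{0}(\sigma|\xi|)^{2}\frac{\mathrm{d}\sigma}{\sigma}=\int_{0}^{\infty}\Psi_{0}(t)^{2}\frac{\mathrm{d}t}{t}=1,\]

which is (2.10) for \(\Psi(\xi)=\Psi_{0}(|\xi|)\).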
Next, for \(\omega\in S^{n-1}\) and \(\xi\in\mathbb{R}^{n}\), set \(\Psi_{\omega}(\xi)=\Psi_{0}(|\xi|_{\omega})\). Then \(\Psi_{\omega}\in C_{c}^{\infty}(\mathbb{R}^{n})\), due to the fact that \(\xi\mapsto|\xi|_{\omega}\) is smooth on \(\mathbb{R}^{n}\setminus\{0\}\) (see [6, Lemma 1.5]). Moreover, \(\operatorname{supp}(\Psi_{\omega})\subseteq\{\xi\in\mathbb{R}^{n}\mid|\xi|_{ \omega}\in[1/2,2]\}\), and
\[\int_{0}^{\infty}\Psi_{\omega}(A_{\omega,\sigma}\xi)^{2}\frac{\mathrm{d}\sigma }{\sigma}=1\quad(\xi\neq 0). \tag{2.12}\]
We write \(\Psi_{\omega}(A_{\omega,\sigma}D)\) for the Fourier multiplier with symbol \(\xi\mapsto\Psi_{\omega}(A_{\omega,\sigma}\xi)\). Note that \(\Psi_{\omega}(A_{\omega,\sigma}D)\) has kernel \(\sigma^{-(n+1)}\mathcal{F}^{-1}(\Psi_{\omega})(A_{\omega,1/\sigma}y)\), for \(y\in\mathbb{R}^{n}\). Moreover,
\[|\mathcal{F}^{-1}(\Psi_{\omega})(A_{\omega,1/\sigma}y)|\leq C_{N}(1+\sigma^{- 1}|y|_{\omega})^{-N} \tag{2.13}\]
for each \(N\geq 0\), where \(C_{N}\geq 0\) is a constant independent of \(\omega\), \(\sigma\) and \(y\). The fact that the constant is independent of \(\omega\) follows by rotation.
**Definition 2.5**.: Let \(p\in[1,\infty)\) and \(\omega\in S^{n-1}\). Then \(H^{p}_{\omega}(\mathbb{R}^{n})\) consists of those \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) such that
\[\|f\|_{H^{p}_{\omega}(\mathbb{R}^{n})}:=\Big{(}\int_{\mathbb{R}^{n}}\Big{(} \int_{0}^{\infty}|\Psi_{\omega}(A_{\omega,\sigma}D)^{2}f(x)|^{2}\frac{\mathrm{ d}\sigma}{\sigma}\Big{)}^{p/2}\mathrm{d}x\Big{)}^{1/p}<\infty.\]
In fact, this is not how the parabolic Hardy spaces were originally defined in [6, 7]. However, it follows from [25] that, up to norm equivalence, the relevant definitions are equivalent.
We note that \(H^{p}_{\omega}(\mathbb{R}^{n})\subseteq L^{p}(\mathbb{R}^{n})\) for all \(\omega\in S^{n-1}\) and \(p\in[1,\infty)\). Moreover,
\[H^{p}_{\omega}(\mathbb{R}^{n})=L^{p}(\mathbb{R}^{n})\text{ if }p\in(1,\infty), \tag{2.14}\]
with equivalence of norms (see [7, Theorem 1.2]).
We conclude this subsection with the following proposition, which relates the parabolic and classical Hardy space norms of a function with frequency support inside a paraboloid.
**Proposition 2.6**.: _Let \(p\in[1,\infty)\) and \(\kappa_{1},\kappa_{2}\geq 1\). Then there exists a \(C>0\) such that the following holds. Let \(\omega\in S^{n-1}\) and \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) satisfy_
\[\operatorname{supp}(\widehat{f}\,)\subseteq\{\xi\in\mathbb{R}^{n}\mid|\xi| \geq\kappa_{1}^{-1},|\hat{\xi}-\omega|\leq\kappa_{2}|\xi|^{-1/2}\}. \tag{2.15}\]
_Then \(f\in H^{p}(\mathbb{R}^{n})\) if and only if \(f\in H^{p}_{\omega}(\mathbb{R}^{n})\), in which case_
\[\frac{1}{C}\|f\|_{H^{p}(\mathbb{R}^{n})}\leq\|f\|_{H^{p}_{\omega}(\mathbb{R}^{ n})}\leq C\|f\|_{H^{p}(\mathbb{R}^{n})}.\]
Proof.: For \(p\in(1,\infty)\), the statement in fact follows directly from (2.14). However, our argument applies for general \(p\in[1,\infty)\), so there is no need to rely on (2.14).
Fix \(\omega\in S^{n-1}\) and \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) with the prescribed support properties. Since \(H^{p}(\mathbb{R}^{n})\) and \(H^{p}_{\omega}(\mathbb{R}^{n})\) are both contained in \(L^{p}(\mathbb{R}^{n})\), we may suppose in the remainder that \(f\in L^{p}(\mathbb{R}^{n})\), which will allow us to apply Lemma 2.3.
We first make a preliminary observation which will also be useful later on. Namely, we claim that, if \(\xi\in\mathbb{R}^{n}\) is such that \(|\xi|\geq\kappa_{1}^{-1}\) and \(|\hat{\xi}-\omega|\leq\kappa_{2}|\xi|^{-1/2}\), then one has
\[\kappa_{1}^{-1}\leq|\xi|_{\omega}\quad\text{and}\quad(4+\kappa_{1}^{2})^{-1/2 }|\xi|^{1/2}\leq|\xi|_{\omega}\leq(1+\kappa_{2})|\xi|^{1/2}. \tag{2.16}\]
To prove this claim, note that the first statement follows from (2.4). On the other hand, one has \(|\omega\cdot\xi|^{1/2}\leq|\xi|^{1/2}\) and, by assumption,
\[|\Pi^{\perp}_{\omega}\xi|\leq\sqrt{|\omega\cdot\xi-|\xi||^{2}+|\Pi^{\perp}_{ \omega}\xi|^{2}}=|\xi-|\xi|\omega|\leq\kappa_{2}|\xi|^{1/2}.\]
Now (2.3) yields the right-most inequality in (2.16). For the remaining inequality, we can combine the first inequality in (2.16) with (2.3) to write
\[|\xi|\leq|\omega\cdot\xi|+|\Pi_{\omega}^{\perp}\xi|\leq 1+|\omega\cdot\xi|+|\Pi_{ \omega}^{\perp}\xi|^{2}\leq\kappa_{1}^{2}|\xi|_{\omega}^{2}+4|\xi|_{\omega}^{2},\]
which proves the claim.
Next, for a general \(\xi\in\mathbb{R}^{n}\) and \(\tau>0\), one has \(\Psi(\tau\xi)=0\) unless \(|\xi|\in[(2\tau)^{-1},2\tau^{-1}]\). Similarly, for \(\sigma>0\) one has \(\Psi(A_{\omega,\sigma}\xi)=0\) unless \(|A_{\omega,\sigma}\xi|_{\omega}\in[1/2,2]\), i.e. unless \(|\xi|_{\omega}\in[(2\sigma)^{-1},2\sigma^{-1}]\). Hence (2.16) implies that
\[\frac{1}{8(4+\kappa_{1}^{2})}\sigma^{2}\leq\tau\leq 8(1+\kappa_{2})^{2}\sigma^{2} \tag{2.17}\]
whenever \(\Psi_{\omega}(A_{\omega,\sigma}D)\Psi(\tau D)f\neq 0\).
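Indeed, combining \(|\xi|\geq(2\tau)^{-1}\) with \((4+\kappa_{1}^{2})^{-1/2}|\xi|^{1/2}\leq|\xi|_{\omega}\leq 2\sigma^{-1}\) gives \((2\tau)^{-1/2}\leq 2(4+\kappa_{1}^{2})^{1/2}\sigma^{-1}\), which rearranges to the first inequality in (2.17), while combining \((2\sigma)^{-1}\leq|\xi|_{\omega}\leq(1+\kappa_{2})|\xi|^{1/2}\) with \(|\xi|\leq 2\tau^{-1}\) gives \(\tau^{1/2}\leq 2\sqrt{2}(1+\kappa_{2})\sigma\), which rearranges to the second.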
Now fix \(r\in(0,\min(p,2))\). We claim that, with notation as in (2.6) and (2.8),
\[|\Psi_{\omega}(A_{\omega,\sigma}D)\Psi(\tau D)f(x)|\lesssim(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{1/r} \tag{2.18}\]
and
\[|\Psi_{\omega}(A_{\omega,\sigma}D)\Psi(\tau D)f(x)|\lesssim(M(|\Psi_{\omega}(A_{\omega,\sigma}D)f|^{r})(x))^{1/r}, \tag{2.19}\]
for implicit constants independent of \(\omega\), \(f\), \(\sigma,\tau>0\) and \(x\in\mathbb{R}^{n}\). To see why this is true, first consider (2.18), and let \(N>(n+1)(1+\frac{1}{r})\). We may suppose that \(\sigma\) and \(\tau\) satisfy (2.17). Then, by (2.16),
\[\operatorname{supp}(\mathcal{F}(\Psi(\tau D)f))\subseteq\{\xi\in\mathbb{R}^{n }\mid|\xi|_{\omega}\leq 4(1+\kappa_{2})(4+\kappa_{1}^{2})^{1/2}\sigma^{-1}\}.\]
Moreover, \(f\in L^{p}(\mathbb{R}^{n})\), by assumption, and then \(\Psi(\tau D)f\in L^{\infty}(\mathbb{R}^{n})\), by a standard Sobolev embedding. Hence (2.13), Lemma 2.3 and (2.3) yield
\[|\Psi_{\omega}(A_{\omega,\sigma}D)\Psi(\tau D)f(x)|\lesssim\int_{\mathbb{R}^{n}}|\Psi(\tau D)f(x-y)|\sigma^{-(n+1)}(1+\sigma^{-1}|y|_{\omega})^{-N}\mathrm{d}y\] \[\lesssim(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{1/r}\int_{\mathbb{R}^{n}}(1+\sigma^{-1}|y|_{\omega})^{\frac{n+1}{r}}(1+\sigma^{-1}|y|_{\omega})^{-N}\sigma^{-(n+1)}\mathrm{d}y\] \[=(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{1/r}\int_{\mathbb{R}^{n}}(1+|y|_{\omega})^{-(N-\frac{n+1}{r})}\mathrm{d}y\lesssim(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{1/r}.\]
The argument for (2.19) is analogous, although one relies on (2.9) instead of Lemma 2.3, which is allowed because of (2.16). This proves the claim.
Now suppose that \(f\in H^{p}(\mathbb{R}^{n})\), and set \(a:=(8(4+\kappa_{1}^{2}))^{-1}\) and \(b:=8(1+\kappa_{2})^{2}\). For \(\sigma>0\) and \(x\in\mathbb{R}^{n}\), (2.10), (2.18) and Hölder's inequality yield
\[|\Psi_{\omega}(A_{\omega,\sigma}D)f(x)|^{2} \leq\Big{(}\int_{0}^{\infty}|\Psi_{\omega}(A_{\omega,\sigma}D)\Psi(\tau D)^{2}f(x)|\frac{\mathrm{d}\tau}{\tau}\Big{)}^{2}\] \[\lesssim\int_{a\sigma^{2}}^{b\sigma^{2}}(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{2/r}\frac{\mathrm{d}\tau}{\tau}.\]
Hence
\[\int_{0}^{\infty}|\Psi_{\omega}(A_{\omega,\sigma}D)f(x)|^{2}\frac{\mathrm{d}\sigma}{\sigma} \lesssim\int_{0}^{\infty}\int_{\sqrt{\tau}/\sqrt{b}}^{\sqrt{\tau}/\sqrt{a}}(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{2/r}\frac{\mathrm{d}\sigma}{\sigma}\frac{\mathrm{d}\tau}{\tau}\] \[\lesssim\int_{0}^{\infty}(M_{\omega}(|\Psi(\tau D)f|^{r})(x))^{2/r}\frac{\mathrm{d}\tau}{\tau}.\]
Finally, we can use that \(M_{\omega}\) is bounded on \(L^{p/r}(\mathbb{R}^{n};L^{2/r}(0,\infty))\), cf. Lemma 2.2:
\[\|f\|_{H^{p}_{\omega}(\mathbb{R}^{n})}=\Big{\|}\Big{(}\int_{0}^{\infty}|\Psi_{\omega}(A_{\omega,\sigma}D)f|^{2}\frac{\mathrm{d}\sigma}{\sigma}\Big{)}^{1/2}\Big{\|}_{L^{p}(\mathbb{R}^{n})}\] \[\lesssim\Big{\|}\Big{(}\int_{0}^{\infty}(M_{\omega}(|\Psi(\tau D)f|^{r}))^{2/r}\frac{\mathrm{d}\tau}{\tau}\Big{)}^{1/2}\Big{\|}_{L^{p}(\mathbb{R}^{n})}\] \[\lesssim\Big{\|}\Big{(}\int_{0}^{\infty}(|\Psi(\tau D)f|^{r})^{2/r}\frac{\mathrm{d}\tau}{\tau}\Big{)}^{r/2}\Big{\|}_{L^{p/r}(\mathbb{R}^{n})}^{1/r}\eqsim\|f\|_{H^{p}(\mathbb{R}^{n})},\]
where in the final step we used (2.11). This shows that \(f\in H^{p}_{\omega}(\mathbb{R}^{n})\), with the required norm bound. The proof of the reverse inequality is analogous, relying on (2.19) and the vector-valued boundedness of the classical Hardy-Littlewood maximal function.
### Fourier integral operators
Here we collect some background on Fourier integral operators. We refer to [8, 16] for the general theory of Fourier integral operators and the associated notions from symplectic geometry. On the other hand, in this article we will mostly work with concrete oscillatory integral representations for such operators, and this subsection contains the relevant definitions.
For \(m\in\mathbb{R}\) and \(\rho,\delta\in[0,1]\), recall that Hörmander's symbol class \(S^{m}_{\rho,\delta}\) consists of all \(a\in C^{\infty}(\mathbb{R}^{2n})\) such that
\[\sup_{(x,\eta)\in\mathbb{R}^{2n}}\langle\eta\rangle^{-m+\rho|\alpha|-\delta| \beta|}|\partial_{x}^{\beta}\partial_{\eta}^{\alpha}a(x,\eta)|<\infty \tag{2.20}\]
for all \(\alpha,\beta\in\mathbb{Z}^{n}_{+}\). In [13] a slightly different symbol class was considered, the elements of which have additional regularity when differentiated in the radial direction in the fiber variable. This class, denoted by \(S^{m}_{\rho,\delta,1}\), consists of all \(a\in C^{\infty}(\mathbb{R}^{2n})\) such that
\[\sup_{(x,\eta)\in\mathbb{R}^{2n}\setminus o}\langle\eta\rangle^{-m+\rho| \alpha|-\delta|\beta|+\gamma}|(\hat{\eta}\cdot\partial_{\eta})^{\gamma} \partial_{x}^{\beta}\partial_{\eta}^{\alpha}a(x,\eta)|<\infty\]
for all \(\alpha,\beta\in\mathbb{Z}^{n}_{+}\) and \(\gamma\in\mathbb{Z}_{+}\). Note that \(S^{m}_{1/2,1/2,1}\) contains \(S^{m}_{1,1/2}\) but is strictly contained in \(S^{m}_{1/2,1/2}\), which is the critical symbol class for the calculus of Fourier integral operators, in several respects.
In the following definition, and throughout the rest of this article, \(o:=\mathbb{R}^{n}\times\{0\}\subseteq\mathbb{R}^{2n}\) denotes the zero section.
**Definition 2.7**.: Let \(m\in\mathbb{R}\), \(a\in S^{m}_{1/2,1/2,1}\) and \(\Phi\in C^{\infty}(\mathbb{R}^{2n}\setminus o)\) be such that \(\Phi\) is real-valued and positively homogeneous of degree \(1\) in the \(\eta\)-variable, and \(\det\partial_{x\eta}^{2}\Phi(x,\eta)\neq 0\) for \((x,\eta)\in\mathrm{supp}(a)\setminus o\). Set
\[Tf(x):=\int_{\mathbb{R}^{n}}e^{i\Phi(x,\eta)}a(x,\eta)\widehat{f}(\eta)\mathrm{ d}\eta \tag{2.21}\]
for \(f\in\mathcal{S}(\mathbb{R}^{n})\) and \(x\in\mathbb{R}^{n}\). Then \(T\) is a Fourier integral operator of order \(m\) and type \((1/2,1/2,1)\) in _standard form_. If, additionally, the following conditions hold:
1. \(\sup_{(x,\eta)\in\mathbb{R}^{2n}\setminus o}|\partial_{x}^{\beta}\partial_{ \eta}^{\alpha}\Phi(x,\hat{\eta})|<\infty\) for all \(\alpha,\beta\in\mathbb{Z}^{n}_{+}\) with \(|\alpha|+|\beta|\geq 2\);
2. \(\inf_{(x,\eta)\in\mathbb{R}^{2n}\setminus o}|\det\partial_{x\eta}^{2}\Phi(x,\eta)|>0\);
3. \((\partial_{\eta}\Phi(x,\eta),\eta)\mapsto(x,\partial_{x}\Phi(x,\eta))\) is a well-defined bijection on \(\mathbb{R}^{2n}\setminus o\),
then we say that \(T\) is associated with a _global canonical graph_.
**Remark 2.8**.: If (3) holds, then \((\partial_{\eta}\Phi(x,\eta),\eta)\mapsto(x,\partial_{x}\Phi(x,\eta))\) is in fact a homogeneous canonical transformation on \(\mathbb{R}^{2n}\setminus o\), and the canonical relation of \(T\) is the graph of this transformation.
If (1) and (2) hold, then (3) holds if and only if \(\eta\mapsto\partial_{x}\Phi(x,\eta)\) is a bijection on \(\mathbb{R}^{n}\setminus\{0\}\) for each \(x\in\mathbb{R}^{n}\), by Hadamard's global inverse function theorem [19, Theorem 6.2.8]. Another application of the global inverse function theorem then shows that condition (3) is superfluous for \(n\geq 3\). Moreover, (3) holds if
\(\Phi(x,\eta)=x\cdot\eta+\phi(\eta)\) for some \(\phi\in C^{\infty}(\mathbb{R}^{n}\setminus\{0\})\) which is positively homogeneous of degree \(1\). This case will be considered frequently by us, and it is characterized by the property that \(\partial_{x}\Phi(x,\eta)=\eta\) for all \((x,\eta)\in\mathbb{R}^{2n}\setminus o\).
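For instance, for fixed \(t\in\mathbb{R}\), the Euclidean half-wave propagator \(e^{it\sqrt{-\Delta}}\) is, up to a normalizing constant which may be absorbed into the symbol, of the form (2.21) with \(\Phi(x,\eta)=x\cdot\eta+t|\eta|\) and \(a\equiv 1\). In this case \(\partial^{2}_{x\eta}\Phi\) is the identity matrix, and conditions (1)-(3) are readily verified.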
Recall that a compactly supported Fourier integral operator of order \(m\) and type \((1,0)\), associated with a local canonical graph, can, modulo an operator with a Schwartz kernel which is a Schwartz function, be expressed as a finite sum of operators which in appropriate coordinate systems are Fourier integral operators in standard form (see e.g. [29, Proposition 6.2.4]), with symbol \(a\in S^{m}_{1,0}\). In this case, the symbol \(a\) has compact support in the \(x\)-variable, and thus (1) and (2) hold on the support of \(a\), while the map in (3) is a locally well-defined homogeneous canonical transformation. By contrast, for operators associated with a global canonical graph as in Definition 2.7, the symbols are not required to have compact spatial support, but the conditions on the phase function hold uniformly on all of \(\mathbb{R}^{2n}\setminus o\).
## 3. Wave packet transforms
In this section we introduce the parabolic frequency localizations which appear in the definition of the function spaces for decoupling, and we use them to define associated wave packets and wave packet transforms. We then prove some properties of these transforms, most notably the boundedness of specific operators on phase space associated with them.
### Wave packets
Throughout the rest of this article, we fix a family \((\varphi_{\omega})_{\omega\in S^{n-1}}\subseteq C^{\infty}(\mathbb{R}^{n})\) with the following properties:
1. For all \(\omega\in S^{n-1}\) and \(\xi\neq 0\), one has \(\varphi_{\omega}(\xi)=0\) if \(|\xi|<\frac{1}{4}\) or \(|\hat{\xi}-\omega|>|\xi|^{-1/2}\).
2. For all \(\alpha\in\mathbb{Z}^{n}_{+}\) and \(\beta\in\mathbb{Z}_{+}\), there exists a \(C_{\alpha,\beta}\geq 0\) such that \[|(\omega\cdot\partial_{\xi})^{\beta}\partial_{\xi}^{\alpha}\varphi_{\omega}( \xi)|\leq C_{\alpha,\beta}|\xi|^{\frac{n-1}{4}-\frac{|\alpha|}{2}-\beta}\] for all \(\omega\in S^{n-1}\) and \(\xi\neq 0\).
3. The map \((\xi,\omega)\mapsto\varphi_{\omega}(\xi)\) is measurable on \(\mathbb{R}^{n}\times S^{n-1}\), and (3.1) \[\int_{S^{n-1}}\varphi_{\omega}(\xi)^{2}\mathrm{d}\omega=1\] for all \(\xi\in\mathbb{R}^{n}\) with \(|\xi|\geq 1\).
**Remark 3.1**.: For the construction of such a collection, see e.g. [21, Section 3.1]. In fact, the functions constructed there have slightly different support properties than in (1). However, the construction in [21, Section 3.1] also yields functions satisfying (1), if one shrinks the support of the function \(\varphi\) used there, and slightly modifies the definition of \(\varphi_{\omega}\) itself. Moreover, instead of (3.1), from [21, Remark 3.3] one obtains
\[\int_{S^{n-1}}m(\xi)\varphi_{\omega}(\xi)^{2}\mathrm{d}\omega=1\quad(|\xi| \geq 1/2)\]
for a standard symbol \(m\in S^{0}(\mathbb{R}^{n})\). However, the proof of [21, Lemma 3.2] also shows that \(\sqrt{m}\in S^{0}(\mathbb{R}^{n})\), so that one may replace \(\varphi_{\omega}\) by \(\sqrt{m}\varphi_{\omega}\) to arrive at (3.1).
We note that the exact support properties of the \(\varphi_{\omega}\) are not essential for this article; all the arguments go through if the conditions in (1) are modified up to fixed constants. In our setting it is useful to note that, by (2.4) and (2.16),
\[\mathrm{supp}(\varphi_{\omega})\subseteq\{\xi\in\mathbb{R}^{n}\mid\tfrac{1}{4}\leq|\xi|_{\omega},\tfrac{1}{5}|\xi|^{1/2}\leq|\xi|_{\omega}\leq 2|\xi|^{1/2}\}\]
for all \(\omega\in S^{n-1}\).
Next, we use the collection \((\varphi_{\omega})_{\omega\in S^{n-1}}\) to construct wave packets. For \(\omega\in S^{n-1}\), \(\sigma>0\) and \(\xi\in\mathbb{R}^{n}\), set \(\psi_{\omega,\sigma}(\xi):=\Psi(\sigma\xi)\varphi_{\omega}(\xi)\), where \(\Psi\in C_{c}^{\infty}(\mathbb{R}^{n})\) is as in (2.10). Also let
\[\rho(\xi):=\Big{(}1-\int_{S^{n-1}}\varphi_{\omega}(\xi)^{2}\mathrm{d}\omega \Big{)}^{1/2}. \tag{3.2}\]
These functions have the following properties, analogous to those in [13, Lemma 4.1].
**Lemma 3.2**.: _For all \(\omega\in S^{n-1}\) and \(\sigma>0\), one has \(\psi_{\omega,\sigma}\in C_{c}^{\infty}(\mathbb{R}^{n})\). Each \(\xi\in\mathrm{supp}(\psi_{\omega,\sigma})\) satisfies \(\frac{1}{2}\sigma^{-1}\leq|\xi|\leq 2\sigma^{-1}\) and \(|\hat{\xi}-\omega|\leq 2\sqrt{\sigma}\). For all \(\xi\in\mathbb{R}^{n}\) with \(|\xi|\geq 1\), one has_
\[\int_{0}^{\infty}\int_{S^{n-1}}\psi_{\omega,\sigma}(\xi)^{2}\mathrm{d}\omega \frac{\mathrm{d}\sigma}{\sigma}=1.\]
_For all \(\alpha\in\mathbb{Z}_{+}^{n}\) and \(\beta\in\mathbb{Z}_{+}\), there exists a constant \(C=C(\alpha,\beta)\geq 0\) such that_
\[|(\omega\cdot\partial_{\xi})^{\beta}\partial_{\xi}^{\alpha}\psi_{\omega,\sigma }(\xi)|\leq C\sigma^{-\frac{n-1}{4}+\frac{|\alpha|}{2}+\beta} \tag{3.3}\]
_for all \(\omega\in S^{n-1}\), \(\sigma>0\) and \(\xi\in\mathbb{R}^{n}\). Also, for each \(N\geq 0\) there exists a \(C_{N}\geq 0\) such that, for all \(\omega\in S^{n-1}\), \(\sigma>0\) and \(x\in\mathbb{R}^{n}\),_
\[|\mathcal{F}^{-1}(\psi_{\omega,\sigma})(x)|\leq C_{N}\sigma^{-\frac{3n+1}{4}}( 1+\sigma^{-1}|x|_{\omega}^{2})^{-N}. \tag{3.4}\]
_In particular, \(\{\sigma^{\frac{n-1}{4}}\mathcal{F}^{-1}(\psi_{\omega,\sigma})\mid\omega\in S ^{n-1},\sigma>0\}\subseteq L^{1}(\mathbb{R}^{n})\) is uniformly bounded. Finally, \(\rho\in C_{c}^{\infty}(\mathbb{R}^{n})\) is such that \(\rho(\xi)=0\) if \(|\xi|\geq 1\), and \(\rho(\xi)=1\) if \(|\xi|\leq 1/4\)._
Proof.: The proof is completely analogous to that of [13, Lemma 4.1]. The first few and the last two statements follow from properties (1), (2) and (3) of \((\varphi_{\omega})_{\omega\in S^{n-1}}\), and those of \(\Psi\). One then obtains (3.4) from integration by parts, using also the equivalent expression for \(|\cdot|_{\omega}\) from Lemma 2.1. Finally, since \(\rho\) is radial, to see that \(\rho\) is smooth it suffices to show that all its derivatives vanish where \(\rho(\xi)=0\).
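Note that, by (3.4) and Lemma 2.1, \(\mathcal{F}^{-1}(\psi_{\omega,\sigma})\) is concentrated, up to rapidly decaying tails, on the set where \(\sigma^{-1}|x|_{\omega}^{2}\lesssim 1\), i.e. where \(|\omega\cdot x|\lesssim\sigma\) and \(|\Pi_{\omega}^{\perp}x|\lesssim\sigma^{1/2}\). This is a slab with side length \(\sigma\) in the direction of \(\omega\) and side lengths \(\sigma^{1/2}\) in the orthogonal directions, dual to the frequency support of \(\psi_{\omega,\sigma}\).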
### Wave packet transforms
In this subsection we introduce wave packet transforms that lift functions on \(\mathbb{R}^{n}\) to
\[S_{+}^{*}\mathbb{R}^{n}:=S^{*}\mathbb{R}^{n}\times\mathbb{R}_{+}=\mathbb{R}^{ n}\times S^{n-1}\times(0,\infty),\]
endowed with the measure \(\mathrm{d}x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}\). Note that the map \((x,\omega,\sigma)\mapsto(x,\sigma^{-1}\omega)\) identifies \(S_{+}^{*}\mathbb{R}^{n}\) with \(\mathbb{R}^{n}\times(\mathbb{R}^{n}\setminus\{0\})\), i.e. phase space minus the zero section.
To properly define our wave packet transforms, it will be convenient to work with a class of test functions on \(S_{+}^{*}\mathbb{R}^{n}\), and the associated distributions. As in [13, Section 2.2], we let \(\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\) consist of those \(F\in L^{\infty}(S_{+}^{*}\mathbb{R}^{n})\) such that
\[(x,\omega,\sigma)\mapsto(1+|x|+\Upsilon(\sigma)^{-1})^{N}F(x,\omega,\sigma)\]
is an element of \(L^{\infty}(S_{+}^{*}\mathbb{R}^{n})\) for all \(N\geq 0\), endowed with the topology generated by the corresponding weighted \(L^{\infty}\)-norms. Here \(\Upsilon:(0,\infty)\to(0,1]\) is given by
\[\Upsilon(t):=\min(t,t^{-1})\quad(t>0).\]
Let \(\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\) be the space of continuous linear functionals \(G:\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\to\mathbb{C}\), endowed with the topology induced by \(\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\). We denote the duality between \(F\in\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\)
and \(G\in\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\) by \(\langle F,G\rangle_{S_{+}^{*}\mathbb{R}^{n}}\). If \(G\in L^{1}_{\rm loc}(S_{+}^{*}\mathbb{R}^{n})\) is such that
\[F\mapsto\int_{S_{+}^{*}\mathbb{R}^{n}}F(x,\omega,\sigma)G(x,\omega,\sigma) \mathrm{d}x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}\]
defines an element of \(\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\), then we write \(G\in\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\).
We will typically work with the function space
\[L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S_{+}^{*}\mathbb{R}^{n}):=L^{q}(S^{n-1};L ^{p}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+}))) \tag{3.5}\]
on \(S_{+}^{*}\mathbb{R}^{n}\), for \(p,q\in[1,\infty]\), consisting of all \(F\in L^{1}_{\rm loc}(S_{+}^{*}\mathbb{R}^{n})\) such that
\[\|F\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S_{+}^{*}\mathbb{R}^{n})}:=\Big{(} \int_{S^{n-1}}\Big{(}\int_{\mathbb{R}^{n}}\Big{(}\int_{0}^{\infty}|F(x,\omega, \sigma)|^{2}\frac{\mathrm{d}\sigma}{\sigma}\Big{)}^{p/2}\mathrm{d}x\Big{)}^{q /p}\mathrm{d}\omega\Big{)}^{1/q}<\infty.\]
It is then relevant to note that
\[\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\subseteq L^{q}_{\omega}L^{p}_{x}L^{2}_{ \sigma}(S_{+}^{*}\mathbb{R}^{n}) \tag{3.6}\]
for all \(p,q\in[1,\infty]\), and the inclusion is dense if \(p,q\in[1,\infty)\). Moreover,
\[L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S_{+}^{*}\mathbb{R}^{n})\subseteq \mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n}) \tag{3.7}\]
for all \(p,q\in[1,\infty]\).
We can now define our wave packet transform, in a similar manner as in [13]. For \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) and \((x,\omega,\sigma)\in S_{+}^{*}\mathbb{R}^{n}\), set
\[Wf(x,\omega,\sigma):=\begin{cases}\psi_{\omega,\sigma}(D)f(x)&\text{if }\sigma\in(0,8),\\ \mathbf{1}_{[8,8e]}(\sigma)\rho(D)f(x)&\text{if }\sigma\geq 8.\end{cases} \tag{3.8}\]
Next, for \(F\in\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\) and \(x\in\mathbb{R}^{n}\), set
\[VF(x):=\int_{0}^{8}\int_{S^{n-1}}\psi_{\nu,\tau}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}+\int_{8}^{8e}\int_{S^{n-1}}\rho(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}. \tag{3.9}\]
These transforms have the following properties.
**Proposition 3.3**.: _The following statements hold:_
1. \(W:L^{2}(\mathbb{R}^{n})\to L^{2}(S_{+}^{*}\mathbb{R}^{n})\) _is an isometry;_
2. \(W:\mathcal{S}(\mathbb{R}^{n})\to\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\) _and_ \(W:\mathcal{S}^{\prime}(\mathbb{R}^{n})\to\mathcal{J}^{\prime}(S_{+}^{*} \mathbb{R}^{n})\) _are continuous;_
3. \(V:\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\to\mathcal{S}(\mathbb{R}^{n})\) _is continuous, and_ \(V\) _extends uniquely to a continuous map_ \(V:\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\to\mathcal{S}^{\prime}( \mathbb{R}^{n})\)_;_
4. \(\langle VF,g\rangle_{\mathbb{R}^{n}}=\langle F,Wg\rangle_{S_{+}^{*}\mathbb{R}^ {n}}\) _for all_ \(F\in\mathcal{J}^{\prime}(S_{+}^{*}\mathbb{R}^{n})\) _and_ \(g\in\mathcal{S}(\mathbb{R}^{n})\)_, and_ \(\langle Wf,G\rangle_{S_{+}^{*}\mathbb{R}^{n}}=\langle f,VG\rangle_{\mathbb{R} ^{n}}\) _for all_ \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) _and_ \(G\in\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\)_;_
5. \(VWf=f\) _for all_ \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\)_._
Proof.: The statement and the proof are essentially contained in [13, Lemma 4.3]. More precisely, (1) follows from the definition of \(\rho\) in (3.2) and from the properties of \(\Psi\) and the \(\varphi_{\omega}\), given that we have assumed that the surface measure \(\mathrm{d}\omega\) on \(S^{n-1}\) is unit normalized. Moreover, it is proved in [13, Lemma 4.3] that \(W:\mathcal{S}(\mathbb{R}^{n})\to\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\) and \(V:\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\to\mathcal{S}(\mathbb{R}^{n})\) are continuous. The remaining statements follow from this, upon confirming that \(\langle Wf,G\rangle_{S_{+}^{*}\mathbb{R}^{n}}=\langle f,VG\rangle_{\mathbb{R}^{n}}\) for all \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) and \(G\in\mathcal{J}(S_{+}^{*}\mathbb{R}^{n})\).
### Operators on phase space
In this subsection we prove a key result about the boundedness of certain operators on function spaces over \(S^{*}_{+}\mathbb{R}^{n}\).
For the proof of Proposition 4.5 below, it will be convenient for us to introduce wave packet transforms associated with other parabolic frequency localizations. More precisely, let \((\widetilde{\varphi}_{\omega})_{\omega\in S^{n-1}}\subseteq C^{\infty}(\mathbb{ R}^{n})\) be a family with the same properties (1), (2) and (3) that \((\varphi_{\omega})_{\omega\in S^{n-1}}\) has, from Section 3.1. Let \(\widetilde{\Psi}\in C^{\infty}_{c}(\mathbb{R}^{n})\) be such that \(\operatorname{supp}(\widetilde{\Psi})\subseteq\{\xi\in\mathbb{R}^{n}\mid 1/2 \leq|\xi|\leq 2\}\) and such that (2.10) holds with \(\Psi\) replaced by \(\widetilde{\Psi}\). For \(\omega\in S^{n-1}\), \(\sigma>0\) and \(\xi\in\mathbb{R}^{n}\), set \(\widetilde{\psi}_{\omega,\sigma}(\xi):=\widetilde{\Psi}(\sigma\xi)\widetilde{ \varphi}_{\omega}(\xi)\), and write
\[\widetilde{\rho}(\xi):=\Big{(}1-\int_{S^{n-1}}\widetilde{\varphi}_{\omega}( \xi)^{2}\mathrm{d}\omega\Big{)}^{1/2}. \tag{3.10}\]
Now set, for \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) and \((x,\omega,\sigma)\in S^{*}_{+}\mathbb{R}^{n}\),
\[\widetilde{W}f(x,\omega,\sigma):=\begin{cases}\widetilde{\psi}_{\omega,\sigma}(D)f(x)&\text{if }\sigma\in(0,8),\\ \mathbf{1}_{[8,8e]}(\sigma)\widetilde{\rho}(D)f(x)&\text{if }\sigma\geq 8.\end{cases} \tag{3.11}\]
Similarly, for \(F\in\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\) and \(x\in\mathbb{R}^{n}\), write
\[\widetilde{V}F(x):=\int_{0}^{8}\int_{S^{n-1}}\widetilde{\psi}_{\nu,\tau}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}+\int_{8}^{8e}\int_{S^{n-1}}\widetilde{\rho}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}. \tag{3.12}\]
These transforms have the same properties as \(W\) and \(V\), from Proposition 3.3.
**Theorem 3.4**.: _Let \(p,q\in(1,\infty)\). Then_
\[W\widetilde{V}:L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}) \to L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\]
_boundedly._
Proof.: Let \(p,q\in(1,\infty)\) and \(F\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\). We have that
\[W\widetilde{V}F=I+II+III+IV,\]
where, for \((x,\omega,\sigma)\in S^{*}_{+}\mathbb{R}^{n}\),
\[I(x,\omega,\sigma)=\psi_{\omega,\sigma}(D)\int_{0}^{8}\int_{S^{n-1}} \widetilde{\psi}_{\nu,\tau}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{ d}\tau}{\tau},\]
\[II(x,\omega,\sigma)=\mathbf{1}_{[8,8e]}(\sigma)\rho(D)\int_{0}^{8}\int_{S^{n-1 }}\widetilde{\psi}_{\nu,\tau}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{ d}\tau}{\tau},\]
\[III(x,\omega,\sigma)=\psi_{\omega,\sigma}(D)\int_{S^{n-1}}\int_{8}^{8e} \widetilde{\rho}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau},\]
\[IV(x,\omega,\sigma)=\mathbf{1}_{[8,8e]}(\sigma)\rho(D)\int_{S^{n-1}}\int_{8}^{8 e}\widetilde{\rho}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}.\]
To estimate \(I\), we start by using the support properties of \(\psi_{\omega,\sigma}\) and \(\widetilde{\psi}_{\nu,\tau}\) and a change of variable to obtain the following:
\[|I(x,\omega,\sigma)| =\left|\psi_{\omega,\sigma}(D)\int_{1/4}^{4}\int_{S^{n-1}} \widetilde{\psi}_{\nu,\alpha\sigma}(D)F(\cdot,\nu,\alpha\sigma)(x)\mathrm{d} \nu\frac{\mathrm{d}\alpha}{\alpha}\right|\] \[\lesssim\int_{1/4}^{4}\left|\sigma^{\frac{n-1}{2}}\psi_{\omega, \sigma}(D)\fint_{|\nu-\omega|\leq 6\sqrt{\sigma}}\widetilde{\psi}_{\nu,\alpha \sigma}(D)F(\cdot,\nu,\alpha\sigma)(x)\mathrm{d}\nu\right|\frac{d\alpha}{ \alpha}.\]
We now use the kernel estimates (3.4) to obtain the pointwise domination
\[|I(x,\omega,\sigma)|\lesssim\int_{1/4}^{4}M_{\omega}\left|\fint_{|\nu-\omega|\leq 6 \sqrt{\sigma}}\sigma^{\frac{n-1}{4}}\widetilde{\psi}_{\nu,\alpha\sigma}(D)F( \cdot,\nu,\alpha\sigma)(x)\mathrm{d}\nu\right|\frac{d\alpha}{\alpha},\]
where \(M_{\omega}\) is as in (2.6). We then apply, for a fixed \(\omega\in S^{n-1}\), the vector-valued boundedness of \(M_{\omega}\) on \(L^{p}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+};\frac{d\sigma}{\sigma}))\) from Lemma 2.2.
This gives, after an application of Minkowski's inequality in \(\alpha\),

\[\|I(x,\omega,\sigma)\|_{L^{p}_{x}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+},\frac{\mathrm{d}\sigma}{\sigma}))}\] \[\lesssim\int_{1/4}^{4}\left\|M_{\omega}\left|\fint_{|\nu-\omega|\leq 6\sqrt{\sigma}}\sigma^{\frac{n-1}{4}}\widetilde{\psi}_{\nu,\alpha\sigma}(D)F(\cdot,\nu,\alpha\sigma)(x)\mathrm{d}\nu\right|\right\|_{L^{p}_{x}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+},\frac{\mathrm{d}\sigma}{\sigma}))}\frac{\mathrm{d}\alpha}{\alpha}\] \[\lesssim\int_{1/4}^{4}\left\|\fint_{|\nu-\omega|\leq 6\sqrt{\sigma}}\sigma^{\frac{n-1}{4}}\widetilde{\psi}_{\nu,\alpha\sigma}(D)F(\cdot,\nu,\alpha\sigma)(x)\mathrm{d}\nu\right\|_{L^{p}_{x}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+},\frac{\mathrm{d}\sigma}{\sigma}))}\frac{\mathrm{d}\alpha}{\alpha}\] \[=\int_{1/4}^{4}\left\|\fint_{|\nu-\omega|\leq 6\sqrt{\sigma}}G(x,\nu,\alpha\sigma)\mathrm{d}\nu\right\|_{L^{p}_{x}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+},\frac{\mathrm{d}\sigma}{\sigma}))}\frac{\mathrm{d}\alpha}{\alpha},\]
where
\[G(x,\nu,\alpha\sigma):=\sigma^{\frac{n-1}{4}}\widetilde{\psi}_{\nu,\alpha\sigma }(D)F(\cdot,\nu,\alpha\sigma)(x).\]
Taking the \(L^{q}(S^{n-1};\mathrm{d}\omega)\) norm on both sides, we have that

\[\|I(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\] \[\lesssim\int_{1/4}^{4}\left\|\fint_{|\nu-\omega|\leq 6\sqrt{\sigma}}G(x,\nu,\alpha\sigma)\mathrm{d}\nu\right\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\frac{\mathrm{d}\alpha}{\alpha}\] \[\lesssim\int_{1/4}^{4}\|\mathcal{M}_{HL}G(x,\omega,\alpha\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\frac{\mathrm{d}\alpha}{\alpha},\]
where \(\mathcal{M}_{HL}\) denotes the Hardy-Littlewood maximal function on \(S^{n-1}\) with values in the UMD lattice \(L^{p}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+},\frac{\mathrm{d}\sigma}{\sigma}))\).
By [24, Theorem 3], \(\mathcal{M}_{HL}\) is bounded on \(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\). Combining this fact with the boundedness of the vector-valued anisotropic maximal functions on \(L^{p}(\mathbb{R}^{n};L^{2}(\mathbb{R}_{+};\frac{d\sigma}{\sigma}))\) as before, we thus have that
\[\|I(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})} \lesssim\int_{1/4}^{4}\|G(x,\omega,\alpha\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\frac{\mathrm{d}\alpha}{\alpha}\] \[=\int_{1/4}^{4}\|\sigma^{\frac{n-1}{4}}\widetilde{\psi}_{\omega,\alpha\sigma}(D)F(\cdot,\omega,\alpha\sigma)(x)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\frac{\mathrm{d}\alpha}{\alpha}\] \[\lesssim\int_{1/4}^{4}\|F(x,\omega,\alpha\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\frac{\mathrm{d}\alpha}{\alpha}\] \[\lesssim\|F\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}.\]
We now turn to \(II\), and first notice that, by Lemma 3.2, the support properties of \(\rho\) and \(\widetilde{\psi}_{\nu,\tau}\) are such that
\[II(x,\omega,\sigma)=\mathbf{1}_{[8,8e]}(\sigma)\rho(D)\int_{1/2}^{8}\int_{S^{n -1}}\widetilde{\psi}_{\nu,\tau}(D)F(\cdot,\nu,\tau)(x)\mathrm{d}\nu\frac{ \mathrm{d}\tau}{\tau}.\]
Using the fact that \(II\) is constant in \(\omega\), and only depends on \(\sigma\) via \(\mathbf{1}_{[8,8e]}(\sigma)\), we thus have that
\[\|II(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{* }_{+}\mathbb{R}^{n})} \lesssim\|\int_{1/2}^{8}\int_{S^{n-1}}\widetilde{\psi}_{\nu,\tau} (D)F(\cdot,\nu,\tau)\mathrm{d}\nu\frac{\mathrm{d}\tau}{\tau}\|_{L^{p}(\mathbb{R }^{n})},\] \[\lesssim\int_{S^{n-1}}\|\int_{1/2}^{8}\widetilde{\psi}_{\nu,\tau} (D)F(\cdot,\nu,\tau)\frac{\mathrm{d}\tau}{\tau}\|_{L^{p}(\mathbb{R}^{n})} \mathrm{d}\nu,\] \[\lesssim\int_{S^{n-1}}\|\tau^{\frac{n-1}{4}}\widetilde{\psi}_{ \nu,\tau}(D)F(\cdot,\nu,\tau)(x)\|_{L^{p}_{x}(\mathbb{R}^{n};L^{2}(\mathbb{R}_ {+};\frac{\mathrm{d}\tau}{\tau}))}\mathrm{d}\nu.\]
Using the kernel estimates (3.4) and the vector-valued boundedness of \(M_{\nu}\) as before, we then have that
\[\|II(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^ {*}_{+}\mathbb{R}^{n})} \lesssim\int_{S^{n-1}}\|M_{\nu}F(\cdot,\nu,\tau)(x)\|_{L^{p}_{x} (\mathbb{R}^{n};L^{2}(\mathbb{R}_{+};\frac{\mathrm{d}\tau}{\tau}))}\mathrm{d}\nu,\] \[\lesssim\int_{S^{n-1}}\|F(x,\nu,\tau)\|_{L^{p}_{x}(\mathbb{R}^{n} ;L^{2}(\mathbb{R}_{+};\frac{\mathrm{d}\tau}{\tau}))}\mathrm{d}\nu\] \[\lesssim\|F\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+} \mathbb{R}^{n})}.\]
Similarly, the support properties of \(\widetilde{\rho}\) and \(\psi_{\omega,\sigma}\), together with Minkowski's inequality, are such that

\[\|III(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\] \[\lesssim\int_{S^{n-1}}\Big{\|}\mathbf{1}_{[1/2,8)}(\sigma)\sigma^{\frac{n-1}{4}}\psi_{\omega,\sigma}(D)\widetilde{\rho}(D)\int_{8}^{8e}F(\cdot,\nu,\tau)(x)\frac{\mathrm{d}\tau}{\tau}\Big{\|}_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\mathrm{d}\nu.\]
Using the kernel estimates (3.4) and the vector-valued boundedness of \(M_{\omega}\) again, we thus have that
\[\|III(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})} \lesssim\int_{S^{n-1}}\|\widetilde{\rho}(D)\int_{8}^{8e}F(\cdot,\nu,\tau)(x)\frac{\mathrm{d}\tau}{\tau}\|_{L^{p}(\mathbb{R}^{n})}\mathrm{d}\nu\] \[\leq\|\widetilde{\rho}(D)\int_{8}^{8e}F(\cdot,\nu,\tau)(x)\frac{\mathrm{d}\tau}{\tau}\|_{L^{q}_{\nu}(S^{n-1};L^{p}_{x}(\mathbb{R}^{n}))}.\]
Since \(\widetilde{\rho}(D)\) is bounded on \(L^{p}(\mathbb{R}^{n})\), we conclude that
\[\|III(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})} \lesssim\Big{\|}\int_{8}^{8e}F(\cdot,\nu,\tau)(x)\frac{\mathrm{d}\tau}{\tau}\Big{\|}_{L^{q}_{\nu}(S^{n-1};L^{p}_{x}(\mathbb{R}^{n}))}\] \[\lesssim\|F\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})},\]
using the Cauchy-Schwarz inequality in the last estimate. We finally turn to \(IV\). Using the fact that this term is constant in \(\omega\), and only depends on \(\sigma\) via \(\mathbf{1}_{[8,8e]}(\sigma)\), we have that
\[\|IV(x,\omega,\sigma)\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^ {*}_{+}\mathbb{R}^{n})} \lesssim\int_{S^{n-1}}\|\rho(D)\widetilde{\rho}(D)\int_{8}^{8e}F( \cdot,\nu,\tau)\frac{\mathrm{d}\tau}{\tau}\|_{L^{p}(\mathbb{R}^{n})}\mathrm{d}\nu,\] \[\lesssim\int_{S^{n-1}}\|\int_{8}^{8e}F(\cdot,\nu,\tau)\frac{ \mathrm{d}\tau}{\tau}\|_{L^{p}(\mathbb{R}^{n})}\mathrm{d}\nu,\]
since \(\widetilde{\rho}(D)\) and \(\rho(D)\) are bounded on \(L^{p}(\mathbb{R}^{n})\). Applying Hölder's inequality, we thus have \(\|IV\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\lesssim\|F\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\), which concludes the proof.
## 4. Function spaces for decoupling
In this section we first define the function spaces for decoupling. We then derive some of their basic properties, including interpolation and duality theorems.
### Definition
For simplicity of notation, we write \(\mathcal{H}^{p}(\mathbb{R}^{n}):=L^{p}(\mathbb{R}^{n})\) for \(p\in(1,\infty)\). Also, \(\mathcal{H}^{1}(\mathbb{R}^{n})\) is the local real Hardy space, with norm
\[\|f\|_{\mathcal{H}^{1}(\mathbb{R}^{n})}:=\|\rho(D)f\|_{L^{1}(\mathbb{R}^{n})}+ \|(1-\rho)(D)f\|_{H^{1}(\mathbb{R}^{n})}\]
for \(f\in\mathcal{H}^{1}(\mathbb{R}^{n})\). Here \(\rho\in C^{\infty}_{c}(\mathbb{R}^{n})\) is as in (3.2), although the specific choice of low-frequency cut-off is irrelevant. Moreover, \(\mathcal{H}^{\infty}(\mathbb{R}^{n}):=\operatorname{bmo}(\mathbb{R}^{n})\) is the dual of \(\mathcal{H}^{1}(\mathbb{R}^{n})\), and we write \(\mathcal{H}^{s,p}(\mathbb{R}^{n}):=\langle D\rangle^{-s}\mathcal{H}^{p}( \mathbb{R}^{n})\) for all \(p\in[1,\infty]\) and \(s\in\mathbb{R}\).
Recall that \((\varphi_{\omega})_{\omega\in S^{n-1}}\subseteq C^{\infty}(\mathbb{R}^{n})\) is a fixed collection of parabolic frequency localizations, introduced in Section 3.1.
**Definition 4.1**.: Let \(p\in[1,\infty]\), \(q\in[1,\infty)\) and \(s\in\mathbb{R}\). Then \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\) consists of those \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) such that \(\rho(D)f\in L^{p}(\mathbb{R}^{n})\), \(\varphi_{\omega}(D)f\in\mathcal{H}^{s,p}(\mathbb{R}^{n})\) for almost all \(\omega\in S^{n-1}\), and \((\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|^{q}_{\mathcal{H}^{s,p}(\mathbb{R}^{n} )}\mathrm{d}\omega)^{1/q}<\infty\), endowed with the norm
\[\|f\|_{\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})}:=\|\rho(D)f\|_{L^{p}( \mathbb{R}^{n})}+\Big{(}\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|^{q}_{\mathcal{H }^{s,p}(\mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/q}. \tag{4.1}\]
It is straightforward to see that \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})=\langle D\rangle^{-s}\mathcal{H }^{p,q;0}_{\rm dec}(\mathbb{R}^{n})\) for all \(p\in[1,\infty]\), \(q\in[1,\infty)\) and \(s\in\mathbb{R}\). Note also that one may replace \(\mathcal{H}^{s,p}(\mathbb{R}^{n})\) by \(\langle D\rangle^{-s}H^{p}(\mathbb{R}^{n})\) in Definition 4.1, since each \(\varphi_{\omega}\) vanishes near zero. Then Proposition 2.6 shows that in fact
\[\|f\|_{\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})}\eqsim\|\rho(D)f\|_{L^{p}(\mathbb{R}^{n})}+\Big{(}\int_{S^{n-1}}\|\langle D\rangle^{s}\varphi_{\omega}(D)f\|^{q}_{H^{p}_{\omega}(\mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/q}\]
for all \(p,q\in[1,\infty)\), \(s\in\mathbb{R}\) and \(f\in\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\).
**Remark 4.2**.: For all \(p\in[1,\infty)\) and \(s\in\mathbb{R}\), the space \(\mathcal{H}^{p,p;s}_{\rm dec}(\mathbb{R}^{n})\) coincides with \(\mathcal{H}^{s,p}_{FIO}(\mathbb{R}^{n})=\langle D\rangle^{-s}\mathcal{H}^{p} _{FIO}(\mathbb{R}^{n})\), as defined in [13, 14]. This follows from equivalent characterizations of \(\mathcal{H}^{p}_{FIO}(\mathbb{R}^{n})\) proved in [9, 21]. These characterizations involve norms as in (4.1).
### Basic properties
To derive some of the basic properties of the function spaces for decoupling, it will be convenient to characterize them using the wave packet transform \(W\) from (3.8), and the spaces \(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\) from (3.5).
**Lemma 4.3**.: _Let \(p,q\in[1,\infty)\). Then an \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) satisfies \(f\in\mathcal{H}^{p,q;0}_{\rm dec}(\mathbb{R}^{n})\) if and only if \(Wf\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), in which case_
\[\tfrac{1}{2}\|f\|_{\mathcal{H}^{p,q;0}_{\rm dec}(\mathbb{R}^{n})}\leq\|Wf\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\leq\|f\|_{\mathcal{H}^{p,q;0}_{\rm dec}(\mathbb{R}^{n})}.\]
_In particular, \(\mathcal{H}^{2,2;s}_{\rm dec}(\mathbb{R}^{n})=W^{s,2}(\mathbb{R}^{n})\) for all \(s\in\mathbb{R}\), with equivalence of norms._
Proof.: The first statement follows from the Littlewood-Paley characterization of \(\mathcal{H}^{p}(\mathbb{R}^{n})\), the support properties of \(\Psi\) and the \(\varphi_{\omega}\), and the triangle inequality. Then the second statement follows from Proposition 3.3 (1).
We can now combine this lemma with Theorem 3.4 to derive various properties of the function spaces for decoupling.
For the following proposition, recall the definition of the transform \(V\) from (3.9).
**Proposition 4.4**.: _Let \(p,q\in(1,\infty)\). Then \(W:\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\to L^{q}_{\omega}L^{p}_{x}L^ {2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\) is an isomorphism onto a complemented subspace, \(V:L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\to\mathcal{H}^ {p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) is bounded, and_
\[V:L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})/\ker(V)\to \mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n}) \tag{4.2}\]
_is an isomorphism. In particular, \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) is a Banach space for all \(s\in\mathbb{R}\)._
Proof.: The first statement follows from Lemma 4.3, Theorem 3.4 and Proposition 3.3 (5), since \(WV\) is a bounded projection on \(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\) with image \(W\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\). Next, note that \(VF\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) for each \(F\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), by (3.7) and Proposition 3.3 (3). Hence the second statement also follows from Lemma 4.3 and Theorem 3.4. For the same reasons, \(V:L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\to\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) is surjective, which implies (4.2). The final statement in turn follows from (4.2), or alternatively because \(W:\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\to W\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) is an isomorphism onto a complemented subspace.
Next, we show that the definition of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) is independent of the choice of parabolic frequency localizations. More precisely, let \((\widetilde{\varphi}_{\omega})_{\omega\in S^{n-1}}\subseteq C^{\infty}( \mathbb{R}^{n})\) be a family with the same properties (1), (2) and (3) that \((\varphi_{\omega})_{\omega\in S^{n-1}}\) has, from Section 3.1, and let \(\widetilde{\rho}\) be the associated low-frequency cut-off, from (3.10).
**Proposition 4.5**.: _Let \(p,q\in(1,\infty)\) and \(s\in\mathbb{R}\). Then there exists a \(C>0\) such that the following holds for all \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\). One has \(f\in\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) if and only if \(\widetilde{\rho}(D)f\in L^{p}(\mathbb{R}^{n})\), \(\widetilde{\varphi}_{\omega}(D)f\in\mathcal{H}^{s,p}(\mathbb{R}^{n})\) for almost all \(\omega\in S^{n-1}\), and \((\int_{S^{n-1}}\|\widetilde{\varphi}_{\omega}(D)f\|^{q}_{\mathcal{H}^{s,p}( \mathbb{R}^{n})}\mathrm{d}\omega)^{1/q}<\infty\), in which case_
\[\frac{1}{C}\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})}\leq\|\widetilde{\rho}(D)f\|_{L^{p}(\mathbb{R}^{n})}+\Bigl{(}\int_{S^{n-1}}\|\widetilde{\varphi}_{\omega}(D)f\|^{q}_{\mathcal{H}^{s,p}(\mathbb{R}^{n})}\mathrm{d}\omega\Bigr{)}^{\frac{1}{q}}\leq C\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})}.\]
Proof.: It suffices to consider the case where \(s=0\). We will use the operators \(\widetilde{W}\) and \(\widetilde{V}\) from (3.11) and (3.12). More precisely, one has \(\widetilde{V}\widetilde{W}f=f\) for all \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\), as follows from Proposition 3.3 (5) with \(W\) and \(V\) replaced by \(\widetilde{W}\) and \(\widetilde{V}\). Hence Proposition 4.4 and Theorem 3.4 yield
\[\|f\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})} \eqsim\|Wf\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}=\|W\widetilde{V}\widetilde{W}f\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\] \[\lesssim\|\widetilde{W}f\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\]
for all \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\) for which the final quantity is finite. By Proposition 4.4 with \(W\) replaced by \(\widetilde{W}\), this proves one of the required inequalities. The other inequality follows by symmetry.
Finally, we consider the Schwartz functions as a subset of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\).
**Proposition 4.6**.: _Let \(p,q\in(1,\infty)\) and \(s\in\mathbb{R}\). Then \(\mathcal{S}(\mathbb{R}^{n})\) is a dense subspace of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\)._
Proof.: It suffices to consider the case where \(s=0\). By Proposition 3.3 (2) and (3.6), one has \(Wf\in\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\subseteq L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\) for all \(f\in\mathcal{S}(\mathbb{R}^{n})\). By Lemma 4.3, this in turn implies that \(f\in\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\).
To see that \(\mathcal{S}(\mathbb{R}^{n})\) is in fact dense in \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\), let \(f\in\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) be given. Then \(Wf\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), by Lemma 4.3. Since \(\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\) is dense in \(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), there exists a sequence \((F_{j})_{j=0}^{\infty}\subseteq\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\) such that \(F_{j}\to Wf\) in
\(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), as \(j\to\infty\). Then, by Propositions 3.3 and 4.4, \((VF_{j})_{j=0}^{\infty}\subseteq\mathcal{S}(\mathbb{R}^{n})\) and
\[\|VF_{j}-f\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})}=\|VF_{j}-VWf\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})}\lesssim\|F_{j}-Wf\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\to 0\]
as \(j\to\infty\).
### Interpolation and duality
In this subsection we prove interpolation and duality properties of the function spaces for decoupling.
First, we prove that the function spaces for decoupling form a complex interpolation scale.
**Proposition 4.7**.: _Let \(p_{0},p_{1},p,q_{0},q_{1},q\in(1,\infty)\), \(s_{0},s_{1},s\in\mathbb{R}\) and \(\theta\in[0,1]\) be such that \(\frac{1}{p}=\frac{1-\theta}{p_{0}}+\frac{\theta}{p_{1}}\), \(\frac{1}{q}=\frac{1-\theta}{q_{0}}+\frac{\theta}{q_{1}}\) and \(s=(1-\theta)s_{0}+\theta s_{1}\). Then_
\[[\mathcal{H}^{p_{0},q_{0};s_{0}}_{\mathrm{dec}}(\mathbb{R}^{n}),\mathcal{H}^{p_{1},q_{1};s_{1}}_{\mathrm{dec}}(\mathbb{R}^{n})]_{\theta}=\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n}),\]
_with equivalent norms._
Proof.: It suffices to consider the case where \(s_{0}=s_{1}=s=0\). It is a basic fact about interpolation of vector-valued function spaces (see [17, Theorem 2.2.6]) that
\[[L^{q_{0}}_{\omega}L^{p_{0}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}),L^{q_ {1}}_{\omega}L^{p_{1}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})]_{\theta}=L^ {q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}).\]
Hence one can combine Theorem 3.4 with a result about interpolation of complemented subspaces (see [31, Theorem 1.17.1.1]) to conclude that
\[[WVL^{q_{0}}_{\omega}L^{p_{0}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}),WVL^ {q_{1}}_{\omega}L^{p_{1}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})]_{\theta }=WVL^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}).\]
Moreover, by Proposition 4.4,
\[W:\mathcal{H}^{\tilde{p},\tilde{q};0}_{\mathrm{dec}}(\mathbb{R}^{n})\to WVL^ {\tilde{q}}_{\omega}L^{\tilde{p}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\]
is an isomorphism for all \(\tilde{p},\tilde{q}\in(1,\infty)\), which proves the required statement.
Next, we show that the function spaces for decoupling have natural duality properties.
**Proposition 4.8**.: _Let \(p,q\in(1,\infty)\) and \(s\in\mathbb{R}\). Then_
\[(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n}))^{*}=\mathcal{H}^{p^{ \prime},q^{\prime};-s}_{\mathrm{dec}}(\mathbb{R}^{n})\]
_with equivalent norms, where the duality pairing is given by_
\[\int_{S^{*}_{+}\mathbb{R}^{n}}Wf(x,\omega,\sigma)Wg(x,\omega,\sigma)\mathrm{d }x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}\]
_for \(f\in\mathcal{H}^{p^{\prime},q^{\prime};-s}_{\mathrm{dec}}(\mathbb{R}^{n})\) and \(g\in\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\), and by \(\langle f,g\rangle_{\mathbb{R}^{n}}\) if \(g\in\mathcal{S}(\mathbb{R}^{n})\)._
Proof.: Since \(\langle D\rangle^{s}\) commutes with \(W\), it suffices to consider the case where \(s=0\).
First let \(f\in\mathcal{H}^{p^{\prime},q^{\prime};0}_{\mathrm{dec}}(\mathbb{R}^{n})\). Then, by Proposition 4.4, \(Wf\in L^{q^{\prime}}_{\omega}L^{p^{\prime}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\), and \(Wg\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\) for all \(g\in\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\). Hence Hölder's inequality yields
\[\Big{|}\int_{S^{*}_{+}\mathbb{R}^{n}}Wf(x,\omega,\sigma)Wg(x,\omega,\sigma)\mathrm{d}x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}\Big{|}\leq\|Wf\|_{L^{q^{\prime}}_{\omega}L^{p^{\prime}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\|Wg\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\eqsim\|f\|_{\mathcal{H}^{p^{\prime},q^{\prime};0}_{\mathrm{dec}}(\mathbb{R}^{n})}\|g\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})}.\]
Moreover, by Proposition 3.3, if \(g\in\mathcal{S}(\mathbb{R}^{n})\) then
\[\int_{S^{*}_{+}\mathbb{R}^{n}}Wf(x,\omega,\sigma)Wg(x,\omega,\sigma)\mathrm{d }x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}=\langle Wf,Wg\rangle_{S^{*}_{ +}\mathbb{R}^{n}}=\langle f,g\rangle_{\mathbb{R}^{n}}.\]
In particular, if \(\langle Wf,Wg\rangle_{S^{*}_{+}\mathbb{R}^{n}}=0\) for all \(g\in\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\), then \(f=0\). So \(\mathcal{H}^{p^{\prime},q^{\prime};0}_{\mathrm{dec}}(\mathbb{R}^{n})\subseteq( \mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n}))^{*}\).
Conversely, let \(l\in(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n}))^{*}\). Then \(l\circ V\in(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}))^{*}\), by Proposition 4.4. Moreover, by [17, Theorems 1.3.10 and 1.3.21],
\[(L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}))^{*}=L^{q^{ \prime}}_{\omega}L^{p^{\prime}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n}),\]
with the natural duality pairing. Hence there exists an \(F\in L^{q^{\prime}}_{\omega}L^{p^{\prime}}_{x}L^{2}_{\sigma}(S^{*}_{+} \mathbb{R}^{n})\) such that
\[l(VG)=\int_{S^{*}_{+}\mathbb{R}^{n}}F(x,\omega,\sigma)G(x,\omega,\sigma) \mathrm{d}x\mathrm{d}\omega\frac{\mathrm{d}\sigma}{\sigma}\]
for all \(G\in L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})\). Set \(f:=VF\). Then \(f\in\mathcal{H}^{p^{\prime},q^{\prime};0}_{\mathrm{dec}}(\mathbb{R}^{n})\), by Proposition 4.4. Also, Proposition 3.3 implies that
\[\langle f,g\rangle_{\mathbb{R}^{n}}=\langle VF,g\rangle_{\mathbb{R}^{n}}= \langle F,Wg\rangle_{S^{*}_{+}\mathbb{R}^{n}}=l(VWg)=l(g)\]
for all \(g\in\mathcal{S}(\mathbb{R}^{n})\). Finally, Propositions 3.3 and 4.4 now yield
\[|\langle Wf,G\rangle_{S^{*}_{+}\mathbb{R}^{n}}|=|\langle f,VG\rangle_{\mathbb{R}^{n}}|=|\langle F,WVG\rangle_{S^{*}_{+}\mathbb{R}^{n}}|=|l(VWVG)|=|l(VG)|\leq\|l\|\|VG\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})}\lesssim\|l\|\|G\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\]
for all \(G\in\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\). This concludes the proof, since Lemma 4.3 implies that
\[\|f\|_{\mathcal{H}^{p^{\prime},q^{\prime};0}_{\mathrm{dec}}(\mathbb{R}^{n})}\eqsim\|Wf\|_{L^{q^{\prime}}_{\omega}L^{p^{\prime}}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\eqsim\sup|\langle Wf,G\rangle_{S^{*}_{+}\mathbb{R}^{n}}|,\]
where the supremum is over all \(G\in\mathcal{J}(S^{*}_{+}\mathbb{R}^{n})\) with \(\|G\|_{L^{q}_{\omega}L^{p}_{x}L^{2}_{\sigma}(S^{*}_{+}\mathbb{R}^{n})}\leq 1\).
## 5. Invariance
In this section we show that \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) is invariant under the Euclidean half-wave propagators, but that it is not invariant under general Fourier integral operators unless \(p=q\).
### Bounded operators
We first note that the function spaces for decoupling are invariant under the action of the Euclidean half-wave propagators.
**Theorem 5.1**.: _Let \(p,q\in(1,\infty)\) and \(s\in\mathbb{R}\). Then there exist \(C,N>0\) such that, for all \(t\in\mathbb{R}\), one has \(e^{it\sqrt{-\Delta}}\in\mathcal{L}(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{ R}^{n}))\), with_
\[\|e^{it\sqrt{-\Delta}}\|_{\mathcal{L}(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n}))}\leq C(1+|t|)^{N}.\]
Proof.: First note that it suffices to consider the case where \(s=0\). Moreover, since \(\rho^{\prime}(D)e^{it\sqrt{-\Delta}}\in\mathcal{L}(L^{p}(\mathbb{R}^{n}))\) for all \(t\in\mathbb{R}\) and \(\rho^{\prime}\in C^{\infty}_{c}(\mathbb{R}^{n})\) with \(\rho^{\prime}\equiv 1\) on \(\mathrm{supp}(\rho)\), it suffices to prove that
\[\Big{(}\int_{S^{n-1}}\|\varphi_{\omega}(D)e^{it\sqrt{-\Delta}}f\|^{q}_{L^{p}( \mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/q}\lesssim\Big{(}\int_{S^{n-1}}\| \varphi_{\omega}(D)f\|^{q}_{L^{p}(\mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/q}\]
for all \(f\in\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\). In turn, to this end it suffices to show that
\[\|\varphi_{\omega}(D)e^{it\sqrt{-\Delta}}f\|_{L^{p}(\mathbb{R}^{n})}\lesssim\| \varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})},\]
for an implicit constant independent of \(\omega\) and \(f\). But the latter statement follows from the arguments in [11, Section 9], which rely on writing
\[\|\varphi_{\omega}(D)e^{it\sqrt{-\Delta}}f\|_{L^{p}(\mathbb{R}^{n})}\]
\[\leq\|e^{it\omega\cdot D}\varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})}+\|e^{it\omega\cdot D}\varphi_{\omega}(D)(e^{it(\sqrt{-\Delta}-\omega\cdot D)}-1)f\|_{L^{p}(\mathbb{R}^{n})}\] \[=\|\varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})}+\|\varphi_{\omega}(D)(e^{it(\sqrt{-\Delta}-\omega\cdot D)}-1)f\|_{L^{p}(\mathbb{R}^{n})}\] \[=\|\varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})}+\|\chi_{\omega}(D)(e^{it(\sqrt{-\Delta}-\omega\cdot D)}-1)\varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})}\]
for a suitable \(\chi_{\omega}\in C^{\infty}(\mathbb{R}^{n})\) with the property that \(\chi_{\omega}\equiv 1\) on \(\operatorname{supp}(\varphi_{\omega})\); in the second line we used that \(e^{it\omega\cdot D}\) is a translation, hence an isometry on \(L^{p}(\mathbb{R}^{n})\). Then \(\chi_{\omega}(D)(e^{it(\sqrt{-\Delta}-\omega\cdot D)}-1)\) is an anisotropic Calderón-Zygmund operator, and an application of the Marcinkiewicz-Lizorkin multiplier theorem (see [34] for a version with values in a Banach space) yields the required conclusion.
### Unbounded operators
We recall from [13] (see also [20, Corollary 3.4] and Corollary 6.5 below) that \(\mathcal{H}^{p,p;s}_{\mathrm{dec}}(\mathbb{R}^{n})=\mathcal{H}^{s,p}_{FIO}( \mathbb{R}^{n})\) is invariant under any FIO of order zero as in Definition 2.7, if the symbol has compact support in the spatial variable. In this section we show that, by contrast, for all \(q\in(1,\infty)\setminus\{p\}\), the space \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) is in general not invariant under such Fourier integral operators.
Let \(\phi:\mathbb{R}^{2}\to\mathbb{R}^{2}\) denote the diffeomorphism
\[\phi:(y_{1},y_{2})\mapsto(y_{2},(1+y_{2}^{2})y_{1}).\]
We denote the inverse map by \(\psi\). Clearly, we have
\[\psi:(x_{1},x_{2})\mapsto\big{(}\frac{x_{2}}{1+x_{1}^{2}},x_{1}\big{)}.\]
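As a quick check that \(\psi\) indeed inverts \(\phi\):

\[\psi(\phi(y_{1},y_{2}))=\psi\big{(}y_{2},(1+y_{2}^{2})y_{1}\big{)}=\Big{(}\frac{(1+y_{2}^{2})y_{1}}{1+y_{2}^{2}},\,y_{2}\Big{)}=(y_{1},y_{2}).\]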
We let \(T\) denote the operation of pullback by \(\psi\), followed by multiplication by a smooth cutoff function \(k(x)\) that is equal to \(1\) when \(|x|\leq 4\) and zero when \(|x|\geq 8\). That is,
\[Tf(x):=k(x)\cdot f(\psi(x))\]
for \(f\in\mathcal{S}(\mathbb{R}^{2})\) and \(x\in\mathbb{R}^{2}\). Thus, \(T\) is a properly supported FIO of order zero, associated to a canonical transformation, namely the transpose of \(d\psi\) (which maps \(T^{*}_{\psi(x)}\mathbb{R}^{2}\) to \(T^{*}_{x}\mathbb{R}^{2}\)) on \(T^{*}\mathbb{R}^{2}\). In fact, \(T\) is an operator as in Definition 2.7, with symbol \((2\pi)^{-2}k\) and phase function \(\Phi(x,\eta)=\psi(x)\cdot\eta\). As such, it is bounded on \(\mathcal{H}^{p,p;s}_{\mathrm{dec}}(\mathbb{R}^{2})=\mathcal{H}^{s,p}_{FIO}( \mathbb{R}^{2})\) for all \(p\in(1,\infty)\) and \(s\in\mathbb{R}\).
**Theorem 5.2**.: _Let \(p,q\in(1,\infty)\) be such that \(p\neq q\). Then \(T\) is not bounded on \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})\)._
**Remark 5.3**.: For \(s\in\mathbb{R}\), the operator \(T_{s}:=\langle D\rangle^{s}T\langle D\rangle^{-s}\) has similar properties as \(T\), and it is not bounded on \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{2})\). Moreover, it follows from the proof of Theorem 5.2 that one can use a similar construction to show, for all \(n\geq 2\), that \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) is not invariant under general FIOs of order zero.
Before getting into the technical details, we describe the idea. First, any Fourier integral operator \(F\), including \(T\), maps one wave packet, located at \((x,\omega)\in S^{*}\mathbb{R}^{n}\), to another (slightly distorted) wave packet, located at \(\chi(x,\omega)\) where \(\chi\) is the map on \(S^{*}\mathbb{R}^{n}\) induced by the (homogeneous) canonical relation of \(F\). Below, we choose \(u\) as a sum of many wave packets all pointing in the same direction \(\omega_{0}\). We shall see that the \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) norm sums the \(L^{p}(\mathbb{R}^{n})\) norm of each wave packet in, essentially, a 'little \(\ell^{p}\) way', thus the growth of the norm in \(N\), the number of wave packets, will be \(\sim N^{1/p}\). The image of these wave packets, however, will all have _different_ frequency directions \(\omega\). As a result, the \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{n})\) norm sums the \(L^{p}(\mathbb{R}^{n})\) norm of each wave packet in a 'little \(\ell^{q}\) way', thus the growth of the norm in \(N\) of the image of \(u\) will be \(\sim N^{1/q}\). If \(q<p\) then the norm of the image grows faster than
the norm of \(u\), demonstrating that the operator is not bounded. In the other case \(q>p\), we just have to consider the opposite situation, that is, we consider \(u\) that is a sum of many wave packets in different directions such that \(Tu\) is a sum of wave packets all pointing in the same direction.
Proof of Theorem 5.2.: We first consider the case where \(q<p\).
We choose a frequency scale \(R>0\), which will be large and, eventually, taken to infinity. We let \(\omega_{0}\) be the unit vector \((1,0)\in\mathbb{R}^{2}\) and let \(u_{0}\) be the wave packet
\[u_{0}(x):=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}e^{ix\cdot\xi}\varphi_{ \omega_{0}}(\xi)\chi\big{(}\frac{|\xi|}{R}\big{)}\,\mathrm{d}\xi\]
for \(x\in\mathbb{R}^{2}\), where \(\chi\in C_{c}^{\infty}(\mathbb{R})\) is supported on \([1/2,4]\) and identically \(1\) on \([1,2]\). Thus, \(\hat{u}_{0}\) is a symbol of order \(1/4\) and type \((1/2,0)\) supported in a second dyadic region where \(|\xi|\in[R,2R]\) and \(\hat{\xi}\) is within\({}^{1}\) \(CR^{-1/2}\) of \(\omega_{0}\).
Footnote 1: Throughout this subsection, we will use \(C\) and \(c\) for sufficiently large, resp. sufficiently small, constants, the value of which may change from line to line.
We now choose an integer \(N\) comparable to \(R^{\alpha}\) for some fixed \(0<\alpha<1/2\). We consider a function that consists of a sum of \(N\) translates of \(u_{0}\), along the line \(x_{1}=1\). We define
\[y_{j,N}:=\big{(}1,\frac{j}{N}\big{)}\quad(j=1,\dots,N).\]
Let
\[u_{j,N}(x)=u_{0}(x-y_{j,N})=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}e^{i(x-y_ {j,N})\cdot\xi}\varphi_{\omega_{0}}(\xi)\chi\big{(}\frac{|\xi|}{R}\big{)}\, \mathrm{d}\xi\]
for \(x\in\mathbb{R}^{2}\). We set
\[u:=\sum_{j=1}^{N}u_{j,N}.\]
In preparation for computing the \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})\) norm of \(u\), we apply \(\varphi_{\omega}(D)\) to \(u\). As this operator is a Fourier multiplier, this gives us
\[\varphi_{\omega}(D)u(x)=\frac{1}{(2\pi)^{2}}\sum_{j=1}^{N}\int_{\mathbb{R}^{2 }}e^{i(x-y_{j,N})\cdot\xi}\varphi_{\omega_{0}}(\xi)\varphi_{\omega}(\xi)\chi \big{(}\frac{|\xi|}{R}\big{)}\,\mathrm{d}\xi.\]
The product \(\varphi_{\omega_{0}}(\xi)\varphi_{\omega}(\xi)\) vanishes for \(|\omega-\omega_{0}|\geq CR^{-1/2}\), and is a symbol of order \(1/2\) and type \((1/2,1)\) (that is, type \(1\) in the radial direction and \(1/2\) in orthogonal directions) with support contained in \(\mathrm{supp}(\varphi_{\omega_{0}})\) for \(|\omega-\omega_{0}|\leq CR^{-1/2}\). We apply a standard integration by parts argument, using the fact that the function \(e^{i(x-y_{j,N})\cdot\xi}\) is invariant under the differential operators
\[\frac{1+\langle\xi\rangle^{2}D_{\xi_{1}}^{2}}{1+\langle\xi\rangle^{2}(x_{1}-1 )^{2}},\quad\frac{1+\langle\xi\rangle D_{\xi_{2}}^{2}}{1+\langle\xi\rangle(x _{2}-j/N)^{2}},\]
to obtain a uniform upper bound:
\[|\varphi_{\omega}(D)u_{j,N}(x)|\leq C_{M}R^{2}\big{(}1+R^{2}(x_{1}-1)^{2}+R(x_{2}-j/N)^{2}\big{)}^{-M} \tag{5.1}\]
for every integer \(M\geq 0\), \(|\omega-\omega_{0}|\leq CR^{-1/2}\) and \(x\in\mathbb{R}^{2}\), where \(C_{M}\geq 0\) is uniform for \(\omega\) in this range. Essentially this is the same estimate as in (3.4), with \(\sigma=R^{-1}\) and using (2.3), except that the symbol here is of order \(1/4\) greater than that in (3.4). By (5.1), each \(\varphi_{\omega}(D)u_{j,N}\) decays rapidly at scale \(R^{-\beta}\) for any \(\beta<1/2\), so its \(L^{p}(\mathbb{R}^{2})\) norm equals, up to an \(O(R^{-\infty})\) error, the \(L^{p}\) norm of its restriction to a ball \(B_{j,N}\subseteq\mathbb{R}^{2}\) of radius \(R^{-\beta}\) centered at \(y_{j,N}\), where \(\beta\) is chosen so that \(\alpha<\beta<1/2\). As these balls are all disjoint (using that \(\beta>\alpha\) and the fact that the spacing between their centers is \(\eqsim 1/N\eqsim R^{-\alpha}\)), it follows that \(\|\varphi_{\omega}(D)u\|_{L^{p}(\mathbb{R}^{2})}^{p}\) is equal, up to an \(O(R^{-\infty})\) error, to the sum
\[\sum_{j=1}^{N}\|\varphi_{\omega}(D)u_{j,N}\|_{L^{p}(B_{j,N})}^{p}\leq CR^{2p}R ^{-3/2}N.\]
We therefore obtain an upper bound for the \(\mathcal{H}_{\mathrm{dec}}^{p,q;0}(\mathbb{R}^{2})\) norm of \(u\), by using this bound in the region \(|\omega-\omega_{0}|\leq CR^{-1/2}\) and noting that the integrand is zero for all other \(\omega\). Using the definition of the norm of \(\mathcal{H}_{\mathrm{dec}}^{p,q;0}(\mathbb{R}^{2})\) in Definition 4.1, we obtain
\[\|u\|_{\mathcal{H}_{\mathrm{dec}}^{p,q;0}(\mathbb{R}^{2})}\leq CR^{2}\cdot R^ {-3/(2p)}\cdot N^{1/p}\cdot R^{-1/(2q)}, \tag{5.2}\]
where \(C\) is independent of \(u\) and \(R\).
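In more detail, the factor \(R^{-1/(2q)}\) in (5.2) arises because the \(\omega\)-integrand is supported in a cap of measure \(\eqsim R^{-1/2}\); the low-frequency term \(\|\rho(D)u\|_{L^{p}(\mathbb{R}^{2})}\) vanishes for \(R\) large, since \(\hat{u}\) is supported where \(|\xi|\eqsim R\). Thus

\[\|u\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})}^{q}\lesssim\int_{|\omega-\omega_{0}|\leq CR^{-1/2}}\big{(}R^{2}R^{-3/(2p)}N^{1/p}\big{)}^{q}\mathrm{d}\omega\eqsim R^{-\frac{1}{2}}\big{(}R^{2}R^{-3/(2p)}N^{1/p}\big{)}^{q}.\]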
We now consider \(v:=Tu\), which is
\[v(x)=k(x)u(\psi(x))=\frac{1}{(2\pi)^{2}}k(x)\sum_{j=1}^{N}\int_{\mathbb{R}^{2}}e^{i(\psi(x)-y_{j,N})\cdot\xi}\varphi_{\omega_{0}}(\xi)\chi\big{(}\frac{|\xi|}{R}\big{)}\,\mathrm{d}\xi\]
for \(x\in\mathbb{R}^{2}\).
We want to apply the Fourier multiplier \(\varphi_{\omega}(D)\) to \(v\), so we take the Fourier transform of \(v\). This is
\[\hat{v}(\xi)=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}e^{-ix\cdot\xi}k(x)\sum_{j=1}^{N}\int_{\mathbb{R}^{2}}e^{i(\psi(x)-y_{j,N})\cdot\eta}\varphi_{\omega_{0}}(\eta)\chi\big{(}\frac{|\eta|}{R}\big{)}\mathrm{d}\eta\,\mathrm{d}x.\]
Let \(v_{j,N}\) be the \(j\)th term in this sum. We scale the frequencies \(\xi\) and \(\eta\) by a factor of \(R\) and write it in the form
\[\hat{v}_{j,N}(R\xi)=\frac{R^{2}}{(2\pi)^{2}}\iint_{\mathbb{R}^{4}}e^{iR\big{(}-x\cdot\xi+(\psi(x)-y_{j,N})\cdot\eta\big{)}}k(x)\varphi_{\omega_{0}}(R\eta)\chi(|\eta|)\mathrm{d}\eta\,\mathrm{d}x. \tag{5.3}\]
For each fixed \(\xi\), the phase function \(f(x,\eta):=-x\cdot\xi+(\psi(x)-y_{j,N})\cdot\eta\) has a unique critical point at \(x=x_{j,N}:=\psi^{-1}(y_{j,N})\) and \(\xi=d_{x}\psi^{t}(x_{j,N})\eta\), which uniquely determines \(\eta\) since \(\psi\) is a diffeomorphism and therefore \(d_{x}\psi\) is an invertible linear map. It is easy to check that the critical point is nondegenerate, with Hessian at the critical point given in block form by
\[\begin{pmatrix}d_{xx}^{2}(\psi(x)\cdot\eta_{0})\big{|}_{x=x_{j,N}}&d_{x}\psi^{t}\\ d_{x}\psi&0\end{pmatrix}. \tag{5.4}\]
We note the zero entries in all the \((\eta_{i},\eta_{j})\) positions, a crucial feature that will be exploited in the proof of Lemma 5.4.
**Lemma 5.4**.: _The stationary phase expansion is valid for the integral (5.3) as \(R\to\infty\)._
Proof.: The proof is a bit subtle, as the amplitude \(u(x,\eta):=k(x)\varphi_{\omega_{0}}(R\eta)\chi(|\eta|)\) also depends on the parameter \(R\), so we have to show that the remainder term in the asymptotic expansion vanishes to higher and higher powers of \(R\). This requires careful estimation of derivatives of the amplitude \(u\), tracking the dependence on \(R\).
We begin by noting that \(u\) satisfies estimates
\[|D_{x}^{\alpha}D_{\eta}^{\beta}u|\leq C_{\alpha,\beta}R^{1/4+|\beta|/2}. \tag{5.5}\]
We denote the critical point by \((x_{0},\eta_{0})\), and introduce an \(R\)-dependent cutoff function \(\tilde{\chi}(x,\eta)\) that is \(1\) in a neighbourhood of \((x_{0},\eta_{0})\) of diameter \(R^{-b}\) and is supported in a slightly larger neighbourhood of comparable diameter, with derivatives estimated by corresponding powers of \(R^{b}\). As we shall see, we need to take \(1/4<b<1/2\).
First we consider \(u_{1}=u(1-\tilde{\chi})\), which therefore vanishes in a \(R^{-b}\) neighbourhood of the critical point. Since the Hessian is nondegenerate, it follows that the derivative of the phase function has size \(|\nabla f|\geq cR^{-b}\) on the support of \(u_{1}\). Then Theorem 7.7.1 of [15] shows that, for any integer \(k\), we can estimate the integral pointwise by
\[CR^{-k}\sum_{|\alpha|\leq k}|D^{\alpha}u_{1}||\nabla f|^{|\alpha|-2k}.\]
Using the fact that the derivatives \(|D^{\alpha}u_{1}|\) are bounded by \(R^{|\alpha|/2}\) (using \(b<1/2\)) this is estimated by
\[CR^{-k}\sum_{|\alpha|\leq k}R^{|\alpha|/2}R^{-b(|\alpha|-2k)}\leq R^{k(b-1/2)},\]
where the exponent tends to \(-\infty\) as \(k\to\infty\). Thus, \(u_{1}\) contributes nothing to the expansion as \(R\to\infty\).
Next we consider \(u_{2}=u\tilde{\chi}\). In this case, we write \(f(x,\eta)=f(x_{0},\eta_{0})+Q(x-x_{0},\eta-\eta_{0})/2+g(x,\eta)\), where \(Q\) is the quadratic part of the Taylor series and \(g\) vanishes to third order at \((x_{0},\eta_{0})\). We now view the \(g\) term as belonging to the amplitude rather than the exponential, so we consider the integral
\[e^{-iRx_{0}\cdot\xi}\frac{R^{2}}{(2\pi)^{2}}\iint_{\mathbb{R}^{4}}e^{iRQ(x-x_{0},\eta-\eta_{0})/2}\big{(}k(x)e^{iRg(x,\eta)}\varphi_{\omega_{0}}(R\eta)\chi(|\eta|)\tilde{\chi}(x,\eta)\big{)}\mathrm{d}\eta\mathrm{d}x. \tag{5.6}\]
Let \(u_{3}\) be the amplitude in this expression. We use the identity (see [15, proof of Lemma 7.7.3])
\[\iint_{\mathbb{R}^{4}}e^{iRQ(x-x_{0},\eta-\eta_{0})/2}u_{3}(x,\eta)\,\mathrm{d}\eta\mathrm{d}x=c_{Q}R^{-2}\big{(}e^{iQ(D_{x},D_{\eta})/2R}u_{3}\big{)}(x_{0},\eta_{0}),\] where \(c_{Q}\) is a constant depending only on \(Q\),
and we use estimate (7.6.7) of [15, Section 7.6]: for \(s\) an integer larger than \(n/2=1\),
\[\begin{split}&\sup\Big{|}e^{iQ(D_{x},D_{\eta})/2R}u_{3}(x_{0}, \eta_{0})-\sum_{j=0}^{k-1}\frac{(iQ(D_{x},D_{\eta})/2R)^{j}}{j!}u_{3}(x_{0}, \eta_{0})\Big{|}\\ &\leq\frac{C(2R)^{-k}}{k!}\Big{(}\sum_{|\alpha|\leq s}\|Q(D_{x},D _{\eta})^{k}D^{\alpha}u_{3}\|_{L^{2}}^{2}\Big{)}^{1/2}.\end{split} \tag{5.7}\]
We need to estimate the powers of \(R\) in the sum of squared norms on the RHS. We use the crucial fact that the second order differential operator \(Q(D_{x},D_{\eta})\) only has _first_ derivatives in the \(\eta\) direction, according to (5.4). So, each application of \(Q(D_{x},D_{\eta})\) involves either two \(x\)-derivatives or one \(x\)-derivative and one \(\eta\)-derivative. We notice that \(x\)-derivatives are harmless (in terms of accumulating powers of \(R\)) when they hit the \(k\) factor, while if they hit the \(\tilde{\chi}\) factor we incur \(R^{b}\) and if they hit the \(e^{iRg}\) factor they incur \(R|\nabla g|\). Since \(g\) vanishes to third order at \((x_{0},\eta_{0})\) and since \(u_{3}\) is supported within distance \(R^{-b}\) of \((x_{0},\eta_{0})\), we estimate \(R|\nabla g|\) by \(CR^{1-2b}\) (since \(\nabla g\) still has second order vanishing at the critical point). On the other hand, an \(\eta\)-derivative incurs either \(R^{1/2}\) if it hits the \(\varphi_{\omega_{0}}(R\eta)\) factor or \(CR^{1-2b}\) if it hits the \(e^{iRg}\) factor. Taking into account \(b>1/4\), we incur at worst \(R^{1/2}\) from the \(\eta\)-derivative.
Thus, each time we apply \(Q(D_{x},D_{\eta})\) to \(u_{3}\) we incur at worst \(R^{1-2b}R^{1/2}\). This is an exponent strictly less than \(1\) since \(b\) has been chosen strictly greater than \(1/4\). Since we gain a power \(R^{-1}\) from each factor of \(Q(D_{x},D_{\eta})\), this shows that the RHS of (5.7) is estimated by a power of \(R\) with exponent tending to \(-\infty\) linearly with \(k\). We conclude that the 'formal' stationary phase expansion, which can be taken to be the expansion appearing on the LHS of (5.7), is a valid asymptotic expansion as \(R\to\infty\).
_Continuation of the proof of Theorem 5.2._ Lemma 5.4 shows that we can restrict attention to a finite number of terms of the expansion. We first consider the leading term in the expansion. The contribution to the Fourier transform of \(\varphi_{\omega}(D)v\) of the leading term is
\[\sum_{j=1}^{N}\varphi_{\omega}(\xi)k(x_{j,N})\varphi_{\omega_{0}}\big{(}(d_{x}\psi^{t}(x_{j,N}))^{-1}\xi\big{)}\chi\big{(}|(d_{x}\psi^{t}(x_{j,N}))^{-1}\xi|/R\big{)}. \tag{5.8}\]
Its support is contained in the union of sets, over \(j=1\dots N\),
\[\{\xi\in\operatorname{supp}\!\varphi_{\omega}\cap d\psi^{t}(x_{j,N})( \operatorname{supp}\,\varphi_{\omega_{0}})\}.\]
We now consider this function of \(\xi\), assuming that \(\omega\) lies in an \(O(R^{-1/2})\) neighbourhood of the direction \(\omega_{j,N}\) of \(d\psi^{t}(x_{j,N})\omega_{0}\), for \(j=1,\dots,N\). It is easy to compute that
\[\omega_{j,N}:=d\psi^{t}(x_{j,N})\omega_{0}=d\psi^{t}(x_{j,N})\begin{pmatrix}1 \\ 0\end{pmatrix}=\frac{1}{1+(j/N)^{2}}\begin{pmatrix}-2j/N\\ 1\end{pmatrix}.\]
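For completeness: since \(x_{j,N}=\psi^{-1}(y_{j,N})=\phi(1,j/N)=(j/N,1+(j/N)^{2})\), this follows from

\[d\psi(x)=\begin{pmatrix}-\frac{2x_{1}x_{2}}{(1+x_{1}^{2})^{2}}&\frac{1}{1+x_{1}^{2}}\\ 1&0\end{pmatrix},\qquad d\psi^{t}(x_{j,N})\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}-\frac{2x_{1}x_{2}}{(1+x_{1}^{2})^{2}}\\ \frac{1}{1+x_{1}^{2}}\end{pmatrix}\bigg{|}_{x=x_{j,N}}.\]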
Notice that, as \(j\) varies, these directions \(\omega_{j,N}\) are separated by an amount at least \(cN^{-1}\gg R^{-1/2}\). Let \(S_{j,N}\) be the neighbourhood of \(\omega_{j,N}\) in \(S^{1}\) of size \(cR^{-1/2}\), for a suitably small \(c<1\). Then for \(\omega\) in \(S_{j,N}\), we have \(\varphi_{\omega}(D)v=\varphi_{\omega}(D)v_{j}\) up to an error which is \(O(R^{-\infty})\) in \(L^{p}\), where \(v_{j}=u_{j,N}\circ\psi\) (the cutoff \(k\) equals \(1\) near the support of these wave packets). We see that for \(\omega\in S_{j,N}\), \(\varphi_{\omega}(D)v\) is given by an inverse Fourier transform
\[(2\pi)^{-2}\int e^{ix\cdot\xi}\hat{v}_{j}(\xi)d\xi+O_{L^{p}}(R^{-\infty}), \tag{5.9}\]
where \(\hat{v}_{j}(\xi)\) is the \(j\)th summand in (5.8). We seek a lower bound on the \(L^{p}\) norm of this function, for \(\omega\in S_{j,N}\). Taking \(c\) sufficiently small, there is a rectangle in \(\xi\)-space of uniform size \(cR\times cR^{1/2}\) where the first term in the expansion of \(\hat{v}_{j}(\xi)\) is at least \(cR^{1/2}\). Moreover, the principal symbol is real and nonnegative, so its contribution to the integral (5.9) is non-oscillatory for \(y=\psi(x)\) in a rectangle \(X\) centred at \(y_{j,N}\) of dimensions \(cR^{-1}\times cR^{-1/2}\). The contribution of the leading term in the stationary phase expansion for \(v_{j}\) thus has a real part at least \(cR^{2}\) on this rectangle \(X\), contributing at least \(cR^{2}R^{-3/(2p)}\) to the \(L^{p}\) norm.
Now when we consider the subleading terms in the asymptotic expansion, these give rise to an integral similar to (5.9) where the integrand, like the leading term, is a symbol of type \((1/2,1)\) as with (5.1), but with order lower, compared to the leading term, by at least \(1/2\) (because it arises from a term on the RHS of (5.7) where at least one \(\eta\)-derivative hits \(\varphi_{\omega_{0}}\)). Thus the inverse Fourier transform is bounded, similar to (5.1), by
\[|\varphi_{\omega}(D)u_{j,N}(x)|\leq C_{M}R^{2-\gamma}\big{(}1+R^{2}(x_{1}-1)^{2}+R(x_{2}-j/N)^{2}\big{)}^{-M}. \tag{5.10}\]
This contributes at most \(CR^{2-\gamma}R^{-3/(2p)}\) to the \(L^{p}\) norm, where \(\gamma=0\) for the leading term but \(\gamma>0\) for all subleading terms. This is a strictly smaller contribution
compared to our lower bound for the leading term, and therefore can be neglected for \(R\) sufficiently large.
We see that there is a lower bound of \(cR^{2}R^{-3/(2p)}\) for the \(L^{p}\) norm of \(\varphi_{\omega}(D)v\), when \(\omega\in S_{j,N}\). Now taking an \(L^{q}\) norm in \(\omega\), we have shown a lower bound of the form
\[cR^{2}\cdot R^{-3/(2p)}\cdot R^{-1/(2q)}\cdot N^{1/q}.\]
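Explicitly, the quotient of this lower bound for \(\|Tu\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})}\) and the upper bound (5.2) for \(\|u\|_{\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})}\) is

\[\frac{cR^{2}R^{-3/(2p)}R^{-1/(2q)}N^{1/q}}{CR^{2}R^{-3/(2p)}N^{1/p}R^{-1/(2q)}}\eqsim N^{\frac{1}{q}-\frac{1}{p}}\eqsim R^{\alpha(\frac{1}{q}-\frac{1}{p})}\to\infty\]

as \(R\to\infty\), since \(q<p\) and \(\alpha>0\).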
This comparison with (5.2) shows that pullback by the diffeomorphism \(\psi\) is not bounded on \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})\) for \(q<p\). A very similar argument shows unboundedness for \(q>p\); we just need to act on a sum of wave packets pointing in different directions, such that pullback by the diffeomorphism maps them to wave packets pointing in the same direction. We omit the details.
**Remark 5.5**.: This example makes clear that the same phenomenon will happen for any FIO whose canonical relation maps phase space points \((x_{j},\omega_{0})\) with varying base point \(x_{j}\) and fixed direction \(\omega_{0}\) to points with varying direction. For a concrete example, consider the FIO \(T=e^{it\sqrt{L}}\), where \(L\) is the divergence form operator
\[L:=D_{x_{1}}^{2}+(1+x_{1}^{2})D_{x_{2}}^{2}\]
in two space dimensions. The bicharacteristic flow associated with \(\sqrt{L}\) is
\[\dot{x}_{1} =\frac{\xi_{1}}{\sqrt{\xi_{1}^{2}+(1+x_{1}^{2})\xi_{2}^{2}}},\] \[\dot{x}_{2} =\frac{(1+x_{1}^{2})\xi_{2}}{\sqrt{\xi_{1}^{2}+(1+x_{1}^{2})\xi_ {2}^{2}}},\] \[\dot{\xi}_{1} =\frac{-x_{1}\xi_{2}^{2}}{\sqrt{\xi_{1}^{2}+(1+x_{1}^{2})\xi_{2} ^{2}}},\] \[\dot{\xi}_{2} =0.\]
Letting \(\omega=\xi_{1}/\xi_{2}\), we find that \(x_{1}\) and \(\omega\) satisfy the autonomous equations
\[\dot{x}_{1} =\frac{\omega}{\sqrt{1+x_{1}^{2}+\omega^{2}}},\] \[\dot{\omega} =\frac{-x_{1}}{\sqrt{1+x_{1}^{2}+\omega^{2}}}\]
showing that \(x_{1}\) and \(\omega\) move along circles \(\{x_{1}^{2}+\omega^{2}=\mathrm{constant}\}\) at a speed depending only on the radius of the circle; indeed, \(\frac{\mathrm{d}}{\mathrm{d}t}(x_{1}^{2}+\omega^{2})=2x_{1}\dot{x}_{1}+2\omega\dot{\omega}=0\). It is thus easy to write down the exact solution and verify that, for a fixed small positive time \(t\), if \(\omega(0)=0\) and \(x_{1}(0)=c\) then \(\omega(t)=\omega(t,c)\) varies with \(c\). Therefore, we can implement a similar example as above, to conclude that the FIO \(e^{it\sqrt{L}}\) is not a bounded operator on \(\mathcal{H}^{p,q;0}_{\mathrm{dec}}(\mathbb{R}^{2})\) for \(q\neq p\). Again, we omit the details.
## 6. Embeddings
In this section we consider various embedding properties of \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\). To do so, we first prove some norm inequalities involving the parabolic frequency localizations from Section 3.1. We then consider inclusions involving the function spaces for decoupling and the classical Sobolev spaces \(\mathcal{H}^{s,p}(\mathbb{R}^{n})\), and finally we prove fractional integration theorems for \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\).
### Preliminary results
Some of the embedding properties involving the function spaces for decoupling are obvious. For example, by Hölder's inequality, it is clear that as the parameters \(q\) and \(s\) grow, the space \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) gets smaller. In other words, for all \(p,q,v\in[1,\infty)\) and \(s,r\in\mathbb{R}\), one has
\[\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\subseteq\mathcal{H}^{p,v; r}_{\mathrm{dec}}(\mathbb{R}^{n}) \tag{6.1}\]
if \(s\geq r\) and \(q\geq v\).
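For instance, the dependence on \(q\) is just Hölder's inequality on the finite measure space \(S^{n-1}\): if \(q\geq v\), then

\[\Big{(}\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|^{v}_{\mathcal{H}^{r,p}(\mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/v}\leq|S^{n-1}|^{\frac{1}{v}-\frac{1}{q}}\Big{(}\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|^{q}_{\mathcal{H}^{r,p}(\mathbb{R}^{n})}\mathrm{d}\omega\Big{)}^{1/q},\]

which, combined with the elementary embedding \(\mathcal{H}^{s,p}(\mathbb{R}^{n})\subseteq\mathcal{H}^{r,p}(\mathbb{R}^{n})\) for \(s\geq r\), yields (6.1).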
On the other hand, some of the other embeddings involving the function spaces for decoupling are more subtle, and to prove those we will rely in part on the following norm inequalities involving the functions \((\varphi_{\omega})_{\omega\in S^{n-1}}\) from Section 3.1.
**Proposition 6.1**.: _Let \(p\in[1,\infty)\) and \(u\in(p,\infty)\), and set \(s:=\frac{n+1}{2}(\frac{1}{p}-\frac{1}{u})\). Then there exists a \(C\geq 0\) such that the following statements hold for all \(\omega\in S^{n-1}\) and \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\)._
1. _If_ \(\varphi_{\omega}(D)f\in\mathcal{H}^{p}(\mathbb{R}^{n})\)_, then_ \(\langle D\rangle^{-s}\varphi_{\omega}(D)f\in L^{u}(\mathbb{R}^{n})\) _and_ \[\|\langle D\rangle^{-s}\varphi_{\omega}(D)f\|_{L^{u}(\mathbb{R}^{n})}\leq C\| \varphi_{\omega}(D)f\|_{\mathcal{H}^{p}(\mathbb{R}^{n})}.\]
2. _If_ \(f\in H^{p}_{\omega}(\mathbb{R}^{n})\)_, then_ \(\langle D\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(D)f\in\mathcal{H}^{p}(\mathbb{R}^{n})\) _and_ \[\|\langle D\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(D)f\|_{\mathcal{H}^{p}(\mathbb{R}^{n})}\leq C\|f\|_{H^{p}_{\omega}(\mathbb{R}^{n})}.\]
Proof.: For (1), we write
\[\langle D\rangle^{-s}\varphi_{\omega}(D)=m_{2}(D)m_{1}(D)\varphi_{\omega}(D).\]
Here, for \(\xi\in\mathbb{R}^{n}\),
\[m_{1}(\xi):=(1+|\xi|_{\omega}^{2})^{-s}\]
and
\[m_{2}(\xi):=(1+|\xi|^{2})^{-s/2}(1+|\xi|_{\omega}^{2})^{s}\tilde{\chi}_{\omega }(\xi),\]
where \(\tilde{\chi}_{\omega}\in C^{\infty}(\mathbb{R}^{n})\) satisfies \(\tilde{\chi}_{\omega}\equiv 1\) on the support of \(\varphi_{\omega}\). Then \(m_{1}(D)\), being smoothing of order \(2s\) non-isotropically, is bounded from \(H^{p}_{\omega}(\mathbb{R}^{n})\) to \(L^{u}(\mathbb{R}^{n})\). Moreover,
\[\sup_{\xi\in\mathbb{R}^{n}}\langle|\xi|_{\omega}\rangle^{\frac{|\alpha|}{2}+ \beta}|\partial_{\xi}^{\alpha}(\omega\cdot\partial_{\xi})^{\beta}m_{2}(\xi)|<\infty\]
for all \(\alpha\in\mathbb{Z}_{+}^{n}\) and \(\beta\in\mathbb{Z}_{+}\), so \(m_{2}(D)\) is bounded from \(L^{u}(\mathbb{R}^{n})\) to itself for all \(1<u<\infty\). As a result,
\[\|\langle D\rangle^{-s}\varphi_{\omega}(D)f\|_{L^{u}(\mathbb{R}^{n})}=\|m_{2}(D)m_{1}(D)\varphi_{\omega}(D)f\|_{L^{u}(\mathbb{R}^{n})}\lesssim\|m_{1}(D)\varphi_{\omega}(D)f\|_{L^{u}(\mathbb{R}^{n})}\lesssim\|\varphi_{\omega}(D)f\|_{H^{p}_{\omega}(\mathbb{R}^{n})}\eqsim\|\varphi_{\omega}(D)f\|_{\mathcal{H}^{p}(\mathbb{R}^{n})}\]
where we used Proposition 2.6 in the last inequality to pass between isotropic and non-isotropic versions of \(\mathcal{H}^{p}\) if \(p=1\). This proves (1).
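Note that the factorization at the start of the proof is consistent:

\[m_{2}(\xi)m_{1}(\xi)=(1+|\xi|^{2})^{-s/2}\tilde{\chi}_{\omega}(\xi),\]

and \(\tilde{\chi}_{\omega}\equiv 1\) on \(\mathrm{supp}(\varphi_{\omega})\), so that indeed \(m_{2}(D)m_{1}(D)\varphi_{\omega}(D)=\langle D\rangle^{-s}\varphi_{\omega}(D)\).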
To prove (2), one observes that
\[\|\langle D\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(D)f\|_{ \mathcal{H}^{p}(\mathbb{R}^{n})} \eqsim\|\langle D\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(D)f\|_{H^{p }_{\omega}(\mathbb{R}^{n})}\] \[\lesssim\|f\|_{H^{p}_{\omega}(\mathbb{R}^{n})}\]
for all \(f\in H^{p}_{\omega}(\mathbb{R}^{n})\). Here the first inequality follows from Proposition 2.6, and the second from the fact that
\[\sup_{\xi\in\mathbb{R}^{n}}\langle|\xi|_{\omega}\rangle^{\frac{|\alpha|}{2}+ \beta}|\partial_{\xi}^{\alpha}(\omega\cdot\partial_{\xi})^{\beta}\big{(}\langle \xi\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(\xi)\big{)}|<\infty\]
for all \(\alpha\in\mathbb{Z}_{+}^{n}\) and \(\beta\in\mathbb{Z}_{+}\), so that \(\langle D\rangle^{-\frac{n-1}{4}}\varphi_{\omega}(D)\) is bounded on \(H_{\omega}^{p}(\mathbb{R}^{n})\).
### Sobolev embeddings
Recall from Remark 4.2 that \(\mathcal{H}_{\mathrm{dec}}^{p,p;s}(\mathbb{R}^{n})=\mathcal{H}_{FIO}^{s,p}( \mathbb{R}^{n})\) for all \(p\in[1,\infty)\) and \(s\in\mathbb{R}\). Hence the Sobolev embeddings for \(\mathcal{H}_{FIO}^{p}(\mathbb{R}^{n})\) from (1.1) immediately yield the following embeddings for \(\mathcal{H}_{\mathrm{dec}}^{p,p;s}(\mathbb{R}^{n})\), for all \(p\in[1,\infty)\) and \(s\in\mathbb{R}\):
\[\mathcal{H}^{s+s(p),p}(\mathbb{R}^{n})\subseteq\mathcal{H}_{\mathrm{dec}}^{p,p;s}(\mathbb{R}^{n})\subseteq\mathcal{H}^{s-s(p),p}(\mathbb{R}^{n}). \tag{6.2}\]
The following theorem extends these embeddings to \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) for more general values of \(q\).
**Theorem 6.2**.: _Let \(p\in[1,\infty]\), \(q\in[1,\infty)\) and \(s\in\mathbb{R}\). Then the following assertions hold._
1. _If_ \(q\leq\max(p,2)\)_, then_ \[\mathcal{H}^{s+s(p),p}(\mathbb{R}^{n})\subseteq\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n}).\]
2. _If_ \(q\geq\min(p,2)\)_, then_ \[\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\subseteq\mathcal{H}^{s-s(p ),p}(\mathbb{R}^{n}).\]
Proof.: First note that (2) follows from (1) by duality, cf. Proposition 4.8. Moreover, in light of (6.1), we only need to prove (1) when \(q=\max\{p,2\}\). This means we need to establish (1) when \(2\leq p<\infty\) and \(q=p\), which is already contained in (6.2), and to establish (1) when \(1<p\leq 2\) and \(q=2\), which we will do in the following.
Below we prove (1) when \(p=1\) and \(q=2\), and then appeal to interpolation.
Assume without loss of generality that \(s=-s(1)=-\frac{n-1}{4}\). Our goal is to show that
\[\|\rho(D)f\|_{L^{1}(\mathbb{R}^{n})}+\Big{(}\int_{S^{n-1}}\|\varphi_{\omega}( D)f\|_{\mathcal{H}^{-\frac{n-1}{4},1}(\mathbb{R}^{n})}^{2}\mathrm{d}\omega\Big{)}^{1 /2}\lesssim\|f\|_{\mathcal{H}^{1}(\mathbb{R}^{n})}.\]
Since \(\|\rho(D)f\|_{L^{1}(\mathbb{R}^{n})}\lesssim\|f\|_{L^{1}(\mathbb{R}^{n})} \lesssim\|f\|_{\mathcal{H}^{1}(\mathbb{R}^{n})}\), we only need to bound the second term on the left hand side above. For \(j\geq 1\), let \(\Delta_{j}\) be the Littlewood-Paley projection onto \(\{|\xi|\simeq 2^{j}\}\), and for \(\omega\in S^{n-1}\), let
\[T_{\omega,j}=\varphi_{\omega}(D)|D|^{-\frac{n-1}{4}}\Delta_{j}.\]
Since
\[\|\varphi_{\omega}(D)f\|_{\mathcal{H}^{-\frac{n-1}{4},1}(\mathbb{R}^{n})} \lesssim\|f\|_{L^{1}}+\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2} \Big{)}^{1/2}\Big{\|}_{L^{1}(\mathbb{R}^{n})},\]
(the term \(\|f\|_{L^{1}}\) on the right hand side bounds \(\sum_{-4\leq j\leq 0}\|\varphi_{\omega}(D)\Delta_{j}f\|_{\mathcal{H}^{-\frac{n-1}{4},1}}\)), it suffices to show that
\[\Big{(}\int_{S^{n-1}}\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2} \Big{)}^{1/2}\Big{\|}_{L^{1}(\mathbb{R}^{n})}^{2}\mathrm{d}\omega\Big{)}^{1/2} \lesssim\|f\|_{\mathcal{H}^{1}(\mathbb{R}^{n})}. \tag{6.3}\]
Let \(f\) be a local Hardy \(\mathcal{H}^{1}\) atom, adapted to a (Euclidean) ball of radius \(r\). By translation invariance we may assume that the ball is centered at \(0\). First assume \(0<r<1\), which is the main case. Then \(\|f\|_{L^{\infty}}\leq r^{-n}\), and \(\int f=0\). We will show that
\[\Big{(}\int_{S^{n-1}}\Big{\|}\Big{(}\sum_{j\colon 2^{j}\geq r^{-1}}|T_{\omega,j}f|^{2} \Big{)}^{1/2}\Big{\|}_{L^{1}(B_{10r}(0,\omega))}^{2}\mathrm{d}\omega\Big{)}^{1/2} \lesssim 1 \tag{6.4}\]
and
\[\Big{\|}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2} \Big{\|}_{L^{1}(\mathbb{R}^{n}\setminus B_{10r}(0,\omega))}\lesssim 1\quad\text{uniformly for }\omega\in S^{n-1}, \tag{6.5}\]
(Here \(B_{r}(0,\omega)\) is the non-isotropic ball \(\{|x\cdot\omega|\leq r,|x\cdot\omega^{\perp}|\leq r^{1/2}\}\).) In addition, we will show that if \(1<2^{j}<r^{-1}\), then
\[\|T_{\omega,j}f\|_{L^{1}(\mathbb{R}^{n})}\lesssim 2^{j}r\quad\text{uniformly for } \omega\in S^{n-1} \tag{6.6}\]
which implies
\[\Big{\|}\Big{(}\sum_{j\colon\,1<2^{j}<r^{-1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2} \Big{\|}_{L^{1}(\mathbb{R}^{n})}\lesssim 1\quad\text{uniformly for }\omega\in S^{n-1} \tag{6.7}\]
(because we can bound the \(\ell^{2}\) norm by the \(\ell^{1}\) norm and apply Fubini). Estimates (6.4), (6.5) and (6.7) imply our desired estimate (6.3). We now prove (6.4), (6.5) and (6.6).
The proof of (6.4) is via \(L^{2}\) theory. Cauchy-Schwarz and Plancherel give
\[\Big{\|}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{1}(B_{10r}(0,\omega))}\leq|B_{10r}(0,\omega)|^{1/2}\Big{\|}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{2}(\mathbb{R}^{n})}\lesssim r^{\frac{n+1}{4}}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}}\|\chi_{R_{\omega,j}}\widehat{f}\,\|_{L^{2}(\mathbb{R}^{n})}^{2}\Big{)}^{1/2}\]
where \(R_{\omega,j}\) is the support of the multiplier for \(T_{\omega,j}\); it is a dyadic-parabolic region of size \(2^{j}\times(2^{j/2})^{n-1}\). Taking \(L^{2}\) norm over \(\omega\in S^{n-1}\), we have
\[\Big{(}\int_{S^{n-1}}\Big{\|}\Big{(}\sum_{j\colon\,2^{j}\geq r^{- 1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{1}(B_{10r}(0,\omega))}^{2} \mathrm{d}\omega\Big{)}^{1/2}\] \[\lesssim r^{\frac{n+1}{4}}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}} \|\chi_{R_{\omega,j}}\widehat{f}\|_{L^{2}(\mathbb{R}^{n}\times S^{n-1})}^{2} \Big{)}^{1/2}\] \[\lesssim r^{\frac{n+1}{4}}\Big{(}\sum_{j\colon\,2^{j}\geq r^{-1}} \|\chi_{|\xi|\simeq 2^{j}}\widehat{f}\|_{L^{2}(\mathbb{R}^{n})}^{2}2^{-\frac{j}{2}(n-1 )}\Big{)}^{1/2}\]
where in the last inequality we have used the fact that
\[\int_{S^{n-1}}[\chi_{R_{\omega,j}}(\xi)]^{2}d\omega\lesssim 2^{-\frac{j}{2}(n-1)}.\]
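Indeed, for fixed \(\xi\) with \(|\xi|\eqsim 2^{j}\), the multiplier \(\chi_{R_{\omega,j}}(\xi)\) is nonzero only for \(\omega\) in a cap of the form \(\{\omega\in S^{n-1}\mid|\hat{\xi}-\omega|\leq C2^{-j/2}\}\), whose surface measure satisfies

\[|\{\omega\in S^{n-1}\mid|\hat{\xi}-\omega|\leq C2^{-j/2}\}|\lesssim(2^{-j/2})^{n-1}=2^{-\frac{j}{2}(n-1)}.\]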
We may now bound \(2^{-\frac{j}{2}(n-1)}\) trivially by \(r^{\frac{n-1}{2}}\), and pull it outside the sum in \(j\). Thus
\[\Big{(}\int_{S^{n-1}}\Big{\|}\Big{(}\sum_{j\colon\,2^{j}\geq r^{- 1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{1}(B_{10r}(0,\omega))}^{2} \mathrm{d}\omega\Big{)}^{1/2}\] \[\lesssim r^{\frac{n+1}{4}}r^{\frac{n-1}{4}}\|f\|_{L^{2}(\mathbb{R}^{n})} \lesssim r^{\frac{n+1}{4}}r^{\frac{n-1}{4}}r^{-\frac{n}{2}}\simeq 1\]
establishing (6.4).
The proofs of (6.5) and (6.6) are via kernel estimates. By rotation symmetry, we will assume \(\omega=e_{1}\). For \(j\geq 1\), let \(m_{j}\) be the multiplier of \(T_{\omega,j}\) and \(K_{j}\) be the inverse Fourier transform of \(m_{j}\). Then \(m_{j}\) is supported in \(\{|\xi_{1}|\simeq 2^{j},|\xi^{\prime}|\leq 2^{j/2}\}\), and
\[\|\partial_{\xi_{1}}^{\alpha}\partial_{\xi^{\prime}}^{\beta}m_{j}\|_{L^{\infty}} \lesssim 2^{-\frac{j}{2}(2\alpha+|\beta|)}.\]
It follows that
\[|K_{j}(x)|\lesssim 2^{\frac{j}{2}(n+1)}(1+2^{\frac{j}{2}}\|x\|)^{-N} \tag{6.8}\]
for all positive integers \(N\), where \(\|x\|:=|x_{1}|^{1/2}+|x^{\prime}|\); furthermore,
\[|\partial_{x_{1}}^{\alpha}\partial_{x^{\prime}}^{\beta}K_{j}(x)|\lesssim 2^{ \frac{j}{2}(n+1+2|\alpha|+|\beta|)}(1+2^{\frac{j}{2}}\|x\|)^{-N} \tag{6.9}\]
for all positive integers \(N\). We will use (6.8) in the proof of (6.5), and use (6.9) (with \(\alpha+|\beta|=1\)) in the proof of (6.6).
To continue with the proof of (6.5), from (6.8), we have
\[\Big{(}\sum_{j\colon 2^{j}\geq r^{-1}}|K_{j}(x)|^{2}\Big{)}^{1/2}\lesssim\Big{(} \sum_{j\colon 2^{j}\geq r^{-1}}2^{j(n+1)}(2^{\frac{j}{2}}\|x\|)^{-2N}\Big{)}^{1/2} \lesssim r^{-\frac{n+1}{2}}(r^{-\frac{1}{2}}\|x\|)^{-N}\]
for all \(x\in\mathbb{R}^{n}\) if \(N>n+1\) (fix one such \(N\) from now on). This is useful for \(x\notin B_{10r}(0,\omega)\): in that case \(\|x\|\geq 10r^{1/2}\), and
\[\Big{(}\sum_{j\colon 2^{j}\geq r^{-1}}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\leq \Big{(}\sum_{j\colon 2^{j}\geq r^{-1}}|f|*|K_{j}|^{2}\Big{)}^{1/2}\leq r^{- \frac{n+1}{2}}(r^{-\frac{1}{2}}\|x\|)^{-N}\]
(more precisely, the first inequality above is Cauchy-Schwarz, using that \(\int|f|\leq 1\), and the second inequality holds if \(\|x\|\geq 10r^{1/2}\), because then
\[|f|*\sum_{j\colon 2^{j}\geq r^{-1}}|K_{j}|^{2}(x) \lesssim r^{-n}\int_{|y|\leq r}r^{-(n+1)}(r^{-\frac{1}{2}}\|x-y\| )^{-2N}\mathrm{d}y\] \[\simeq r^{-(n+1)}(r^{-\frac{1}{2}}\|x\|)^{-2N},\]
which upon taking the square root gives the second inequality). As a result, the left hand side of (6.5) is bounded by
\[\int_{\|x\|\geq 10r^{1/2}}r^{-\frac{n+1}{2}}(r^{-\frac{1}{2}}\|x\|)^{-N} \mathrm{d}x\lesssim 1,\]
as desired.
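For completeness, the final integral can be evaluated with the anisotropic scaling \(x_{1}=rt_{1}\), \(x^{\prime}=r^{1/2}t^{\prime}\), under which \(\mathrm{d}x=r^{\frac{n+1}{2}}\mathrm{d}t\) and \(\|x\|=r^{1/2}\|t\|\):

\[\int_{\|x\|\geq 10r^{1/2}}r^{-\frac{n+1}{2}}(r^{-\frac{1}{2}}\|x\|)^{-N}\mathrm{d}x=\int_{\|t\|\geq 10}\|t\|^{-N}\mathrm{d}t\lesssim 1,\]

since the region \(\{\|t\|\eqsim\lambda\}\) has measure \(\eqsim\lambda^{n+1}\) and \(N>n+1\).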
Finally, the proof of (6.6) makes additional use of the cancellation condition \(\int f=0\). To this end, suppose \(1<2^{j}<r^{-1}\). Then
\[T_{\omega,j}f(x)=\int_{\mathbb{R}^{n}}f(y)[K_{j}(x-y)-K_{j}(x)]\mathrm{d}y\]
because \(\int f=0\). As a result,
\[\|T_{\omega,j}f\|_{L^{1}(\mathbb{R}^{n})}\leq\sup_{|y|\leq r}\int_{\mathbb{R} ^{n}}|K_{j}(x-y)-K_{j}(x)|\mathrm{d}x.\]
But for \(|y|\leq r\),
\[\int_{\mathbb{R}^{n}}|K_{j}(x-y)-K_{j}(x)|\mathrm{d}x\leq\int_{0}^{1}\int_{\mathbb{R}^{n}}|y\cdot\nabla K_{j}(x-ty)|\mathrm{d}x\,\mathrm{d}t\leq r\int_{\mathbb{R}^{n}}|\nabla K_{j}(x)|\mathrm{d}x.\]
From (6.9), we have
\[\int_{\mathbb{R}^{n}}|\partial_{x_{1}}K_{j}(x)|\mathrm{d}x\lesssim 2^{j},\]
while
\[\int_{\mathbb{R}^{n}}|\partial_{x^{\prime}}K_{j}(x)|\mathrm{d}x\lesssim 2^{ \frac{j}{2}}\leq 2^{j}.\]
This completes our proof of (6.6).
It remains to prove (6.3) when \(f\) is a local Hardy atom adapted to \(B_{r}(0)\) with \(r>1\), which is easier. In that case, we will show that
\[\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{ 1}(\mathbb{R}^{n})}\lesssim 1 \tag{6.10}\]
uniformly in \(\omega\in S^{n-1}\), from which (6.3) follows. To prove (6.10), we split the integral into the parts over \(B_{10r}(0)\) and its complement. Then \(L^{2}\) theory gives
\[\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2}\Big{)}^{1/2 }\Big{\|}_{L^{1}(B_{10r}(0))} \leq|B_{10r}(0)|^{1/2}\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L^{2}(\mathbb{R}^{n})}\] \[\lesssim r^{\frac{n}{2}}\|f\|_{L^{2}(\mathbb{R}^{n})}\lesssim 1,\]
and kernel estimates give
\[|K_{j}(x)|\lesssim 2^{\frac{j}{2}(n+1)}(2^{j}|x|)^{-N}\]
(which follows from (6.8) with \(N\) replaced by \(2N\) there, because \(\|x\|\geq|x|^{1/2}\) if \(|x|\geq 10\)). It follows that
\[\Big{(}\sum_{j=1}^{\infty}|K_{j}(x)|^{2}\Big{)}^{1/2}\lesssim|x|^{-N}\]
for \(N\) large, and
\[\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\leq\Big{(}\sum_{j= 1}^{\infty}|f|*|K_{j}|^{2}\Big{)}^{1/2}\leq|x|^{-N}\]
if \(x\notin B_{10r}(0)\). This shows
\[\Big{\|}\Big{(}\sum_{j=1}^{\infty}|T_{\omega,j}f|^{2}\Big{)}^{1/2}\Big{\|}_{L ^{1}(\mathbb{R}^{n}\setminus B_{10r}(0))}\lesssim 1\]
as well, giving finally (6.10). Note we did not need any cancellation condition on \(f\) in the above argument when \(r>1\).
The above argument shows that if \(\operatorname{Re}z=\frac{n-1}{4}\), then the operator
\[T_{z}f(x):=(I-\Delta)^{-z/2}\varphi_{\omega}(D)f(x)\]
is bounded from \(\mathcal{H}^{1}(\mathbb{R}^{n})\) into \(L^{2}(S^{n-1};\mathcal{H}^{1}(\mathbb{R}^{n}))\). In particular, it is bounded from \(\mathcal{H}^{1}(\mathbb{R}^{n})\) to \(L^{2}(S^{n-1};L^{1}(\mathbb{R}^{n}))\), and one can check that the bound grows at most polynomially in the imaginary part of \(z\). Furthermore, \(T_{z}\) is bounded from \(L^{2}(\mathbb{R}^{n})\) to \(L^{2}(S^{n-1};L^{2}(\mathbb{R}^{n}))\) when \(\operatorname{Re}z=0\), by Plancherel, and this bound is independent of the imaginary part of \(z\). Also note that if \(\frac{1}{p}=\frac{1-\theta}{1}+\frac{\theta}{2}\), then we have

\[(\mathcal{H}^{1}(\mathbb{R}^{n}),L^{2}(\mathbb{R}^{n}))_{\theta}=L^{p}(\mathbb{R}^{n})\]
(see e.g. [32, Theorem 2.4.7]) and
\[(L^{2}(S^{n-1};L^{1}(\mathbb{R}^{n})),L^{2}(S^{n-1};L^{2}(\mathbb{R}^{n})))_{ \theta}=L^{2}(S^{n-1};L^{p}(\mathbb{R}^{n})),\]
by e.g. [3, Theorems 5.1.1 and 5.1.2]). Thus by Stein's complex interpolation theorem, \(T_{s(p)}\) is bounded from \(L^{p}(\mathbb{R}^{n})\) to \(L^{2}(S^{n-1};L^{p}(\mathbb{R}^{n}))\) for \(1<p<2\). In other words,
\[\left(\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n})}^{2}\mathrm{d}\omega\right)^{1/2}\lesssim\|f\|_{W^{s(p),p}(\mathbb{R}^{n})}\]
for \(1<p<2\). Together with the estimate \(\|\rho(D)f\|_{L^{p}(\mathbb{R}^{n})}\lesssim\|f\|_{W^{s(p),p}(\mathbb{R}^{n})}\) we obtain the assertion of Theorem 6.2 (1) for \(1<p<2\).
### Fractional integration
In this subsection we consider embeddings between the spaces \(\mathcal{H}_{\mathrm{dec}}^{p_{1},q;r}(\mathbb{R}^{n})\) and \(\mathcal{H}_{\mathrm{dec}}^{p_{2},q;s}(\mathbb{R}^{n})\).
More precisely, we consider fractional integration theorems, where one trades in the differentiability parameter \(s\) to increase the integrability parameter \(p\). In this regard, the classical result says that
\[\mathcal{H}^{n(\frac{1}{p_{1}}-\frac{1}{p_{2}}),p_{1}}(\mathbb{R}^{n})\subseteq \mathcal{H}^{p_{2}}(\mathbb{R}^{n}) \tag{6.11}\]
for all \(p_{1},p_{2}\in[1,\infty]\) with \(p_{1}\leq p_{2}\). The following is an analog of this embedding involving the function spaces for decoupling, which in fact constitutes a strict improvement of (6.11) if \(p_{1}\leq 2\leq p_{2}\) (and \((p_{1},p_{2})\neq(2,2)\)).
**Theorem 6.3**.: _Let \(p_{1},p_{2},q\in[1,\infty)\) and \(s\in\mathbb{R}\) be such that \(p_{1}\leq p_{2}\). Then_
\[\mathcal{H}_{\mathrm{dec}}^{p_{1},q;s+\frac{n+1}{2}(\frac{1}{p_{1}}-\frac{1}{ p_{2}})}(\mathbb{R}^{n})\subseteq\mathcal{H}_{\mathrm{dec}}^{p_{2},q;s}( \mathbb{R}^{n}). \tag{6.12}\]
Proof of Theorem 6.3.: We may suppose that \(s=0\). Recall that
\[\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p_{2},q;0}(\mathbb{R}^{n})}=\|\rho(D)f\|_{L^ {p_{2}}(\mathbb{R}^{n})}+\left(\int_{S^{n-1}}\|\varphi_{\omega}(D)f\|_{ \mathcal{H}^{p_{2}}(\mathbb{R}^{n})}^{q}\mathrm{d}\omega\right)^{1/q}.\]
But for \(r:=\frac{n+1}{2}(\frac{1}{p_{1}}-\frac{1}{p_{2}})\),
\[\|\varphi_{\omega}(D)f\|_{\mathcal{H}^{p_{2}}(\mathbb{R}^{n})}\lesssim\|\varphi_{\omega}(D)f\|_{\mathcal{H}^{r,p_{1}}(\mathbb{R}^{n})}\]
by Proposition 6.1 (1). Hence
\[\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p_{2},q;0}(\mathbb{R}^{n})}\lesssim\|f\|_{ \mathcal{H}_{\mathrm{dec}}^{p_{1},q;r}(\mathbb{R}^{n})}.\qed\]
**Remark 6.4**.: If \(p_{1}\leq 2\leq p_{2}<\infty\), then one can recover (6.11) from Theorem 6.3, by combining (6.12) with Theorem 6.2:
\[\mathcal{H}^{n(\frac{1}{p_{1}}-\frac{1}{p_{2}}),p_{1}}(\mathbb{R}^ {n}) \subseteq\mathcal{H}_{\mathrm{dec}}^{p_{1},2;n(\frac{1}{p_{1}}- \frac{1}{p_{2}})-s(p_{1})}(\mathbb{R}^{n})\subseteq\mathcal{H}_{\mathrm{dec}}^ {p_{2},2;\frac{n-1}{2}(\frac{1}{p_{1}}-\frac{1}{p_{2}})-s(p_{1})}(\mathbb{R}^ {n})\] \[\subseteq\mathcal{H}^{p_{2}}(\mathbb{R}^{n}).\]
Here we used that \(s(p_{1})+s(p_{2})=\frac{n-1}{2}(\frac{1}{p_{1}}-\frac{1}{p_{2}})\), because \(p_{1}\leq 2\leq p_{2}\). In fact, by the sharpness of the embeddings in Theorem 6.2, in this case (6.12) is a strict improvement of (6.11), at least if \((p_{1},p_{2})\neq(2,2)\). On the other hand, for \(p_{1},p_{2}\leq 2\) or \(p_{1},p_{2}\geq 2\), again due to the sharpness of the embeddings in Theorem 6.2, (6.11) neither follows from (6.12), nor the other way around.
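For the record, the exponent identity used above is immediate once one recalls the Sobolev exponent \(s(p)=\frac{n-1}{2}|\frac{1}{p}-\frac{1}{2}|\) from (1.1): for \(p_{1}\leq 2\leq p_{2}\),

\[s(p_{1})+s(p_{2})=\frac{n-1}{2}\Big{(}\frac{1}{p_{1}}-\frac{1}{2}\Big{)}+\frac{n-1}{2}\Big{(}\frac{1}{2}-\frac{1}{p_{2}}\Big{)}=\frac{n-1}{2}\Big{(}\frac{1}{p_{1}}-\frac{1}{p_{2}}\Big{)}.\]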
Although Theorem 6.3 only yields a strict improvement of (6.11) for \(p_{1}\leq 2\leq p_{2}\), outside of this range we can nonetheless improve a classical result about the mapping properties of Fourier integral operators. Recall the definition of Fourier integral operators in standard form, from Definition 2.7.
**Corollary 6.5**.: _Let \(m\in\mathbb{R}\), and let \(T\) be one of the following:_
1. _a Fourier integral operator of order_ \(m\) _and type_ \((1/2,1/2,1)\) _in standard form, the symbol of which has compact support in the spatial variable;_
2. _a Fourier integral operator of order_ \(m\) _and type_ \((1/2,1/2,1)\) _in standard form, associated with a global canonical graph;_
3. _a compactly supported Fourier integral operator of order_ \(m\) _and type_ \((1,0)\)_, associated with a local canonical graph._
_Let \(p,q\in[1,\infty)\) and \(s\in\mathbb{R}\) be such that \(p\leq q\). Then_
\[T:\mathcal{H}^{p,q;s+m+\frac{n+1}{2}(\frac{1}{p}-\frac{1}{q})}_{\mathrm{dec}}( \mathbb{R}^{n})\to\mathcal{H}^{q,q;s}_{\mathrm{dec}}(\mathbb{R}^{n}) \tag{6.13}\]
_is bounded. In particular, suppose that one of the following conditions holds:_
1. \(1\leq p\leq q\leq 2\) _and_ \(m=\frac{1}{q}-\frac{1}{p}-2s(p)\)_;_
2. \(2\leq p\leq q<\infty\) _and_ \(m=\frac{1}{q}-\frac{1}{p}-2s(q)\)_._
_Then \(T:\mathcal{H}^{p}(\mathbb{R}^{n})\to L^{q}(\mathbb{R}^{n})\) is bounded._
Proof.: The first statement follows by combining Theorem 6.3 with the invariance of \(\mathcal{H}^{q,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})=\mathcal{H}^{s,q}_{FIO}( \mathbb{R}^{n})\) under the operator \(T\). For \(m=s=0\), this invariance is contained in [13, Theorem 6.10] in cases (2) and (3), and the techniques used there also allow one to treat operators as in (1). For general \(m,s\in\mathbb{R}\), the mapping property \(T:\mathcal{H}^{s+m,q}_{FIO}(\mathbb{R}^{n})\to\mathcal{H}^{s,q}_{FIO}( \mathbb{R}^{n})\) can be found in [20, Proposition 3.3 and Corollary 3.4] in cases (1) and (3). In case (2) it follows from [22, Proposition 3.3], after precomposing with the operator \(\langle D\rangle^{-m}\).
The second statement follows from the first statement, combined with Theorem 6.2. Indeed, in both cases one has
\[\mathcal{H}^{p}(\mathbb{R}^{n})\subseteq\mathcal{H}^{p,q;-s(p)}_{\mathrm{dec} }(\mathbb{R}^{n})=\mathcal{H}^{p,q;s(q)+m+\frac{n+1}{2}(\frac{1}{p}-\frac{1}{ q})}_{\mathrm{dec}}(\mathbb{R}^{n}),\]
so that the embedding \(\mathcal{H}^{q,q;s(q)}_{\mathrm{dec}}(\mathbb{R}^{n})\subseteq L^{q}(\mathbb{R}^{n})\) concludes the proof.
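For instance, in the first case (\(1\leq p\leq q\leq 2\)), where \(s(p)=\frac{n-1}{2}(\frac{1}{p}-\frac{1}{2})\) and \(s(q)=\frac{n-1}{2}(\frac{1}{q}-\frac{1}{2})\), the equality of exponents used in the display amounts to

\[m=-s(p)-s(q)-\frac{n+1}{2}\Big{(}\frac{1}{p}-\frac{1}{q}\Big{)}=\frac{1}{q}-\frac{n}{p}+\frac{n-1}{2}=\frac{1}{q}-\frac{1}{p}-2s(p),\]

which is precisely the stated order condition.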
**Remark 6.6**.: In the case of an operator as in (1) with symbol in \(S^{m}_{1,0}\), the final statement in Corollary 6.5 is well known (see e.g. [30, Section IX.6.15]), and from this one can derive the same statement for operators as in (3). Corollary 6.5 improves upon this result in several ways, since it allows for the larger class of \(S^{m}_{1/2,1/2,1}\) symbols, removes the assumption that the symbol has compact spatial support, cf. (2), and yields stronger estimates through (6.13), due to the sharpness of the Sobolev embeddings in Theorem 6.2.
## 7. Decoupling
In this section we show that the decoupling inequalities for the sphere and the light cone are equivalent to norm bounds for certain functions in \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) and \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n+1})\), respectively. To do so, we first give an equivalent description of the \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) norm of functions with frequency support in a dyadic annulus.
### A discrete description of the \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) norm
Throughout this section, for each \(R\geq 2\), fix a maximal collection \(V_{R}\subseteq S^{n-1}\) of unit vectors such that \(|\nu-\nu^{\prime}|\geq R^{-1/2}\) for all distinct \(\nu,\nu^{\prime}\in V_{R}\). Then \(V_{R}\) has approximately \(R^{(n-1)/2}\) elements. Let \((\chi_{\nu})_{\nu\in V_{R}}\subseteq C^{\infty}(\mathbb{R}^{n}\setminus\{0\})\) be an associated partition of unity. More precisely, each \(\chi_{\nu}\) is positively homogeneous of degree \(0\) and satisfies \(0\leq\chi_{\nu}\leq 1\) and
\[\mathrm{supp}(\chi_{\nu})\subseteq\{\xi\in\mathbb{R}^{n}\setminus\{0\}\mid| \hat{\xi}-\nu|\leq 2R^{-1/2}\}.\]
Moreover, \(\sum_{\nu\in V_{R}}\chi_{\nu}(\xi)=1\) for all \(\xi\neq 0\), and for all \(\alpha\in\mathbb{Z}^{n}_{+}\) and \(\beta\in\mathbb{Z}_{+}\) there exists a \(C_{\alpha,\beta}\geq 0\) independent of \(R\) such that, if \(R/2\leq|\xi|\leq 2R\), then
\[|(\hat{\xi}\cdot\partial_{\xi})^{\beta}\partial_{\xi}^{\alpha}\chi_{\nu}(\xi)| \leq C_{\alpha,\beta}R^{-(|\alpha|/2+\beta)}\]
for all \(\nu\in V_{R}\). Such a collection is straightforward to construct (see e.g. [30, Section IX.4]).
The following proposition, which gives an equivalent description of the \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) norm of a function with frequency support in a dyadic annulus, is the key tool to relate \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) to decoupling inequalities.
**Proposition 7.1**.: _Let \(p\in[1,\infty]\), \(q\in[1,\infty)\) and \(s\in\mathbb{R}\). Then there exists a \(C>0\) such that the following holds. Let \(f\in L^{p}(\mathbb{R}^{n})\) be such that \(\mathrm{supp}(\widehat{f}\,)\subseteq\{\xi\in\mathbb{R}^{n}\mid R/2\leq|\xi|\leq 2R\}\) for some \(R\geq 2\). Then_
\[\frac{1}{C}\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})}\leq R^{s +\frac{n-1}{2}(\frac{1}{2}-\frac{1}{q})}\Big{(}\sum_{\nu\in V_{R}}\|\chi_{\nu }(D)f\|_{L^{p}(\mathbb{R}^{n})}^{q}\Big{)}^{1/q}\leq C\|f\|_{\mathcal{H}_{ \mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})}.\]
Proof.: The case where \(p=q\) is covered by [22, Proposition 4.1], at least for \(R=2^{k}\) with \(k\in\mathbb{N}\). For general \(p\) and \(q\), the statement follows from [23, Proposition 2.4], since \(\|f\|_{L^{p}(\mathbb{R}^{n})}\) and the Besov space norm \(\|f\|_{B_{p,q}^{0}(\mathbb{R}^{n})}\) are comparable, which in turn follows from the assumption that \(f\) has frequency support in a dyadic annulus.
**Remark 7.2**.: Let \(f\in\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) be such that
\[\mathrm{supp}(\widehat{f}\,)\subseteq\{\xi\in\mathbb{R}^{n}\mid R/2\leq|\xi| \leq 2R,|\hat{\xi}-\nu|\leq 2R^{-1/2}\}\]
for some \(R\geq 2\) and \(\nu\in S^{n-1}\). Then Proposition 7.1 yields
\[\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})}\eqsim R^{s+\frac{n- 1}{2}(\frac{1}{2}-\frac{1}{q})}\|f\|_{L^{p}(\mathbb{R}^{n})}\eqsim\|f\|_{ \mathcal{H}^{s+\frac{n-1}{2}(\frac{1}{2}-\frac{1}{q}),p}(\mathbb{R}^{n})}.\]
### Decoupling for the sphere
For \(\kappa>0\) and \(R\geq 2\kappa\), let \(S_{R,\kappa}\) be a \(\kappa R^{-1}\)-neighborhood of the sphere \(S^{n-1}\) in \(\mathbb{R}^{n}\), and let \(V_{R}\) be as in the previous subsection. Then \(\theta_{\nu}:=\mathrm{supp}(\chi_{\nu})\cap S_{R,\kappa}\), for each \(\nu\in V_{R}\), is a curved rectangle of dimensions approximately \(R^{-1/2}\times\cdots\times R^{-1/2}\times R^{-1}\) pointing in the direction of \(\nu\). Moreover, \(S_{R,\kappa}=\cup_{\nu\in V_{R}}\theta_{\nu}\), and the \(\theta_{\nu}\) have finite overlap.
This observation allows us to formulate the decoupling inequality for the sphere. More precisely, for \(p,q\in[2,\infty)\), set
\[d(p,q):=\begin{cases}\frac{n-1}{2}\big{(}\frac{1}{2}-\frac{1}{q}\big{)}+\frac {n-1}{4}-\frac{n+1}{2p}&\text{if }p\geq\frac{2(n+1)}{n-1},\\ \frac{n-1}{2}\big{(}\frac{1}{2}-\frac{1}{q}\big{)}&\text{if }2\leq p\leq \frac{2(n+1)}{n-1}.\end{cases} \tag{7.1}\]
Then the \(\ell^{q}\) decoupling inequality for the sphere, from [4], is as follows.
**Theorem 7.3**.: _Let \(p,q\in[2,\infty)\) and \(\kappa,\varepsilon>0\). Then there exists a \(C\geq 0\) such that_
\[\|f\|_{L^{p}(\mathbb{R}^{n})}\leq CR^{d(p,q)+\varepsilon}\Big{(}\sum_{\nu\in V _{R}}\|\chi_{\nu}(D)f\|_{L^{p}(\mathbb{R}^{n})}^{q}\Big{)}^{1/q}\]
_for all \(R\geq 2\kappa\) and \(f\in\mathcal{S}(\mathbb{R}^{n})\) with \(\mathrm{supp}(\widehat{f}\,)\subseteq S_{R,\kappa}\)._
We can reinterpret this decoupling inequality as a bound for the \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) norm of functions with frequency support which is highly localized in a radial sense. For \(p\in(1,\infty)\), set
\[\alpha(p):=\begin{cases}\frac{n-1}{4}-\frac{n+1}{2p}&\text{if }p\geq\frac{2(n+1)}{n-1},\\ 0&\text{if }\frac{2(n+1)}{n+3}\leq p\leq\frac{2(n+1)}{n-1},\\ \frac{n+1}{2p}-\frac{n+3}{4}&\text{if }p\leq\frac{2(n+1)}{n+3},\end{cases} \tag{7.2}\]
and note that \(\alpha(p)=\alpha(p^{\prime})=d(p,q)-\frac{n-1}{2}(\frac{1}{2}-\frac{1}{q})\) for \(p\geq 2\).
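Indeed, writing \(\frac{1}{p^{\prime}}=1-\frac{1}{p}\), for \(p\geq\frac{2(n+1)}{n-1}\) (so that \(p^{\prime}\leq 2\frac{n+1}{n+3}\)) one computes
\[\alpha(p^{\prime})=\frac{n+1}{2p^{\prime}}-\frac{n+3}{4}=\frac{n+1}{2}\Big{(}1-\frac{1}{p}\Big{)}-\frac{n+3}{4}=\frac{n-1}{4}-\frac{n+1}{2p}=\alpha(p),\]
while for \(2\frac{n+1}{n+3}\leq p\leq\frac{2(n+1)}{n-1}\) both \(\alpha(p)\) and \(\alpha(p^{\prime})\) vanish.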
**Corollary 7.4**.: _Let \(p\in(1,\infty)\), \(q\in[2,\infty)\), \(s\in\mathbb{R}\) and \(\kappa,\varepsilon>0\). Then there exists a \(C\geq 0\) such that the following holds for all \(f\in\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) with \(\mathrm{supp}(\widehat{f})\subseteq\{\xi\in\mathbb{R}^{n}\mid R-\kappa\leq| \xi|\leq R+\kappa\}\) for some \(R\geq 2\kappa\)._
1. _If_ \(p\geq 2\)_, then_ (7.3) \[\|f\|_{W^{s-\alpha(p)-\varepsilon,p}(\mathbb{R}^{n})}\leq C\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})}.\]
2. _If_ \(p\leq 2\)_, then_ \[\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})}\leq C\|f\|_{W^{s+ \alpha(p)+\varepsilon,p}(\mathbb{R}^{n})}.\]
Proof.: For (1), we may assume without loss of generality that \(s=\alpha(p)+\varepsilon\). Moreover, by slightly enlarging \(\kappa\) and by applying Proposition 4.6, we may suppose that \(f\in\mathcal{S}(\mathbb{R}^{n})\).
Now set \(f_{R}(y):=R^{-n}f(y/R)\) for \(y\in\mathbb{R}^{n}\), and recall that the \(\chi_{\nu}\) are positively homogeneous of order zero. Then two changes of variables, Theorem 7.3 and Proposition 7.1 combine to yield
\[\|f\|_{L^{p}(\mathbb{R}^{n})} =R^{n/p^{\prime}}\|f_{R}\|_{L^{p}(\mathbb{R}^{n})}\lesssim R^{n/p^{\prime}+d(p,q)+\varepsilon}\Big{(}\sum_{\nu\in V_{R}}\|\chi_{\nu}(D)f_{R}\|^{q}_{L^{p}(\mathbb{R}^{n})}\Big{)}^{1/q}\] \[=R^{d(p,q)+\varepsilon}\Big{(}\sum_{\nu\in V_{R}}\|\chi_{\nu}(D)f\|^{q}_{L^{p}(\mathbb{R}^{n})}\Big{)}^{1/q}\eqsim\|f\|_{\mathcal{H}^{p,q;\alpha(p)+\varepsilon}_{\mathrm{dec}}(\mathbb{R}^{n})}.\]
This proves (1).
By duality, (2) follows from (1). More precisely, Propositions 4.8 and 4.6 yield \(\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})}\eqsim\sup|\langle f,g\rangle_{\mathbb{R}^{n}}|\), where the supremum is taken over all \(g\in\mathcal{S}(\mathbb{R}^{n})\) with \(\mathrm{supp}(\widehat{g})\subseteq\{\xi\in\mathbb{R}^{n}\mid R-2\kappa\leq| \xi|\leq R+2\kappa\}\) and \(\|g\|_{\mathcal{H}^{p^{\prime},q^{\prime};-s}_{\mathrm{dec}}(\mathbb{R}^ {n})}\leq 1\). Hence (1) yields
\[\|f\|_{\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})} \eqsim\sup|\langle f,g\rangle_{\mathbb{R}^{n}}|\lesssim\sup\|f\|_ {W^{s+\alpha(p)+\varepsilon,p}(\mathbb{R}^{n})}\|g\|_{W^{-s-\alpha(p^{\prime} )-\varepsilon,p^{\prime}}(\mathbb{R}^{n})}\] \[\lesssim\sup\|f\|_{W^{s+\alpha(p)+\varepsilon,p}(\mathbb{R}^{n})} \|g\|_{\mathcal{H}^{p^{\prime},q^{\prime};-s}_{\mathrm{dec}}(\mathbb{R}^{n})}= \|f\|_{W^{s+\alpha(p)+\varepsilon,p}(\mathbb{R}^{n})}.\qed\]
**Remark 7.5**.: For \(p\neq 2\), the estimates in (1) and (2) of Corollary 7.4 improve upon (1) and (2) of Theorem 6.2, which hold without any assumptions on the frequency support of \(f\). Indeed, for all \(p\in(2,\infty)\) one has
\[s(p^{\prime})-\alpha(p^{\prime})=s(p)-\alpha(p)=\begin{cases}s(p)&\text{if }2 \leq p\leq\frac{2(n+1)}{n-1},\\ \frac{1}{p}&\text{if }p\geq\frac{2(n+1)}{n-1}.\end{cases}\]
On the other hand, Corollary 7.4 yields less norm control than in the parabolically localized setting of Remark 7.2.
Also note that Corollary 7.4 is equivalent to Theorem 7.3, in the sense that the same rescaling argument as above allows one to deduce Theorem 7.3 from (7.3).
**Remark 7.6**.: The general decoupling inequality in [4], for compact \(C^{2}\) hypersurfaces with positive definite second fundamental form, follows from Theorem 7.3, by decomposing the surface into a finite number of small pieces. Hence Corollary 7.4 can also be used to reformulate this more general decoupling inequality.
### Decoupling for the cone
For \(R>0\), let \(W_{R}\) and \((\chi_{\omega})_{\omega\in W_{R}}\) be collections with the same properties as \(V_{R}\) and \((\chi_{\nu})_{\nu\in V_{R}}\), from Section 7.1, but with \(n\) replaced by \(n+1\). More precisely, \(W_{R}\subseteq S^{n}\) is a maximal collection of unit vectors in \(\mathbb{R}^{n+1}\) such that \(|\omega-\omega^{\prime}|\geq R^{-1/2}\) for all \(\omega,\omega^{\prime}\in W_{R}\), and \((\chi_{\omega})_{\omega\in W_{R}}\subseteq C^{\infty}(\mathbb{R}^{n+1}\setminus \{0\})\) is an associated partition of unity of functions that are positively homogeneous of order zero.
For \(\kappa>0\), let \(\Gamma_{R,\kappa}\subseteq\mathbb{R}^{n+1}\) be a \(\kappa R^{-1}\)-neighborhood of the truncated light cone
\[\{\zeta=(\xi,\tau)\in\mathbb{R}^{n}\times\mathbb{R}\mid 1/2\leq|\xi|=\tau\leq 2\}.\]
Then we can formulate the \(\ell^{q}\) decoupling inequality for the cone as follows, using the exponent \(d(p,q)\) from (7.1).
**Theorem 7.7**.: _Let \(p,q\in[2,\infty)\) and \(\kappa,\varepsilon>0\). Then there exists a \(C\geq 0\) such that_
\[\|f\|_{L^{p}(\mathbb{R}^{n+1})}\leq CR^{d(p,q)+\varepsilon}\Big{(}\sum_{\omega \in W_{R}}\|\chi_{\omega}(D)f\|_{L^{p}(\mathbb{R}^{n+1})}^{q}\Big{)}^{1/q}\]
_for all \(R\geq 2\kappa\) and \(f\in\mathcal{S}(\mathbb{R}^{n+1})\) with \(\operatorname{supp}(\widehat{f}\,)\subseteq\Gamma_{R,\kappa}\)._
Proof.: For \(\omega\in W_{R}\), set \(\theta_{\omega}:=\operatorname{supp}(\chi_{\omega})\cap\Gamma_{R,\kappa}\). Then \(\Gamma_{R,\kappa}=\cup_{\omega\in W_{R}}\theta_{\omega}\), and the \(\theta_{\omega}\) have finite overlap. Fix \(\omega=(\omega_{1},\ldots,\omega_{n+1})\in W_{R}\) such that \(\theta_{\omega}\neq\emptyset\), and suppose that only the first and last coordinates of \(\omega\) are nonzero. We claim that \(\theta_{\omega}\) is contained in a slab of dimensions approximately \(R^{-1}\times R^{-1/2}\times\cdots\times R^{-1/2}\times 1\), pointing along the light cone. By rotation, the required statement is then equivalent to the standard formulation of the \(\ell^{q}\) decoupling inequality for the cone, from [4].
To prove the claim, let \(\zeta=(\xi,\tau)\in\theta_{\omega}\), with \(\xi=(\xi_{1},\ldots,\xi_{n})\in\mathbb{R}^{n}\). Then, by assumption on \(\omega\) and because \(\zeta\in\operatorname{supp}(\chi_{\omega})\), one has
\[\Big{|}\frac{\xi_{1}}{|\zeta|}-\omega_{1}\Big{|}^{2}+\Big{|}\frac{\xi_{2}}{| \zeta|}\Big{|}^{2}+\ldots+\Big{|}\frac{\xi_{n}}{|\zeta|}\Big{|}^{2}\leq|\hat{ \zeta}-\omega|^{2}\leq\frac{4}{R}. \tag{7.4}\]
Moreover, \(|\zeta|\eqsim 1\), since \(\zeta\in\Gamma_{R,\kappa}\). Hence
\[|\xi_{2}|^{2}+\ldots+|\xi_{n}|^{2}\leq 4R^{-1}|\zeta|^{2}\eqsim R^{-1}.\]
It also follows from (7.4) that \(\omega\) points in the direction of the light cone. By combining all this, one sees that \(\theta_{\omega}\) is as claimed.
We will reinterpret this decoupling inequality as a bound for the \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n+1})\) norm of functions with frequency support near the light cone. More precisely, let
\[\Gamma:=\{\zeta=(\xi,\tau)\in\mathbb{R}^{n}\times\mathbb{R}\mid|\tau-|\xi|| \leq 1\}\subseteq\mathbb{R}^{n+1}\]
be a thickened version of the full light cone. For \(p\in(1,\infty)\), let \(\alpha(p)\) be as in (7.2).
**Corollary 7.8**.: _Let \(p\in(1,\infty)\), \(q\in[2,\infty)\), \(s\in\mathbb{R}\) and \(\kappa,\varepsilon>0\). Then there exists a \(C\geq 0\) such that the following holds for all \(f\in\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n+1})\) with_
\[\operatorname{supp}(\widehat{f}\,)\subseteq\Gamma\cap\{\zeta=(\xi,\tau)\in \mathbb{R}^{n}\times\mathbb{R}\mid R/2\leq|\xi|\leq 2R\}\]
_for some \(R\geq 2\)._
1. _If_ \(p\geq 2\)_, then_ (7.5) \[\|f\|_{W^{s-\alpha(p)-\varepsilon,p}(\mathbb{R}^{n+1})}\leq C\|f\|_{\mathcal{H }_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n+1})}.\]
2. _If_ \(p<2\)_, then_ \[\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n+1})}\leq C\|f\|_{W^{s+\alpha(p)+\varepsilon,p}(\mathbb{R}^{n+1})}.\]
Proof.: Apply a rescaling argument and duality, as in the proof of Corollary 7.4.
## 8. Regularity for wave equations
In this section we obtain new regularity results for the Euclidean wave equation. We first prove local smoothing estimates involving the function spaces for decoupling, and then we indicate how these estimates can be applied to nonlinear wave equations with initial data outside of \(L^{2}\)-based Sobolev spaces.
### Local smoothing
The following is our main result on local smoothing involving the function spaces for decoupling.
**Theorem 8.1**.: _Let \(p\in(2,\infty)\), \(q\in[2,\infty)\), \(s\in\mathbb{R}\) and \(\varepsilon>0\). Then there exists a \(C>0\) such that_
\[\Big{(}\int_{0}^{1}\|e^{it\sqrt{-\Delta}}f\|_{W^{s,p}(\mathbb{R}^{n})}^{p} \mathrm{d}t\Big{)}^{1/p}\leq C\|f\|_{\mathcal{H}_{\mathrm{dec}}^{p,q;s+\alpha( p)+\varepsilon}(\mathbb{R}^{n})} \tag{8.1}\]
_for all \(f\in\mathcal{H}_{\mathrm{dec}}^{p,q;s+\alpha(p)+\varepsilon}(\mathbb{R}^{n})\)._
Proof.: For \(p=q\), the statement is contained in [22, Theorem 4.4]. For \(q=2\), it is a consequence of [23, Theorem 1.1] and the standard embedding \(W^{s+\alpha(p)+\varepsilon,p}(\mathbb{R}^{n})\subseteq B_{p,p}^{s+\alpha(p)+ \varepsilon/2}(\mathbb{R}^{n})\). However, the proof of both these results in fact directly extends to general \(q\), upon using the \(\ell^{q}\) decoupling inequality for the light cone.
**Remark 8.2**.: The local smoothing conjecture for the Euclidean wave equation stipulates that for each \(\varepsilon>0\) one has
\[\Big{(}\int_{0}^{1}\|e^{it\sqrt{-\Delta}}f\|_{W^{s,p}(\mathbb{R}^{n})}^{p} \mathrm{d}t\Big{)}^{1/p}\lesssim\|f\|_{W^{s+\sigma(p)+\varepsilon,p}(\mathbb{ R}^{n})}, \tag{8.2}\]
for an implicit constant independent of \(f\in W^{s+\sigma(p)+\varepsilon,p}(\mathbb{R}^{n})\). Here \(\sigma(p):=0\) for \(2<p\leq 2n/(n-1)\), and \(\sigma(p):=2s(p)-1/p\) for \(p\geq 2n/(n-1)\). By the Sobolev embeddings for \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) in Theorem 6.2 (1), (8.1) improves upon (8.2) for \(p\geq 2(n+1)/(n-1)\) and \(2\leq q\leq p\). Moreover, by the sharpness of the Sobolev embedding in Theorem 6.2 (2), (8.1) in fact yields a strict improvement of (8.2) for such \(p\) and \(q\). On the other hand, for \(2<p<2(n+1)/(n-1)\) and \(q\in[2,p]\), (8.1) neither follows from (8.2), nor vice versa. In particular, due to the sharpness of the Sobolev embedding in Theorem 6.2 (2), (8.1) yields sharper estimates than (8.2) for certain initial data (see also Remark 7.2). Here it is relevant to note that the exponents \(\alpha(p)\) in (8.1) and \(\sigma(p)\) in (8.2) are sharp, for all \(2<p<\infty\) and \(2\leq q\leq p\). In the case of (8.1), this follows from Theorem 6.3 (6.1) and [22, Theorem 5.3].
We also note that, at least when restricted to dyadic frequency annuli, \(\mathcal{H}_{\mathrm{dec}}^{p,q;s}(\mathbb{R}^{n})\) is the largest space of initial data for which one can obtain local smoothing when applying the \(\ell^{q}\) decoupling inequality in the manner in which it is typically used (for more on this see [22, 23]).
**Remark 8.3**.: By Theorem 6.3, one has \(W^{1/2,2}(\mathbb{R}^{n})\subsetneq\mathcal{H}_{\mathrm{dec}}^{p,2;0}(\mathbb{ R}^{n})\) for \(p=2(n+1)/(n-1)\). Given that \(\alpha(p)=0\), Theorem 8.1 therefore almost yields a strict improvement of the classical Strichartz estimate
\[\Big{(}\int_{0}^{1}\|e^{it\sqrt{-\Delta}}f\|_{L^{p}(\mathbb{R}^{n})}^{p} \mathrm{d}t\Big{)}^{1/p}\leq C\|f\|_{W^{1/2,2}(\mathbb{R}^{n})}. \tag{8.3}\]
In fact, by the sharpness of the Sobolev embeddings in Theorem 6.3, Theorem 8.1 already complements (8.3). More precisely, for this specific \(p\), Theorem 8.1 yields sharper estimates than (8.3) for a large class of initial data, while also allowing for initial data in \(L^{p}\)-based Sobolev spaces.
### Nonlinear wave equations
In this subsection we indicate how our results can be applied to nonlinear wave equations with initial data in \(\mathcal{H}^{p,q;s}_{\rm dec}(\mathbb{R}^{n})\) for \(p>2\). Such initial data might be referred to as "slowly decaying", due to the fact that such a function may decay slower at infinity than an \(L^{2}(\mathbb{R}^{n})\) function does. On the other hand, even for compactly supported initial data, the results presented here show that, assuming additional integrability beyond that of an \(L^{2}(\mathbb{R}^{n})\) function, one can obtain well-posedness statements for rougher initial data than one obtains from Strichartz estimates (for more on this see [20, Section 1.3]).
Our results and proofs for nonlinear wave equations are analogous to those in [23] (see also [20, 26]). In particular, here we will only consider the cubic nonlinear wave equation in dimension \(n=2\), although a similar approach can be used in other dimensions and for different nonlinearities.
Consider the Cauchy problem for the cubic nonlinear wave equation on \(\mathbb{R}^{2}\times\mathbb{R}\):
\[\begin{cases}(\partial_{t}^{2}-\Delta)u(x,t)=\pm|u(x,t)|^{2}u(x,t),\\ u(x,0)=f(x),\ \partial_{t}u(x,0)=g(x).\end{cases} \tag{8.4}\]
Our main result concerning (8.4) is as follows.
**Theorem 8.4**.: _Let \(q\in[2,\infty]\) and \(\varepsilon,T>0\). Then (8.4) is quantitatively well posed with initial data space_
\[X=(\mathcal{H}^{6,q;\varepsilon}_{\rm dec}(\mathbb{R}^{2})+W^{1/2,2}(\mathbb{R }^{2}))\times(\mathcal{H}^{6,q;\varepsilon-1}_{\rm dec}(\mathbb{R}^{2})+W ^{-1/2,2}(\mathbb{R}^{2})) \tag{8.5}\]
_and solution space_
\[S_{T}=L^{4}\big{(}[0,T];L^{6}(\mathbb{R}^{2})\big{)}\cap C\big{(}[0,T];\mathcal{ H}^{6,q;\varepsilon}_{\rm dec}(\mathbb{R}^{2})+W^{1/2,2}(\mathbb{R}^{2})\big{)}. \tag{8.6}\]
_Moreover, (8.4) is also quantitatively well posed with initial data space_
\[X=(\mathcal{H}^{4,q;\varepsilon}_{\rm dec}(\mathbb{R}^{2})+W^{3/8,2}(\mathbb{R }^{2}))\times(\mathcal{H}^{4,q;\varepsilon-1}_{\rm dec}(\mathbb{R}^{2})+W ^{-5/8,2}(\mathbb{R}^{2})) \tag{8.7}\]
_and solution space_
\[S_{T}=L^{24/7}\big{(}[0,T];L^{4}(\mathbb{R}^{2})\big{)}\cap C\big{(}[0,T]; \mathcal{H}^{4,q;\varepsilon}_{\rm dec}(\mathbb{R}^{2})+W^{3/8,2}(\mathbb{R}^ {2})\big{)}. \tag{8.8}\]
Our notion of quantitative well-posedness is taken from [1], and the definition is recalled in the proof below. Via a fixed-point argument, it implies that there exists a \(\delta=\delta(T)>0\) such that, if \(\|(f,g)\|_{X}<\delta\), then (8.4) has a unique solution \(u\in S_{T}\), and this solution depends analytically on the initial data. Moreover, in Theorem 8.4, for all \((f,g)\in X\) there exists a \(T>0\) such that there is a unique solution \(u\in S_{T}\) to (8.4).
Proof.: The proof is almost completely analogous to that of [23, Theorem 1.4], although we rely on Theorems 8.1 and 5.1 instead of results about adapted Besov spaces. We briefly sketch the idea (see also [20, Section 8]).
Write (8.4) as
\[u=L(f,g)+N(u,u,u),\]
with
\[L(f,g)(t) :=\cos(t\sqrt{-\Delta})f+\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}g,\] \[N(u_{1},u_{2},u_{3})(t) :=\pm\int_{0}^{t}\frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}u_{1}(s)\overline{u_{2}(s)}u_{3}(s)\mathrm{d}s,\]
for \(u_{1},u_{2},u_{3}\in S_{T}\) and \(t\in[0,T]\). We then say that (8.4) is quantitatively well posed if
\[\|L(f,g)\|_{S_{T}} \leq C\|(f,g)\|_{X}, \tag{8.10}\] \[\|N(u_{1},u_{2},u_{3})\|_{S_{T}} \leq C\prod_{j=1}^{3}\|u_{j}\|_{S_{T}}, \tag{8.9}\]
for some \(C\geq 0\) independent of \(f,g\in X\) and \(u_{1},u_{2},u_{3}\in S_{T}\).
Now, to prove (8.9) one can rely on Theorems 8.1 and 5.1 for initial data in \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{2})\), and homogeneous Strichartz estimates (cf. [18]) for initial data in \(W^{s,2}(\mathbb{R}^{2})\). On the other hand, the proof of (8.10) relies on Hölder's inequality and inhomogeneous Strichartz estimates. We refer to [23] for details.
One can equally well extend other results in [23] to initial data in \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\). In particular, if \(\varepsilon>1/2\) in Theorem 8.4, then one obtains global existence for the defocusing equation with the initial data space in (8.5) and the solution space in (8.6).
**Remark 8.5**.: The main difference between Theorem 8.4 and the results in [23] concerns the second inclusion in (8.6) and (8.8). Although standard embeddings between Besov and Sobolev spaces, combined with the \(\varepsilon\) loss in (8.5) and (8.7), imply that the spaces of initial data in (8.5) and (8.7) are not fundamentally larger than those in [23], or vice versa, \(\mathcal{H}^{p,q;s}_{\mathrm{dec}}(\mathbb{R}^{n})\) satisfies sharp embeddings into the \(L^{p}\)-based Sobolev scale, cf. Theorem 6.2. By contrast, the adapted Besov spaces from [23] embed in a sharp manner into the standard Besov scale. Hence Theorems 8.4 and 6.2 show that \(L^{p}\) regularity of the initial data is pointwise preserved in an optimal sense, whereas [23, Theorem 1.4] yields the corresponding statement for Besov regularity.
## Acknowledgments
Yung would like to thank Andreas Seeger for a helpful discussion that inspired the proof of Theorem 6.2.
|
2307.10910 | Maximal colourings for graphs | We consider two different notions of graph colouring, namely, the
$t$-periodic colouring for vertices that has been introduced in 1974 by Bondy
and Simonovits, and the periodic colouring for oriented edges that has been
recently introduced in the context of spectral theory of non-backtracking
operators. For each of these two colourings, we introduce the corresponding
colouring number which is given by maximising the possible number of colours.
We first investigate these two new colouring numbers individually, and we then
show that there is a deep relationship between them. | Raffaella Mulas | 2023-07-20T14:31:41Z | http://arxiv.org/abs/2307.10910v3 | # Maximal colourings for graphs
###### Abstract
We consider two different notions of graph colouring, namely, the \(t\)-periodic colouring for vertices that has been introduced in 1974 by Bondy and Simonovits, and the periodic colouring for oriented edges that has been recently introduced in the context of spectral theory of non-backtracking operators. For each of these two colourings, we introduce the corresponding colouring number which is given by maximising the possible number of colours. We first investigate these two new colouring numbers individually, and we then show that there is a deep relationship between them.
**Keywords:** colouring numbers; periodic colouring; circularly partite graphs
_Coming, colours in the air_
_Oh, everywhere_
_She comes in colours_
--The Rolling Stones, _She's a Rainbow_
## 1 Introduction
### Historical note
While graph theory was born in 1736, when Leonhard Euler solved the Königsberg Seven Bridges Problem [12], the history of graph colouring started in 1852, when the South African mathematician and botanist Francis Guthrie formulated the Four Colour Problem [19, 23, 36, 37]. Francis Guthrie noticed that, when colouring a map of the counties of England, one needed at least four distinct colours if two regions sharing a common border could not have the same colour. Moreover, he conjectured (and tried to prove) that four colours were sufficient to colour any map in this way. His brother, Frederick Guthrie, supported him by sharing his work with Augustus De Morgan, of whom he was a student at the time, and De Morgan immediately showed his interest in the problem [16]. On October 23, 1852, De Morgan presented Francis Guthrie's conjecture in a letter to Sir William Rowan Hamilton, in which he wrote:
_The more I think of it the more evident it seems._
But Hamilton replied:
_I am not likely to attempt your quaternion of colour very soon._
De Morgan then tried to get other mathematicians interested in the conjecture, and it eventually became one of the most famous open problems in graph theory and mathematics for more than a century. After several failed attempts at solving the problem, Francis Guthrie's conjecture was proved to be true in 1976, by Kenneth Appel and Wolfgang Haken, with the first computer-assisted proof in history [1].
The Four Colour Theorem can be equivalently described in the language of graph theory as follows. Let \(G=(V,E)\) be a _simple graph_, that is, an undirected, unweighted graph without multi-edges and without loops. Two distinct vertices \(v\) and \(w\) are called _adjacent_, denoted \(v\sim w\) or \(w\sim v\), if \(\{v,w\}\in E\). A _\(k\)-colouring of the vertices_ is a function \(c:V\to\{1,\ldots,k\}\), and it is _proper_ if \(v\sim w\) implies \(c(v)\neq c(w)\). The _vertex colouring number_\(\chi=\chi(G)\) is the minimum \(k\) such that there exists a proper \(k\)-colouring of the vertices. Moreover, the graph \(G\) is called _planar_ if it can be embedded in the plane, that is, it can be drawn on the plane in such a way that its edges intersect only at their endpoints.
**Theorem 1.1** (Four Colour Theorem, 1976).: _If \(G\) is a planar simple graph, then \(\chi\leq 4\)._
Despite the huge importance of this result, quoting William Thomas Tutte [34],
_The Four Colour Theorem is the tip of the iceberg, the thin end of the wedge and the first cuckoo of Spring._
In fact, the study of the vertex colouring number \(\chi\) has shown to be interesting also for several other problems in graph theory, as well as for applications to partitioning problems. Moreover, other notions of colouring have been introduced, and each of them has led to numerous challenging problems, many of which are beautifully summarised in [19, 34].
For instance, another common notion in the literature is the following. Given a simple graph \(G=(V,E)\), a _proper \(k\)-colouring of the edges_ is a function \(c:E\to\{1,\ldots,k\}\) such that, if the distinct edges \(e\) and \(f\) have one endpoint in common, then \(c(e)\neq c(f)\). The _edge colouring number_\(\chi^{*}=\chi^{*}(G)\) is the minimum \(k\) such that there exists a proper \(k\)-colouring of the edges. In 1964, Vadim Georgievich Vizing [35] proved the following result:
**Theorem 1.2** (Vizing's Theorem, 1964).: _If \(G\) is a simple graph with maximum vertex degree \(\Delta\), then_
\[\chi^{*}\in\{\Delta,\Delta+1\}.\]
Graphs with \(\chi^{*}=\Delta\) are said to be of _class one,_ and graphs with \(\chi^{*}=\Delta+1\) are said to be of _class two._
### Setting and aim of this paper
Here we shall focus on two different notions of periodic colouring for a simple graph \(G=(V,E)\), as well as on their connection. One of them is the notion of \(t\)-periodic colouring for vertices that has been introduced by John Adrian Bondy and Miklós Simonovits in 1974 [6] (cf. Definition 5.4.3 in [11]). Before defining it, we recall the following
**Definition 1.3**.: A _path_ of length \(\ell\geq 1\) is a sequence
\[v_{1},\ldots,v_{\ell},v_{\ell+1}\]
of distinct vertices such that \(v_{j}\sim v_{j+1}\) for each \(j\in\{1,\ldots,\ell\}\). A _cycle_ of length \(\ell\geq 3\) is a path where \(v_{\ell+1}=v_{1}\).
**Definition 1.4**.: A (not necessarily proper) colouring of the vertices of \(G\) is \(t\)_-periodic_ if any path of length \(t\) in \(G\) has endpoints of the same colour.
Bondy and Simonovits used this notion as a tool to solve a conjecture of Paul Erdős and prove that, if \(G\) has \(N\) nodes and at least \(100cN^{1+1/c}\) edges, then \(G\) contains a cycle of length \(2\ell\), for every \(\ell\in[c,cN^{1/c}]\). In particular, they showed that, for any \(t\), if \(G\) has average degree \(2|E|/|V|\) which is \(\geq 4\), then any \(t\)-periodic colouring of \(G\) has at most \(2\) colours (see also Lemma 5.4.4 in [11]). However, they did not study properties of the \(t\)-periodic colouring which were not needed for their main result.
Here we introduce and investigate the following colouring number:
**Definition 1.5**.: The _vertex \(t\)-periodic colouring number_\(\chi_{t}=\chi_{t}(G)\) is the largest \(k\) such that \(G\) has a \(t\)-periodic colouring of the vertices with \(k\) colours.
Clearly, for each \(k\leq\chi_{t}\), \(G\) has a \(t\)-periodic colouring with \(k\) colours. This is why it is appropriate to consider the maximum number of such colours, as opposed to the definitions of \(\chi\) and \(\chi^{*}\), where the minimum number of colours is taken.
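As an illustrative aside (a minimal brute-force sketch of our own; the helper name `chi_t` and the adjacency-dictionary input are conventions chosen here, not part of the original treatment): any \(t\)-periodic colouring must identify the endpoints of every path of length \(t\), and giving each class of the resulting finest partition its own colour is again \(t\)-periodic, so \(\chi_{t}\) equals the number of classes of that partition. For small graphs this can be computed directly:

```python
def chi_t(adj, t):
    """Vertex t-periodic colouring number of a small connected graph.

    adj maps each vertex to the set of its neighbours. Endpoints of every
    simple path of length t are merged; chi_t is the number of classes of
    the resulting finest partition (each class then gets its own colour).
    """
    parent = {v: v for v in adj}

    def find(v):                      # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def dfs(path):                    # enumerate simple paths from path[0]
        if len(path) == t + 1:        # t edges: identify the two endpoints
            parent[find(path[0])] = find(path[-1])
            return
        for w in adj[path[-1]]:
            if w not in path:
                dfs(path + [w])

    for v in adj:
        dfs([v])
    return len({find(v) for v in adj})

# On the cycle graph C_6 one gets chi_t(C_6, 4) = 2 (= gcd(4, 6),
# anticipating Theorem 4.5):
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
assert chi_t(C6, 4) == 2
```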
Now, choosing an _orientation_ for an edge \(\{v,w\}\in E\) means letting one of its endpoints be its _input_ and the other one be its _output_. We let \(e=[v,w]\) denote the oriented edge whose input is \(v\) and whose output is \(w\). Moreover, we let
\[\mathcal{O}:=\{[v,w],[w,v]\,:\,v\sim w\}\]
denote the set of oriented edges of \(G\).
Before introducing the other notion of periodic colouring that we shall focus on, we recall that a graph is called \(k\)_-partite_ if it admits a proper \(k\)-colouring of the vertices.
**Definition 1.6** ([26]).: Given \(k\in\mathbb{N}_{\geq 1}\), \(G\) is _circularly \(k\)-partite_ if the set of its oriented edges can be partitioned as \(\mathcal{O}=\mathcal{O}_{1}\sqcup\ldots\sqcup\mathcal{O}_{k}\), where the sets \(\mathcal{O}_{j}\) are non-empty and satisfy the property that
\[[v,w]\in\mathcal{O}_{j}\Longrightarrow[w,z]\in\mathcal{O}_{j+1},\,\forall z \sim w\,:\,z\neq v,\]
where we also let \(\mathcal{O}_{0}:=\mathcal{O}_{k}\) and \(\mathcal{O}_{k+1}:=\mathcal{O}_{1}\).
In particular, if \(G\) is circularly \(k\)-partite, then the corresponding partition \(\mathcal{O}=\mathcal{O}_{1}\sqcup\ldots\sqcup\mathcal{O}_{k}\) of the oriented edges naturally defines a \(k\)-colouring \(c:\mathcal{O}\rightarrow\{1,\ldots,k\}\) of the oriented edges by letting
\[c([v,w])=j\iff[v,w]\in\mathcal{O}_{j}.\]
We can also see this colouring as a function \(c:\mathcal{O}\rightarrow\mathbb{Z}_{k}\), where \(\mathbb{Z}_{k}\) is the additive group of integers modulo \(k\), and it is _periodic_ in the sense that it satisfies
\[c([v,w])=j\Longrightarrow c([w,z])=j+1,\,\forall z\sim w\,:\,z\neq v.\]
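Computationally, this periodicity condition is straightforward to verify for a candidate colouring. The following is a minimal sketch (our own conventions, with colours taken in \(\{0,\ldots,k-1\}\simeq\mathbb{Z}_{k}\)); it also checks the non-emptiness requirement of Definition 1.6:

```python
def is_periodic_colouring(adj, c, k):
    """Check that c is a periodic k-colouring of the oriented edges.

    adj maps each vertex to its set of neighbours; c maps each ordered
    pair (v, w) with v ~ w to a colour in {0, ..., k-1}.
    """
    for v in adj:
        for w in adj[v]:
            for z in adj[w]:
                # successor constraint: c([w,z]) = c([v,w]) + 1 (mod k)
                if z != v and c[(w, z)] != (c[(v, w)] + 1) % k:
                    return False
    # every colour class O_1, ..., O_k must be non-empty
    return len(set(c.values())) == k
```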
Circularly partite graphs have been introduced in [26] in the following context. Given a simple graph \(G\) as above, its _non-backtracking graph_ is the directed graph \(\mathcal{G}\) whose vertex set is the set \(\mathcal{O}\) of oriented edges of \(G\), and such that there is a directed edge from \([v_{1},v_{2}]\in\mathcal{O}\) to \([w_{1},w_{2}]\in\mathcal{O}\) if and only if \(v_{2}=w_{1}\) and \(v_{1}\neq w_{2}\). The _non-backtracking matrix_ of \(G\) is defined as
the transpose of the adjacency matrix of \(\mathcal{G}\). It has been introduced by Ki-ichiro Hashimoto in 1989 [17], and it has been investigated since then in many areas of graph theory and network science (see for instance [2, 3, 4, 5, 7, 8, 9, 10, 13, 14, 15, 18, 21, 24, 25, 27, 28, 29, 30, 31, 32, 20, 33]). Recently, in [20], Jürgen Jost, Leo Torres and the author of this paper introduced the _non-backtracking Laplacian_ of \(G\) as the normalized Laplacian \(\mathcal{L}\) of \(\mathcal{G}\), and they showed that the spectral properties of this new operator are, in some ways, more precise than those of the non-backtracking matrix. In [20] it is shown, in particular, that the eigenvalues of the non-backtracking Laplacian \(\mathcal{L}\) are contained in the complex disc \(D(1,1)\), and the _spectral gap from \(1\)_ is defined as the smallest distance \(\varepsilon\) between the eigenvalues of \(\mathcal{L}\) and the point \(1\) on the real line. Moreover, in [20], also a sharp lower bound for \(\varepsilon\) is provided, but no upper bound is given. As a continuation of this work, in [26], Dong Zhang, Giulio Zucal and the author of this paper proved additional spectral properties of the non-backtracking Laplacian, including a sharp upper bound for \(\varepsilon\). And in order to prove this upper bound (Theorem 5.1 in [26]), they had to introduce the above class of circularly partite graphs.
Here we introduce also the following colouring number:
**Definition 1.7**.: The _oriented edge periodic colouring number_\(\chi^{o}=\chi^{o}(G)\) is the largest \(k\) such that \(G\) is circularly \(k\)-partite.
We shall see why, as for \(\chi_{t}\), also in this case it is natural to consider the maximum number of colours, and we shall investigate various properties of \(\chi^{o}\). Moreover, we shall see that there is a deep connection between \(\chi_{t}\) and \(\chi^{o}\), despite the fact that their definitions are quite different, and they arose in two unrelated contexts.
Throughout the paper, we shall consider simple graphs, and we shall assume that all graphs are _connected_.
### Structure of the paper
In Section 2 we give a characterisation of circularly \(k\)-partite graphs that does not involve the set of oriented edges. Then, in Section 3 we prove various properties of the oriented edge periodic colouring number \(\chi^{o}\), and similarly, in Section 4 we investigate properties of the vertex \(t\)-periodic colouring number \(\chi_{t}\). Finally, in Section 5, we study the relationship between \(\chi^{o}\) and \(\chi_{t}\).
## 2 Characterisation of circularly partite graphs
As pointed out already in [26], all graphs are circularly \(1\)-partite, and a graph is circularly \(2\)-partite if and only if it is bipartite. An example of a circularly \(3\)-partite graph is given in Figure 1 below.
We shall now give a characterisation of all circularly \(k\)-partite graphs that does not require looking at the set of oriented edges. Before, we need to define extended star graphs, as well as the diameter of a graph.
**Definition 2.1**.: A graph \(G\) is an _extended star graph_ if it is a tree (that is, it has no cycles) and it has exactly one vertex of degree \(\geq 3\).
For instance, star graphs with at least \(3\) edges are extended star graphs.
**Definition 2.2**.: The _diameter_ of a graph is the maximum distance between two of its vertices, that is, the length of a longest shortest path.
We can characterise all circularly \(k\)-partite graphs as follows.
**Theorem 2.3**.:
1. _The path graph of length_ \(M\) _is circularly_ \(k\)_-partite for each_ \(k\in\{1,\ldots,2M\}\)_._
2. _An extended star graph of diameter_ \(M\) _is circularly_ \(k\)_-partite for each_ \(k\in\{1,\ldots,M\}\)_._
3. _For_ \(k\geq 1\)_, a graph_ \(G\) _which is neither a path nor an extended star graph is circularly_ \(k\)_-partite if and only if the following two conditions hold:_ * _The length of each cycle is a multiple of_ \(k\)_, and_ * _The length_ \(\ell\) _of any path connecting two distinct vertices of degree_ \(\geq 3\) _is such that_ \(2\ell\) _is a multiple of_ \(k\)_._
Proof.: The first two claims are straightforward, and so is the third claim for \(k=1\), since all graphs are circularly \(1\)-partite and the two conditions are satisfied by all graphs for \(k=1\). For \(k=2\), we know that a graph is circularly \(2\)-partite if and only if it is bipartite, therefore if and only if all cycles have even length. Together with the fact that \(2\ell\) is always a multiple of \(2\), this proves the claim for \(k=2\).
Now, fix \(k\geq 3\) and let \(G\) be a circularly \(k\)-partite graph which is not a path. Clearly, the length of each cycle in \(G\) must be a multiple of \(k\). To see that also the second condition holds, assume that there is a path of length \(\ell\) between two distinct vertices \(v\) and \(w\) of degree \(\geq 3\). Assume also, without loss of generality, that one of the oriented edges that have \(v\) as an input belongs to \(\mathcal{O}_{1}\). Then, since \(\deg v\geq 3\), all oriented edges that have \(v\) as an input must be in \(\mathcal{O}_{1}\), and all oriented edges that have \(v\) as an output must be in \(\mathcal{O}_{k}\). This leads us to the situation depicted in Figure 2, where we also let \(\mathcal{O}_{k+r}:=\mathcal{O}_{r}\) for any \(r\). In particular, we have an oriented edge in \(\mathcal{O}_{2\ell}\) that has \(v\) as output, but we know that all oriented edges that have \(v\) as output belong to \(\mathcal{O}_{k}\). This implies that \(2\ell\) must be a multiple of \(k\). The fact that, vice versa, a graph satisfying conditions 1 and 2 is circularly \(k\)-partite, is also clear from Figure 2.
By Theorem 2.3, it is straightforward to see for example that a Mickey Mouse graph (Figure 3) with both ears of length \(2k\), the face of length \(5k\) and distance \(k\) between the ears is circularly \(k\)-partite.
Moreover, two immediate consequences of Theorem 2.3 are the following corollaries.
**Corollary 2.4**.: _Let \(G\) be a circularly \(k\)-partite graph which is not a path graph. If we know to which set \(\mathcal{O}_{j}\) a given oriented edge \([v,w]\) belongs to, then by induction we can also infer to which set each oriented edge belongs to._
Figure 1: A circularly \(3\)-partite graph.
**Corollary 2.5**.: _If a graph is circularly \(k\)-partite, then it is also circularly \(\ell\)-partite, for each number \(\ell\) that divides \(k\)._
Hence, it is natural to ask what the largest \(k\) such that a graph is circularly \(k\)-partite is.
## 3 Oriented edge periodic colouring number
In this section we focus on the _oriented edge periodic colouring number_\(\chi^{o}=\chi^{o}(G)\), which we defined as the largest \(k\) such that \(G\) is circularly \(k\)-partite.
We start by considering the following examples.
* For the path graph of length \(M\), \(\chi^{o}=2M\).
* For star graphs, \(\chi^{o}=2\).
* More generally, for an extended star graph of diameter \(M\), \(\chi^{o}=M\).
* For the cycle graph on \(N\) nodes, \(\chi^{o}=N\).
* The _petal graph_ with \(\ell\geq 1\) petals of length \(k\geq 3\) is the graph given by \(\ell\) cycle graphs of length \(k\), all having a common central vertex. Clearly, for a petal graph whose petals have length \(k\), we have \(\chi^{o}=k\).
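The values listed above can also be reproduced computationally. The following brute-force sketch (our own conventions) assumes a connected input graph containing at least one cycle, so that the tree cases (1)-(2) of Theorem 2.3 do not occur; for each candidate \(k\) it propagates colours over the oriented edges, reading the constraint \(c([w,z])=c([v,w])+1\) in both directions, and returns the largest \(k\) admitting a consistent colouring with all colour classes non-empty.

```python
from collections import deque

def chi_o(adj):
    """Oriented edge periodic colouring number, by brute force.

    adj maps each vertex of a connected graph containing a cycle to the
    set of its neighbours. Seeding every constraint component with colour
    0 is harmless here: a component containing a cycle already meets all
    k colour classes whenever the propagation is consistent.
    """
    arcs = [(v, w) for v in adj for w in adj[v]]

    def works(k):
        colour = {}
        for start in arcs:
            if start in colour:
                continue
            colour[start] = 0
            queue = deque([start])
            while queue:
                v, w = queue.popleft()
                cur = colour[(v, w)]
                # successors (w,z) of (v,w), and predecessors (x,v) of (v,w)
                forced = [((w, z), (cur + 1) % k) for z in adj[w] if z != v]
                forced += [((x, v), (cur - 1) % k) for x in adj[v] if x != w]
                for e, ce in forced:
                    if e not in colour:
                        colour[e] = ce
                        queue.append(e)
                    elif colour[e] != ce:   # inconsistent colouring
                        return False
        return len(set(colour.values())) == k

    return max(k for k in range(1, len(adj) + 1) if works(k))

# chi_o(C_N) = N, e.g. for the cycle graph C_5:
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert chi_o(C5) == 5
```

Consistently with Theorem 3.1 below, the candidate values \(k\) accepted by `works` are, for graphs that are neither paths nor extended star graphs, exactly the divisors of the returned number.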
We can now improve Corollary 2.5 as follows.
**Theorem 3.1**.: _If \(G\) is neither a path nor an extended star graph, then for every \(k\geq 1\),_
\[\text{G is circularly $k$-partite}\ \ \Longleftrightarrow\ k\text{ divides $\chi^{o}$.}\]
Figure 3: A Mickey Mouse graph is circularly \(k\)-partite.
Figure 2: An illustration of the proof of Theorem 2.3.
Proof.: The (\(\Leftarrow\)) follows from Corollary 2.5. Now, assume that \(G\) is circularly \(k\)-partite. Then, by definition of oriented edge periodic colouring number, we must have \(k\leq\chi^{o}\). Assume also, by contradiction, that \(k\) does not divide \(\chi^{o}\). Then, \(k>1\) and, by Theorem 2.3,
* The length of each cycle is a multiple of both \(\chi^{o}\) and \(k\), and
* The length \(\ell\) of any path connecting two distinct vertices of degree \(\geq 3\) is such that \(2\ell\) is a multiple of both \(\chi^{o}\) and \(k\).
This implies that
* The length of each cycle is a multiple of \(\mathrm{lcm}(k,\chi^{o})\), and
* The length \(\ell\) of any path connecting two distinct vertices of degree \(\geq 3\) is such that \(2\ell\) is a multiple of \(\mathrm{lcm}(k,\chi^{o})\).
Therefore, again by Theorem 2.3, \(G\) is circularly \(\mathrm{lcm}(k,\chi^{o})\)-partite. But since \(k\) does not divide \(\chi^{o}\), we have \(\mathrm{lcm}(k,\chi^{o})>\chi^{o}\), which leads to a contradiction.
**Lemma 3.2**.: _Let \(q,r\in\mathbb{N}_{\geq 3}\) be two relatively prime numbers. If \(G\) has at least one cycle of length \(q\) and at least one cycle of length \(r\), then \(\chi^{o}=1\)._
Proof.: It follows from Theorem 2.3.
For example, the graphs in Figure 4 only differ by one edge subdivision. However, by Lemma 3.2, the first graph is such that \(\chi^{o}=1\), since the inner cycles have length \(3\) while the outer cycle has length \(4\). On the other hand, by Theorem 2.3, the second graph is such that \(\chi^{o}=4\).
An immediate consequence of Lemma 3.2 is the following
**Corollary 3.3**.: _Let \(K_{N}\) denote the complete graph on \(N\) nodes, and let \(G\) be a graph on \(N\) nodes. If \(K_{4}\) is a subgraph of \(G\), then \(\chi^{o}(G)=1\). In particular, if \(N\geq 4\), then \(\chi^{o}(K_{N})=1\)._
Hence, for \(N\geq 4\), the complete graph \(K_{N}\) has the smallest possible oriented edge periodic colouring number, since \(\chi^{o}(K_{N})=1\), and the largest possible vertex colouring number, since \(\chi(K_{N})=N\).
**Proposition 3.4**.: _Let \(d\geq 3\). If \(G\) is \(d\)-regular, then \(\chi^{o}\in\{1,2\}\)._
Proof.: Clearly, by the assumptions, \(G\) is neither a path nor an extended star graph. Hence, by Theorem 2.3 and since all vertices have degree \(d\geq 3\), the length \(\ell\) of any path connecting two distinct vertices is such that \(2\ell\) is a multiple of \(\chi^{o}\). This is also true for the distance \(\ell=1\) between two adjacent vertices. Therefore, \(2\) must be a multiple of \(\chi^{o}\), implying that \(\chi^{o}\leq 2\).
As a consequence of Proposition 3.4, cycle graphs are the only regular graphs for which \(\chi^{o}>2\).
Next, we shall evaluate the vertex and edge colouring number for all graphs that have \(\chi^{o}>1\).
Observe that, for an extended star graph of diameter \(M\), \(\chi^{o}=M\) and \(\chi=2\). The colouring number of all other graphs that have \(\chi^{o}>1\) is given in the next theorem.
**Theorem 3.5**.: _If \(G\) is not an extended star graph, then_
1. \(\chi^{o}\) _is even if and only if_ \(\chi=2\)_._
2. _If_ \(\chi^{o}>1\) _is odd, then_ \(\chi=3\)_._
Proof.:
1. We know that \[\chi=2\iff G\text{ is bipartite}\iff G\text{ is circularly 2-partite}.\] By Theorem 3.1, if \(G\) is neither a path nor an extended star graph, this happens if and only if \(2\) divides \(\chi^{o}\), that is, \(\chi^{o}\) is even. If \(G\) is a path graph of length \(M\), the claim follows from the fact that \(\chi=2\) and \(\chi^{o}=2M\) is always even.
2. By (a), \(G\) is not bipartite, therefore \(\chi\geq 3\). Hence, if we show that there exists a proper \(3\)-colouring of the vertices we are done, as this implies that \(\chi\leq 3\). Since \(G\) is neither a path (as it is not bipartite) nor an extended star graph (by assumption), it is described by the third point in Theorem 2.3, for \(k=\chi^{o}\). Therefore, the length \(\ell\) of any path connecting two distinct vertices of degree \(\geq 3\) must be such that \(2\ell\) is a multiple of \(\chi^{o}\). But since \(\chi^{o}\) is odd, this implies that \(\ell\) must be a multiple of \(\chi^{o}\). Hence, since \(\chi^{o}>1\), also \(\ell>1\). In particular, two vertices of degree \(\geq 3\) cannot be adjacent, as their distance must be greater than \(1\). Therefore, we can colour all vertices of degree \(\geq 3\) with the same colour. Since all other vertices have degree \(1\) or \(2\), it is enough to colour them with at most two additional colours, to obtain a proper \(3\)-colouring of the entire vertex set. This proves the claim.
The converse of Theorem 3.5(b) does not hold. For instance, let \(G\) be the graph given by a cycle of length \(3\) and a cycle of length \(4\) attached at a common vertex. Then, \(\chi=3\) and, by Lemma 3.2, \(\chi^{o}=1\).
We now consider the edge colouring number. We shall prove that all graphs that have \(\chi^{o}>1\) are of class one.
**Theorem 3.6**.: _For any graph with maximum degree \(\Delta\),_
\[\chi^{o}>1\Rightarrow\chi^{*}=\Delta.\]
Proof.: If \(G\) is bipartite, then \(\chi^{*}=\Delta\) by König's line colouring Theorem [22]. Hence, we can now assume that \(G\) is not an extended star graph, \(\Delta\geq 3\) and, by Theorem 3.5(a), we can assume that \(\chi^{o}\geq 3\) is odd. In this case, as shown in the proof of Theorem 3.5(b), if \(v\) and \(w\) are two distinct vertices of degree \(\geq 3\), then their distance \(d\) must be a multiple of \(\chi^{o}\). Hence, in particular, \(d\geq 3\). This allows us to make a proper \(\Delta\)-colouring of the edges, as follows:
1. We can first colour all edges that are incident to vertices of degree \(\geq 3\) with colours \(\{1,2,3,\ldots,\Delta\}\), so that two such edges that share a common vertex have different colours.
2. The endpoints of the remaining edges all have degree \(1\) or \(2\). Clearly, we can colour such edges with colours \(\{1,2,3\}\) so that any two edges in \(G\) that meet at a common vertex have different colours.
Therefore, \(\chi^{*}\leq\Delta\), and by Vizing's Theorem, this implies that \(\chi^{*}=\Delta\).
## 4 Vertex \(t\)-periodic colouring
Recall that, in the Introduction, we gave the following definitions:
**Definition 4.1**.: A (not necessarily proper) colouring of the vertices of \(G\) is _\(t\)-periodic_ if any path of length \(t\) in \(G\) has endpoints of the same colour.
**Definition 4.2**.: The _vertex \(t\)-periodic colouring number_\(\chi_{t}=\chi_{t}(G)\) is the largest \(k\) such that \(G\) has a \(t\)-periodic colouring of the vertices with \(k\) colours.
We shall now investigate various properties of \(\chi_{t}\). We start with the following lemma.
**Lemma 4.3**.: _For any \(t\), \(1\leq\chi_{t}\leq t\)._
Proof.: The lower bound can be trivially seen. The upper bound follows from the fact that, if \(c:V\to\{1,\ldots,k\}\) is a \(t\)-periodic colouring and \(v_{1}\sim\cdots\sim v_{t}\) form a path, then for each \(w\in V\) there exist \(i\in\{1,\ldots,t\}\) and \(m\in\mathbb{N}\) such that \(w\) has distance \(m\cdot t\) from \(v_{i}\), implying that \(c(w)=c(v_{i})\). Hence, \(k\leq t\), that is, at most \(t\) colours can be used.
Now, we know that a graph \(G\) is bipartite if and only if \(\chi=2\). Moreover, in Theorem 3.5 we have shown that, if \(G\) is not an extended star graph, then this is true if and only if \(\chi^{o}\) is even. Similarly, we can prove the following
**Proposition 4.4**.: \(G\) _is bipartite if and only if \(\chi_{2}=2\)._
Proof.: If \(V=V_{1}\sqcup V_{2}\) is a partition of the vertex set that makes \(G\) bipartite, then we can clearly colour \(V_{1}\) with one colour and \(V_{2}\) with another colour to obtain a \(2\)-periodic colouring. Together with Lemma 4.3, this implies that \(\chi_{2}=2\).
Assume now that \(\chi_{2}=2\), and let \(V=V_{1}\sqcup V_{2}\) be the partition of the vertex set which is given by the colours of a \(2\)-periodic colouring. Assume also, by contradiction, that there exists an edge \(v\sim w\) within \(V_{1}\). Then, for each \(z\in V_{2}\), we cannot have \(v\sim z\), because otherwise \(w\) would have the same colour as \(z\), and similarly we cannot have \(w\sim z\). Hence, in particular, \(v\) and \(w\) can only be connected to vertices within \(V_{1}\). By induction, this implies that \(V_{1}\) and \(V_{2}\) are not joined by any edge, which is a contradiction since we are considering connected graphs.
We now consider some examples.
* Clearly, \(\chi_{1}=1\) for every graph.
* For the path on \(N\) nodes and any \(t\in\{1,\ldots,N\}\), it is easy to check that \(\chi_{t}=t\).
* For the complete graph on \(N\) nodes, \(\chi_{t}=1\) for all \(t\in\{1,\ldots,N-1\}\).
* In [6] and [11, Lemma 5.4.4] it is shown that, for any \(t\), if \(G\) has average degree \(2|E|/|V|\) which is \(\geq 4\), then any \(t\)-periodic colouring of \(G\) has at most \(2\) colours, implying that \(\chi_{t}\leq 2\) in this case.
**Theorem 4.5**.: _Let \(G=C_{N}\) be the cycle graph on \(N\) nodes, and let \(t\in\{1,\ldots,N\}\). Then,_
\[\chi_{t}=\gcd(t,N).\]
Proof.: Let \(g:=\gcd(t,N)\). The fact that \(\chi_{t}\geq g\) is clear: since \(g\) divides \(t\), colouring the vertices periodically along the cycle with \(g\) colours gives a \(t\)-periodic \(g\)-colouring. Hence, it is enough to show that \(\chi_{t}\leq g\). Let
\[v_{0}\sim v_{1}\sim\ldots\sim v_{N-1}\]
be the vertices of \(G\), and let \(c:V\to\{1,\ldots,\chi_{t}\}\) be a \(t\)-periodic colouring. Moreover, write \(t=ge\) and \(N=gf\), so that \(\gcd(e,f)=1\). If we see the indices of all the vertices as elements of the additive group \(\mathbb{Z}_{N}\) of integers modulo \(N\), then by definition of \(t\)-periodic colouring, we have that
\[c(v_{i})=c(v_{\alpha t+i}),\quad\forall\alpha\in\mathbb{Z}_{N}\text{ and } \forall i\in\{0,\ldots,N-1\}.\]
Therefore, if we show that for each \(i\in\{0,\ldots,N-1\}\) there are \(f\) distinct elements of the form \(\alpha t+i\) (for \(\alpha\in\mathbb{Z}_{N}\)) in \(\mathbb{Z}_{N}\), we can infer that there are at least \(f\) distinct vertices that have colour \(c(v_{i})\). In particular, this would imply that
\[\chi_{t}\leq\frac{N}{f}=g.\]
Without loss of generality, we can consider \(i=0\) and ask how many distinct elements of the form \(\alpha t\) (for \(\alpha\in\mathbb{Z}_{N}\)) are in \(\mathbb{Z}_{N}\). We have
\[\begin{split}\alpha t\equiv\alpha^{\prime}t\mod N& \iff N|(\alpha-\alpha^{\prime})t\\ &\iff gf|(\alpha-\alpha^{\prime})ge\\ &\iff f|(\alpha-\alpha^{\prime})e\\ &\iff f|(\alpha-\alpha^{\prime})\\ &\iff\alpha\equiv\alpha^{\prime}\mod f.\end{split}\]
Since \(f\) is the number of distinct elements \(\alpha\in\mathbb{Z}_{f}\), this proves the claim.
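For instance, for \(N=6\) and \(t=4\) one has \(g=2\), \(e=2\) and \(f=3\); the elements of the form \(\alpha t\) in \(\mathbb{Z}_{6}\) are \(\{0,4,2\}\), which are indeed \(f=3\) many, so every colour class contains at least \(3\) vertices and \(\chi_{4}(C_{6})\leq 2=\gcd(4,6)\).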
An immediate consequence is the following
**Corollary 4.6**.: _If \(G\) has a cycle of length \(\ell\), then for \(t\in\{1,\ldots,\ell\}\),_
\[\chi_{t}\leq\gcd(t,\ell).\]
Proof.: Clearly, if \(C_{\ell}\) is a subgraph of \(G\), then any \(t\)-periodic colouring of \(G\) is also a \(t\)-periodic colouring of \(C_{\ell}\). And by Theorem 4.5, any \(t\)-periodic colouring of \(C_{\ell}\) must be \(\leq\gcd(t,\ell)\).
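Assuming the hypothetical `chi_t` helper sketched after Definition 1.5 is in scope, Theorem 4.5 can also be checked numerically on small cycles:

```python
from math import gcd

# numerical check of Theorem 4.5 on the cycle graphs C_3, ..., C_8
for N in range(3, 9):
    C = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
    assert all(chi_t(C, t) == gcd(t, N) for t in range(1, N + 1))
```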
We now consider graphs with minimum degree \(\geq 2\) which are not cycle graphs.
**Theorem 4.7**.: _Let \(G\) be a graph with minimum degree \(\geq 2\) which is not a cycle graph. Fix \(t\geq 3\) which is not larger than the shortest length of a cycle, and let \(\tau:=\left\lfloor\frac{t}{2}\right\rfloor\). Then, any \(t\)-periodic colouring must be of the form_
\[c_{0},c_{1},\ldots,c_{\tau},\ldots,c_{1}\quad\text{(for $t$ even), or} \tag{1}\] \[c_{0},c_{1},\ldots,c_{\tau},c_{\tau},\ldots,c_{1}\quad\text{(for $t$ odd).} \tag{2}\]
_Hence, in particular,_
\[\chi_{t}\leq\tau+1,\]
_with equality if and only if the \(\tau+1\) colours in (1) or (2) can be chosen to be all distinct._
Proof.: By the assumptions, one can check that \(G\) contains a subgraph of the form \(C_{\ell}\sqcup P_{\tau}\) which is given by a cycle on \(\ell\geq t\) nodes attached to a path of length \(\tau\) at exactly one vertex \(v_{0}\). Let
\[v_{0}\sim v_{1}\sim\ldots\sim v_{\ell-1}\]
denote the vertices within the cycle \(C_{\ell}\), and consider the indices of these vertices as elements of \(\mathbb{Z}_{\ell}\). Let also
\[v_{0}\sim w_{1}\sim\ldots\sim w_{\tau}\]
be a path of length \(\tau\) such that the vertices \(w_{i}\)'s are not in \(C_{\ell}\).
For \(k\leq\tau\), any \(t\)-periodic colouring \(c\) must satisfy (Figure 5)
\[c(w_{k})=c(v_{t-k})=c(v_{-k})\]
and
\[c(w_{k})=c(v_{-t+k})=c(v_{k}).\]
The condition
\[c(w_{k})=c(v_{k})=c(v_{t-k})\quad\forall k\in\{1,\ldots,\tau\}\]
implies that
\[c(w_{1})=c(v_{1})=c(v_{t-1}),\] \[c(w_{2})=c(v_{2})=c(v_{t-2}),\] \[\ldots\] \[c(w_{\tau})=c(v_{\tau})=c(v_{t-\tau}).\]
Therefore, if \(t=2\tau\) is even, then the \(t\)-periodic colouring has to be of the form
\[c(v_{0}),c(w_{1}),\ldots,c(w_{\tau}),\ldots,c(w_{1}).\]
If \(t=2\tau+1\) is odd, then the \(t\)-periodic colouring has to be of the form
\[c(v_{0}),c(w_{1}),\ldots,c(w_{\tau}),c(w_{\tau}),\ldots,c(w_{1}).\]
This proves the claim.
## 5 Vertex \(t\)-periodic colouring and oriented edge periodic colouring
In this concluding section, we show that there is a deep relationship between the vertex \(t\)-periodic colouring and the oriented edge periodic colouring. We start with the following
**Proposition 5.1**.: _If \(G\) is the cycle graph on \(N\) nodes and \(t\in\{1,\ldots,N\}\), then_
\[\chi_{t}=t\iff G\text{ is circularly $t$-partite}.\]
Proof.: By Theorem 4.5, \(\chi_{t}=t\) if and only if \(\gcd(t,N)=t\), therefore if and only if \(t\) divides \(N\). By Theorem 2.3, this happens if and only if \(G\) is circularly \(t\)-partite.
**Theorem 5.2**.: _Let \(G\) be a graph with minimum degree \(\geq 2\) which is not a cycle graph. Fix \(t\geq 3\) which is not larger than the shortest length of a cycle, and let \(\tau:=\left\lfloor\frac{t}{2}\right\rfloor\). Then,_
\[\chi_{t}=\tau+1\iff G\text{ is circularly $t$-partite}.\]
Proof.: If \(t\) is even, then by Theorem 2.3, \(G\) is circularly \(t\)-partite if and only if
* The length of each cycle is a multiple of \(t\), and
* The length of any path connecting two distinct vertices of degree \(\geq 3\) is a multiple of \(\tau\).
As a consequence one can check that, in this case, the \(t\)-periodic colouring in (1) can be made with \(\tau+1\) distinct colours, if each vertex of degree \(\geq 3\) has either colour \(c_{0}\) or colour \(c_{\tau}\).
Similarly, if \(t\) is odd, then again by Theorem 2.3, \(G\) is circularly \(t\)-partite if and only if
* The length of each cycle is a multiple of \(t\), and
* The length of any path connecting two distinct vertices of degree \(\geq 3\) is a multiple of \(t\).
As a consequence one can check that, in this case, the \(t\)-periodic colouring in (2) can be made with \(\tau+1\) distinct colours, if all vertices of degree \(\geq 3\) have colour \(c_{0}\).
Therefore, we have shown one direction. Assume now that \(\chi_{t}=\tau+1\), and let \(c:V\to\{1,\ldots,\tau+1\}\) be a \(t\)-periodic colouring. Then, by Theorem 4.7, \(c\) must be of the form
\[c_{0},c_{1},\ldots,c_{\tau},\ldots,c_{1}\quad\text{(for $t$ even), or}\] \[c_{0},c_{1},\ldots,c_{\tau},c_{\tau},\ldots,c_{1}\quad\text{(for $t$ odd),}\]
where \(c_{0},\ldots,c_{\tau}\) are \(\tau+1\) distinct colours. Hence, clearly, the length of each cycle must be a multiple of \(t\). This shows the first condition for circularly \(t\)-partite graphs.
Now, if \(v_{0}\) is a vertex of degree \(\geq 3\), then we can write
\[v_{0} \sim v_{1}\sim\ldots\sim v_{t-1},\] \[v_{0} \sim v_{-1}\sim\ldots\sim v_{-t+1},\] \[v_{0} \sim w_{1},\]
where:
* The vertices \(v_{1}\), \(v_{-1}\) and \(w_{1}\) are all distinct;
* The vertices \(v_{i}\), for \(i\in\{0,1,\ldots,t-1,-1,\ldots,-t+1\}\), might possibly have indices in \(\mathbb{Z}_{\ell}\), for some \(\ell\geq t\).
By definition of \(t\)-periodic colouring, \[c(w_{1})=c(v_{t-1})=c(v_{1})\] and \[c(w_{1})=c(v_{-t+1})=c(v_{-1}),\] implying that all neighbours of \(v_{0}\) must have the same colour. But since the colours \(c_{0},\ldots,c_{\tau}\) are all distinct, this is only possible if either \(t\) is even and \(v_{0}\) has colour \(c_{0}\) or \(c_{\tau}\), or \(t\) is odd and \(v_{0}\) has colour \(c_{0}\). Hence, this implies that the length of any path connecting two distinct vertices of degree \(\geq 3\) must be a multiple of \(\tau\) (for \(t\) even), or a multiple of \(t\) (for \(t\) odd). That is, \(G\) must be circularly \(t\)-partite.
Hence, summarising the relationship between the \(t\)-periodic colouring for vertices and the periodic colouring for oriented edges, we have that:
1. Every graph admits a \(1\)-periodic \(1\)-colouring of the vertices, as well as a periodic \(1\)-colouring of the oriented edges.
2. A graph is bipartite if and only if it admits a \(2\)-periodic \(2\)-colouring of the vertices (by Proposition 4.4), and if and only if it admits a periodic \(2\)-colouring of the oriented edges (by Theorem 3.5).
3. For \(t\leq N\), the cycle graph on \(N\) nodes admits a \(t\)-periodic \(t\)-colouring of the vertices if and only if it admits a periodic \(t\)-colouring of the oriented edges (by Proposition 5.1), and this happens if and only if \(t\) divides \(N\) (by Theorem 3.1).
4. Let \(G\) be a graph with minimum degree \(\geq 2\) which is not a cycle graph. Let \(\ell\) be the shortest length of a cycle, let \(t\in\{3,\ldots,\ell\}\) and let \(\tau:=\left\lfloor\frac{t}{2}\right\rfloor\). Then, by Theorem 5.2, \(G\) admits a \(t\)-periodic \((\tau+1)\)-colouring of the vertices if and only if it admits a periodic \(t\)-colouring of the oriented edges.
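These equivalences can be probed numerically with the hypothetical `chi_t` and `chi_o` helpers sketched in the earlier sections, for instance on a petal graph with three petals of length \(6\) (minimum degree \(\geq 2\), not a cycle, shortest cycle length \(6\)):

```python
petal = {}
for p in range(3):                        # three petals of length 6
    cyc = [0] + [5 * p + j for j in range(1, 6)] + [0]
    for a, b in zip(cyc, cyc[1:]):
        petal.setdefault(a, set()).add(b)
        petal.setdefault(b, set()).add(a)

# chi_o(petal) = 6, and for 3 <= t <= 6 Theorem 5.2 predicts:
for t in (3, 4, 6):
    tau = t // 2
    assert (chi_t(petal, t) == tau + 1) == (chi_o(petal) % t == 0)
```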
### Acknowledgments
Most of these ideas have been developed during one of the author's visits to the Alfréd Rényi Institute of Mathematics in Budapest. She is grateful to Ágnes Backhausz, Péter Csikvári, Péter Frenkel, László Lovász and Giulio Zucal for the inspiring discussions that took place during her visit. She is also grateful to Conor Finn, Joseph Lizier and (again) Giulio Zucal for the discussions that took place at the Max Planck Institute for Mathematics in the Sciences.
|
2308.11253 | Finite total curvature and soap bubbles with almost constant
higher-order mean curvature | Given $ n \geq 2 $ and $ k \in \{2, \ldots , n\} $, we study the asymptotic
behaviour of sequences of bounded $C^2$-domains of finite total curvature in $
\mathbb{R}^{n+1} $ converging in volume and perimeter, and with the $ k $-th
mean curvature functions converging in $ L^1 $ to a constant. Under natural
mean convexity hypothesis, and assuming an $ L^\infty $-control on the mean
curvature outside a set of vanishing area, we prove that finite unions of
mutually tangent balls are the only possible limits. This is the first result
where such a uniqueness is proved without assuming uniform bounds on the
exterior or interior touching balls. | Mario Santilli | 2023-08-22T07:52:09Z | http://arxiv.org/abs/2308.11253v1 | # Finite total curvature and soap bubbles with almost constant higher-order mean curvature
###### Abstract.
Given \(n\geq 2\) and \(k\in\{2,\ldots,n\}\), we study the asymptotic behaviour of sequences of bounded \(C^{2}\)-domains of finite total curvature in \(\mathbb{R}^{n+1}\) converging in volume and perimeter, and with the \(k\)-th mean curvature functions converging in \(L^{1}\) to a constant. Under natural mean convexity hypothesis, and assuming an \(L^{\infty}\)-control on the mean curvature outside a set of vanishing area, we prove that finite unions of mutually tangent balls are the only possible limits. This is the first result where such a uniqueness is proved without assuming uniform bounds on the exterior or interior touching balls.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Legendrian cycles and Curvature measures
* 4 Heintze-Karcher inequality for varifolds of bounded mean curvature
* 5 Proof of Theorem 1.1
* A Soap bubbles with positive reach
## 1. Introduction
### Overview
If \(\Omega\subseteq\mathbb{R}^{n+1}\) is an open set whose boundary \(\partial\Omega\) is a closed embedded \(C^{2}\)-hypersurface and if \(\kappa_{\Omega,1}\leq\ldots\leq\kappa_{\Omega,n}\) are the principal curvatures of \(\partial\Omega\) with respect to the exterior normal of \(\Omega\), then _the \(k\)-th mean curvature function of \(\Omega\)_ is given by
\[H_{\Omega,k}=\sum_{\lambda\in\Lambda(n,k)}\kappa_{\Omega,\lambda(1)}\cdots \kappa_{\Omega,\lambda(k)},\]
where \(\Lambda(n,k)\) is the set of all increasing maps from \(\{1,\ldots,k\}\) to \(\{1,\ldots,n\}\). The function \(H_{\Omega,1}\) is also called mean curvature of \(\Omega\).
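In concrete terms, \(H_{\Omega,k}\) is the \(k\)-th elementary symmetric polynomial of the principal curvatures, since the increasing maps in \(\Lambda(n,k)\) correspond exactly to the \(k\)-element subsets of \(\{1,\ldots,n\}\). A minimal computational sketch (the helper name `H_k` is our own convention):

```python
from itertools import combinations
from math import comb, prod

def H_k(kappa, k):
    """k-th mean curvature from the principal curvatures kappa_1..kappa_n:
    the k-th elementary symmetric polynomial, summing over Lambda(n, k)."""
    return sum(prod(kappa[i] for i in idx)
               for idx in combinations(range(len(kappa)), k))

# On a round sphere of radius r in R^{n+1} every kappa_i equals 1/r,
# so H_k = binom(n, k) / r^k; e.g. n = 3, r = 2, k = 2:
assert H_k([0.5] * 3, 2) == comb(3, 2) * 0.5**2
```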
The following theorem is a classical and well known result.
**Soap bubble theorem.**_If \(k\in\{1,\ldots,n\}\) and \(\Omega\subseteq\mathbb{R}^{n+1}\) is a bounded and connected open set with \(C^{2}\)-boundary such that \(H_{\Omega,k}\) is constant, then \(\Omega\) is a round ball._
For \(k=1\) this result is due to Alexandrov and was proved using his celebrated moving plane method; see [1]. For arbitrary \(k\) the result was proved in [11] and [12], using an approach based on an optimal geometric inequality inspired by the work of Heintze and Karcher in [13], and on the Minkowski identities
[Hsi54]. A proof based on the moving plane method for arbitrary \(k\) is due to Korevaar, see [Ros88, Appendix].
Motivated by the soap bubble theorem, we are interested in the following uniqueness problem
\[(UP_{k})\qquad\begin{array}{l}\text{\it Does every sequence }\Omega_{\ell}\subseteq\mathbb{R}^{n+1}\text{ \it of bounded, connected, open}\\ \text{\it smooth sets with bounded perimeters, whose }k\text{\it-th mean curvature}\\ \text{\it functions converge to a constant, have finite unions of}\\ \text{\it mutually tangent balls as their only possible limits?}\end{array}\]
In general it is not possible to deduce convergence to a single ball from the sole hypothesis of small oscillation of the mean curvature functions. In fact, by truncating and smoothly completing unduloids with thin necks one can construct a sequence of bounded and connected smooth boundaries converging to an array of mutually tangent balls while the mean curvatures converge to a constant.
The problem (\(UP_{1}\)) was thoroughly investigated, even in quantitative ways. In [DM19], Delgadino and Maggi extended the Alexandrov theorem, proving that if a set of finite perimeter has constant distributional mean curvature, then it is a finite union of closed balls with disjoint interiors. This result is proved using a measure-theoretic generalization of the Montiel-Ros argument, and implies the following uniqueness result (see [DM19, Corollary 2]): _finite unions of mutually tangent balls are the only possible limits of sequences of sets of finite perimeter converging in volume and in perimeter, and whose distributional mean curvatures converge to a constant._ Quantitative rates of convergence towards finite unions of balls are obtained in [CM17], [DMMN18] and in [JN23], employing integral-geometric methods inspired by the Montiel-Ros argument. See also [DRKS20] and [MS23] for extensions of [DM19] to the anisotropic and Riemannian setting.
The Heintze-Karcher inequality was generalized in the setting of arbitrary closed sets in [HS22, Theorem 3.20]. This result opens the way to obtain a measure-theoretic version of the soap bubble theorem for sets of positive reach in terms of their curvature measures; see [HS22, Theorem A, Theorem 6.15] (see also Theorem A.1 for an extension). As a corollary (see Theorem A.2) we obtain the following answer to (\(UP_{k}\)) for arbitrary \(k\): _one single ball is the only possible limit in the sense of Hausdorff convergence, if one assumes that the sets \(\Omega_{\ell}\) in (\(UP_{k}\)) satisfy a uniform bound on the outer touching balls (i.e. lower uniform bound on the reach) and their \((k-1)\)-th mean curvature functions become asymptotically non-negative._ Optimal quantitative rates of convergence in Hausdorff distance towards one single ball can be deduced from the results in [CV18] (for \(k=1\)) and [CRV21] (for \(1\leq k\leq n\)), assuming a uniform bound both on the interior and exterior touching balls. The results in [CV18] and [CRV21] are based on a quantitative version of the Alexandrov moving plane method. See also [MP20] for other related quantitative results.
### The main theorem
The uniqueness problem (\(UP_{k}\)), for \(k\geq 2\) and without assuming uniform bounds on the interior or exterior touching balls, is a natural and interesting problem, which is, to the author's knowledge, completely open, even in 3-dimensional Euclidean space for sequences with vanishing oscillation of the Gaussian curvature (in some \(L^{p}\)-norm). In this paper we study this problem for sequences of _finite total curvature_, see Definition 2. As expected, studying
this problem under this new hypothesis requires the introduction of a substantially new method of proof compared to the approaches used in Theorem A.2 and in [10], where the problem is treated under uniform bounds on the touching balls. Before stating the main result of the paper, we first introduce some definitions and notation. For a function \(f\) we write
\[f^{+}=\sup\{f,0\}\quad\text{and}\quad f^{-}=-\inf\{f,0\}.\]
We denote with \(A_{\Omega}\) the norm of the second fundamental form of \(\partial\Omega\):
\[A_{\Omega}(x)=\bigg{(}\sum_{i=1}^{n}\kappa_{\Omega,i}(x)^{2}\bigg{)}^{\frac{1 }{2}}\quad\text{for }x\in\partial\Omega.\]
**Definition 1** (Compactly supported sequences).: We say that a sequence \(\Omega_{\ell}\) of subsets of \(\mathbb{R}^{n+1}\) is _compactly supported_ if there exists a ball \(B_{R}\) such that \(\Omega_{\ell}\subseteq B_{R}\) for all \(\ell\geq 1\).
**Definition 2** (Sequences of finite total curvature).: We say that a sequence \(\Omega_{\ell}\subseteq\mathbb{R}^{n+1}\) of open sets with \(C^{2}\)-boundary has _finite total curvature_ if
\[\sup_{\ell\geq 1}\int_{\partial\Omega_{\ell}}A_{\Omega_{\ell}}^{n}\,d \mathcal{H}^{n}<\infty.\]
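The quantity in Definition 2 is scale invariant. As a simple illustration (a routine computation, recorded here for orientation): on a round sphere every principal curvature equals \(1/\rho\), so that

\[A_{B_{\rho}}\equiv\frac{\sqrt{n}}{\rho}\qquad\text{and}\qquad\int_{\partial B_{\rho}}A_{B_{\rho}}^{n}\,d\mathcal{H}^{n}=n^{\frac{n}{2}}\rho^{-n}\,\mathcal{H}^{n}(\partial B_{\rho})=n^{\frac{n}{2}}(n+1)\,\omega_{n+1},\]

where \(\omega_{n+1}=\mathcal{L}^{n+1}(B_{1})\); in particular the value does not depend on \(\rho\).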
**Definition 3** (Asymptotically \(k\)-mean convex sequences).: Let \(k\in\{1,\dots,n\}\). We say that a sequence \(\Omega_{\ell}\subseteq\mathbb{R}^{n+1}\), \(\ell\geq 1\), of open sets with \(C^{2}\)-boundary is _asymptotically \(k\)-mean convex_ if and only if
\[\lim_{\ell\to\infty}\int_{\partial\Omega_{\ell}}(H_{\Omega_{\ell},i})^{-}\,d \mathcal{H}^{n}=0\quad\text{for }i=1,\dots,k.\]
If \(k=1\) we simply say that \(\Omega_{\ell}\) is asymptotically mean convex.
**Remark 1.1**.: In relation to Definition 3, we recall a well-known fact. If \(k\in\{1,\dots,n\}\) and \(\Omega\subseteq\mathbb{R}^{n+1}\) is a bounded and connected \(C^{2}\)-domain with constant \(k\)-th mean curvature function, then
\[H_{\Omega,i}(x)>0\quad\text{for every }x\in\partial\Omega\text{ and for }1\leq i\leq k. \tag{1.1}\]
See [14, page 450] for details. The condition (1.1) is usually called \(k\)-convexity or \(k\)-mean convexity. It naturally appears in many problems involving higher-order mean curvature functions, as it guarantees the ellipticity of the associated fully non-linear PDEs; see the pioneering work [12].
This is the main result of the paper.
**Theorem 1.1**.: _Let \(n\geq 2\), \(k\in\{2,\dots,n\}\) and let \(\Omega_{\ell}\subseteq\mathbb{R}^{n+1}\) be a compactly supported and asymptotically \((k-1)\)-mean convex sequence with finite total curvature. Suppose that the sets \(\Omega_{\ell}\) converge in volume and perimeter to a set \(\Omega\subseteq\mathbb{R}^{n+1}\), and there exist \(\lambda\in\mathbb{R}\) and \(M>0\) such that_
\[\lim_{\ell\to\infty}\int_{\partial\Omega_{\ell}}|H_{\Omega_{\ell},k}-\lambda| \,d\mathcal{H}^{n}=0 \tag{1.2}\]
\[\lim_{\ell\to\infty}\mathcal{H}^{n}(\{x\in\partial\Omega_{\ell}:H_{\Omega_{ \ell},1}(x)\geq M\})=0. \tag{1.3}\]
_Then \(\Omega\) is \(\mathcal{L}^{n+1}\) almost equal to a finite union of closed balls of the same radius with disjoint interiors. The radius \(\rho\) of the balls satisfies the relations_
\[\rho=\frac{(n+1)\mathcal{L}^{n+1}(\Omega)}{\mathcal{H}^{n}(\partial^{*}\Omega )}\quad\text{and}\quad\lambda=\binom{n}{k}\rho^{-k}. \tag{1.4}\]
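Both relations in (1.4) are consistent with the model case of a single ball \(B_{\rho}\) (a routine check, not needed in the sequel): since every principal curvature of \(\partial B_{\rho}\) equals \(1/\rho\),

\[\frac{(n+1)\mathcal{L}^{n+1}(B_{\rho})}{\mathcal{H}^{n}(\partial B_{\rho})}=\frac{(n+1)\,\omega_{n+1}\rho^{n+1}}{(n+1)\,\omega_{n+1}\rho^{n}}=\rho\qquad\text{and}\qquad H_{B_{\rho},k}=\binom{n}{k}\rho^{-k}.\]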
In the classical case of 2-dimensional surfaces in \(\mathbb{R}^{3}\), i.e. \(n=2\) and \(k=2\) in Theorem 1.1, one can equivalently replace the finite total curvature condition with a uniform bound on the \(L^{2}\)-norm of the mean curvature functions, since \(A^{2}_{\Omega_{\ell}}=H^{2}_{\Omega_{\ell},1}-2H_{\Omega_{\ell},2}\).
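Indeed, in terms of the principal curvatures \(\kappa_{1},\kappa_{2}\) of \(\partial\Omega_{\ell}\) one computes

\[H_{\Omega_{\ell},1}^{2}-2H_{\Omega_{\ell},2}=(\kappa_{1}+\kappa_{2})^{2}-2\kappa_{1}\kappa_{2}=\kappa_{1}^{2}+\kappa_{2}^{2}=A_{\Omega_{\ell}}^{2}.\]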
**Corollary 1.2**.: _Let \(\Omega_{\ell}\subseteq\mathbb{R}^{3}\) be a compactly supported and asymptotically mean convex sequence with_
\[\sup_{\ell\geq 1}\int_{\partial\Omega_{\ell}}H^{2}_{\Omega_{\ell},1}\,d \mathcal{H}^{2}<\infty.\]
_Suppose that the sets \(\Omega_{\ell}\) converge in volume and perimeter to a set \(\Omega\subseteq\mathbb{R}^{3}\), and there exist \(\lambda\in\mathbb{R}\) and \(M>0\) such that_
\[\lim_{\ell\to\infty}\int_{\partial\Omega_{\ell}}|H_{\Omega_{\ell},2}-\lambda| \,d\mathcal{H}^{2}=0\]
\[\lim_{\ell\to\infty}\mathcal{H}^{2}(\{x\in\partial\Omega_{\ell}:H_{\Omega_{ \ell},1}(x)\geq M\})=0. \tag{1.5}\]
_Then the conclusion of Theorem 1.1 holds with \(n=k=2\)._
**Remark 1.2**.: The hypothesis (1.3) in Theorem 1.1 (or (1.5) in Corollary 1.2) amounts to requiring that the mean curvatures are uniformly bounded from above outside a set of vanishing area. This is a technical condition, which is used in section 4 to derive some key fine properties of the varifold associated with the reduced boundary of the limiting set \(\Omega\). The condition is clearly satisfied whenever each set \(\Omega_{\ell}\) consists of several domains of uniformly bounded mean curvature connected by small necks of vanishing area (in which case the behaviour of the mean curvatures on the vanishing necks is irrelevant).
We conjecture that the hypothesis (1.3) can be completely removed from Theorem 1.1.
**Remark 1.3**.: Besides its intrinsic interest, the uniqueness problem (\(UP_{k}\)) for \(k\geq 2\) naturally arises in the study of higher-order isoperimetric type inequalities (see [1] and [13]) and in the study of the geometric properties of hypersurfaces with prescribed higher-order mean curvatures (see [1] and references therein).
### Method of proof and organization of the paper
The proof of Theorem 1.1 is based on a novel geometric-measure-theoretic extension of the method pioneered by Montiel and Ros. In [14] and [15] the proof rests on two fundamental steps: (1) proving a sharp Heintze-Karcher inequality, with equality achieved only by balls; (2) checking that a domain with constant \(k\)-th mean curvature realizes the equality case. For \(k\geq 2\) this last step relies crucially on the Minkowski identities in [10], while for \(k=1\) it is an immediate consequence of the divergence theorem.
The reduced boundary of the limiting set \(\Omega\) in Theorem 1.1 is a \(n\)-dimensional varifold \(V_{\Omega}\) of bounded mean curvature in \(\mathbb{R}^{n+1}\). In section 4, combining results in varifolds theory ([13] and [12]) with fine properties of the curvature for arbitrary closed sets ([12]), we can quickly deduce from the general Heintze-Karcher inequality in [10, Theorem 3.20] a sharp geometric inequality for sets of finite perimeter and bounded distributional mean curvature; see Theorem 4.2. This completes the first step of the proof. In order to complete the proof, we have to check that the limiting set \(\Omega\) satisfies the equality case in Theorem 4.2,
and this is the key new difficulty with respect to the other aforementioned results based on the Montiel-Ros method. In fact, one needs a Minkowski-type formula for the limiting set in order to use the geometric information given by the vanishing oscillation hypothesis (1.2) within the geometric inequality of Theorem 4.2. However, Minkowski formulae for singular geometric sets are known only in very special cases, namely sets of positive reach and subanalytic sets, and their proof is a quite subtle issue based on the existence of a normal cycle; see [20, section 3]. In particular, no Minkowski formulae are known in the varifold setting. To deal with this point in Theorem 1.1, we consider the sequence of normal cycles \(N_{\Omega_{\ell}}\) associated with the exterior unit-normal bundles of the sets \(\Omega_{\ell}\). These are \(n\)-dimensional integral currents in the product space \(\mathbb{R}^{n+1}\times\mathbb{S}^{n}\), which are cycles (i.e. \(\partial N_{\Omega_{\ell}}=0\) in the sense of currents) and satisfy a Legendrian-type property, see Remark 3.4. It follows from the finite total curvature assumption that the masses of the integral currents \(N_{\Omega_{\ell}}\) are uniformly bounded; hence we can apply the Federer-Fleming compactness theorem to find, up to subsequences, that the currents \(N_{\Omega_{\ell}}\) converge weakly to a Legendrian cycle \(T\); see section 3. Using a representation formula for the curvature measures associated with \(T\) (see Lemma 3.2), we pass to the limit in the Minkowski formulae for \(\Omega_{\ell}\) to find a Minkowski-type formula for the Legendrian cycle \(T\); see Lemma 3.4. At this point it is still not clear how to use the Minkowski formulae for \(T\) within the Heintze-Karcher inequality for \(V_{\Omega}\), since \(V_{\Omega}\) and \(T\) arise from two completely different limit procedures: \(T\) is the limit in the sense of currents of the normal cycles of \(\Omega_{\ell}\), while \(V_{\Omega}\) is the limit in the sense of varifolds of the boundaries of \(\Omega_{\ell}\). Reconciling the two limits is a quite subtle point and occupies most of the proof of Theorem 1.1 in section 5.
Finally, we mention that in the appendix we first generalize the soap bubble theorem [10, Theorem A] (allowing an arbitrary constant \(\lambda\in\mathbb{R}\)), and then show how this theorem allows one to characterize the limits of sequences of \(C^{2}\)-domains with almost constant \(k\)-th mean curvature function and with a uniform bound on the exterior touching balls.
**Acknowledgements:** The author wishes to thank Francesco Maggi for useful comments on a preliminary version of this work. The author is partially supported by INdAM-GNSAGA.
## 2. Preliminaries
Let \(\pi_{0}:\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}\) and \(\pi_{1}:\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}\) be defined as
\[\pi_{0}(x,u)=x,\qquad\pi_{1}(x,u)=u.\]
If \(S\subseteq\mathbb{R}^{n+1}\) we define \(\operatorname{dist}(x,S)=\inf\{|x-a|:a\in S\}\) for \(x\in\mathbb{R}^{n+1}\). If \(Q\subseteq\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\) and \(S\subseteq\mathbb{R}^{n+1}\) we define
\[Q\,\llcorner\,S=\pi_{0}^{-1}(S)\cap Q.\]
We say that a subset \(S\subseteq\mathbb{R}^{m}\) is _countably \(\mathcal{H}^{k}\)-rectifiable_ if there exists a countable family \(F\) of \(k\)-dimensional embedded \(C^{1}\)-submanifolds of \(\mathbb{R}^{m}\) such that \(\mathcal{H}^{k}(S\setminus\bigcup F)=0\). We denote with \(\operatorname{Tan}^{k}(\mathcal{H}^{k}\,\llcorner\,S,x)\) the \(\mathcal{H}^{k}\)_-approximate tangent cone_ of \(S\) at \(x\); see [1, 3.2.16]. If \(S\) is countably \(\mathcal{H}^{k}\)-rectifiable with \(\mathcal{H}^{k}(S)<\infty\) and \(f\) is a Lipschitz function on \(S\), then \(\operatorname{Tan}^{k}(\mathcal{H}^{k}\,\llcorner\,S,x)\) is a \(k\)-dimensional plane at \(\mathcal{H}^{k}\) a.e. \(x\in S\) and we denote with \(J_{k}^{S}f\) the \(k\)-dimensional approximate tangential jacobian function of \(f\); see [1, 3.2.19, 3.2.20].
### Sets of finite perimeter
We refer to [1, Chapter 3] or [10] for details. We recall that \(\Omega\subseteq\mathbb{R}^{n+1}\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\) if its characteristic function \(\mathbf{1}_{\Omega}\) is a function of bounded variation in \(\mathbb{R}^{n+1}\). The reduced boundary \(\partial^{*}\Omega\) of \(\Omega\) is the set of points \(x\in\mathbb{R}^{n+1}\) such that the following limit
\[\lim_{r\to 0}\frac{\mathrm{D}\mathbf{1}_{\Omega}(B_{r}(x))}{|\mathrm{D} \mathbf{1}_{\Omega}(B_{r}(x))|}\]
exists and belongs to \(\mathbb{S}^{n}\), in which case we denote it by \(\nu_{\Omega}(x)\) (here \(\mathrm{D}\mathbf{1}_{\Omega}\) is the distributional gradient of \(\mathbf{1}_{\Omega}\)). The reduced boundary is countably \(\mathcal{H}^{n}\)-rectifiable and \(\mathcal{H}^{n}(\partial^{*}\Omega)\) equals the total variation of \(\mathrm{D}\mathbf{1}_{\Omega}\); this number is the perimeter of \(\Omega\). The map \(\nu_{\Omega}:\partial^{*}\Omega\to\mathbb{S}^{n}\) is the measure-theoretic exterior unit-normal of \(\Omega\) and we define
\[\overline{\nu}_{\Omega}:\partial^{*}\Omega\to\mathbb{R}^{n+1}\times\mathbb{S} ^{n},\qquad\overline{\nu}_{\Omega}(x)=(x,\nu_{\Omega}(x)).\]
Notice that if \(\Omega\) is a \(C^{1}\)-domain then \(\partial^{*}\Omega=\partial\Omega\) and \(\nu_{\Omega}\) is the classical exterior unit-normal vector field of \(\Omega\).
### Currents
We refer to [13, Chapter 6] for details. The space of compactly supported \(k\)-forms on an open subset \(U\) of \(\mathbb{R}^{p}\) is denoted by the usual \(\mathcal{D}^{k}(U)\) and the space of \(k\)-currents on \(U\) by \(\mathcal{D}_{k}(U)\). We denote with \(\boldsymbol{M}_{W}(T)\) the mass of \(T\) over an open subset \(W\) of \(U\). A sequence \(T_{\ell}\in\mathcal{D}_{k}(U)\)_weakly converges_ to \(T\in\mathcal{D}_{k}(U)\) if and only if
\[T_{\ell}(\phi)\to T(\phi)\qquad\text{for all $\phi\in\mathcal{D}^{k}(U)$}.\]
We say that a \(k\)-current \(T\in\mathcal{D}_{k}(U)\) is an _integer multiplicity rectifiable \(k\)-current of \(U\)_ provided
\[T(\phi)=\int_{M}\langle\eta(x),\phi(x)\rangle\theta(x)\,d\mathcal{H}^{k}(x) \qquad\text{for all $\phi\in\mathcal{D}^{k}(U)$},\]
where \(M\) is a countably \(\mathcal{H}^{k}\)-rectifiable subset of \(U\), \(\theta:M\to\mathbb{Z}_{+}\) is an \(\mathcal{H}^{k}\)-measurable function such that \(\int_{K\cap M}\theta\,d\mathcal{H}^{k}<\infty\) for every compact subset \(K\) of \(U\), and \(\eta(x)=\tau_{1}(x)\wedge\ldots\wedge\tau_{k}(x)\) for \(\mathcal{H}^{k}\) a.e. \(x\in M\), where \(\tau_{1}(x),\ldots,\tau_{k}(x)\) form an orthonormal basis of \(\operatorname{Tan}^{k}(\mathcal{H}^{k}\,\llcorner\,M,x)\) for \(\mathcal{H}^{k}\) a.e. \(x\in M\). The set \(M\) is called the _carrier_ of \(T\) and it is \(\mathcal{H}^{k}\) almost uniquely determined by \(T\). Hence, for each integer multiplicity rectifiable \(k\)-current \(T\) we introduce the symbol \(W_{T}\) for the carrier of \(T\). One can easily check that if \(W\subseteq U\) is an open set then
\[\boldsymbol{M}_{W}(T)=\int_{W\cap M}\theta\,d\mathcal{H}^{k}.\]
### Varifolds
We refer to [1] for details. Let \(U\subseteq\mathbb{R}^{n+1}\) be an open set. The space of \(k\)-dimensional varifolds on \(U\) is denoted with \(\boldsymbol{V}_{k}(U)\) and the space of \(k\)-dimensional integral varifolds on \(U\) with the usual \(\boldsymbol{IV}_{k}(U)\). Associated with \(V\in\boldsymbol{V}_{k}(U)\) we consider the weight measure \(\|V\|\), which is the Radon measure on \(U\) defined by \(\|V\|(S)=V(S\times\boldsymbol{G}(n+1,k))\) for every \(S\subseteq U\), and we denote the _first variation of \(V\)_ by \(\delta V\); see [1, section 4] for details. A varifold \(V\in\boldsymbol{V}_{k}(U)\) is a varifold of _bounded mean curvature_ if there exists a function \(h\in L^{\infty}(\|V\|,\mathbb{R}^{n+1})\) so that
\[\delta V(X)=\int h(x)\bullet X(x)\,d\|V\|(x),\quad\text{for all $X\in\mathcal{C} _{c}^{\infty}(U,\mathbb{R}^{n+1})$}.\]
The function \(h\) is uniquely determined by \(V\) and we write \(h=\boldsymbol{h}(V,\cdot)\).
In this paper we consider integral varifolds \(V\in\boldsymbol{IV}_{n}(\mathbb{R}^{n+1})\) associated with the reduced boundary of a set of finite perimeter; namely, if \(E\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\) of positive volume, we define the \(n\)-dimensional varifold \(V_{E}\in\boldsymbol{IV}_{n}(\mathbb{R}^{n+1})\) as the unique Radon measure \(V_{E}\) on \(\mathbb{R}^{n+1}\times\mathbf{G}(n+1,n)\) such that
\[\int\phi(x,S)\,dV_{E}(x,S)=\int_{\partial^{*}E}\phi(x,\mbox{Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,\partial^{*}E,x))\,d\mathcal{H}^{n}(x)\]
for every \(\phi\in C_{c}(\mathbb{R}^{n+1}\times\mathbf{G}(n+1,n))\). Notice that \(\|V_{E}\|=\mathcal{H}^{n}\,\llcorner\,\partial^{*}E\) and
\[\delta V_{E}(X)=\int_{\partial^{*}E}\mbox{div}_{\mathbb{R}^{n+1}}(X)\,d \mathcal{H}^{n}-\int_{\partial^{*}E}\mbox{D}X(x)(\nu_{E}(x))\bullet\nu_{E}(x) \,d\mathcal{H}^{n}(x) \tag{2.1}\]
for all \(X\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n+1},\mathbb{R}^{n+1})\).
### Normal bundle of closed sets
Suppose \(C\subseteq\mathbb{R}^{n+1}\) is a closed set. We define
\[\mbox{nor}(C)=\{(x,u)\in C\times\mathbb{S}^{n}:\mbox{dist}(x+su,C)=s\mbox{ for some }s>0\},\]
and we recall that \(\mbox{nor}(C)\) is always countably \(\mathcal{H}^{n}\)-rectifiable; see [10]1. On the other hand, we remark that there are closed sets \(C\) for which \(\mbox{nor}(C)\) does not have locally finite \(\mathcal{H}^{n}\)-measure; indeed this can happen even when \(C=\mbox{spt}\|V\|\) and \(V\in\boldsymbol{IV}_{2}(\mathbb{R}^{3})\) is a varifold of bounded mean curvature.
Footnote 1: The unit normal bundle of a closed set \(C\) in [10] is denoted with \(N(C)\).
It is proved in [10] that for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\mbox{nor}(C)\) there exist a linear subspace \(T_{C}(x,u)\) of \(\mathbb{R}^{n+1}\) and a symmetric bilinear form \(Q_{C}(x,u):T_{C}(x,u)\times T_{C}(x,u)\to\mathbb{R}\), whose eigenvalues can be used to provide an explicit representation of the approximate tangent space of \(\mbox{nor}(C)\) at \(\mathcal{H}^{n}\) almost all points. To this end, for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\mbox{nor}(C)\), we define
\[-\infty<\kappa_{C,1}(x,u)\leq\ldots\leq\kappa_{C,n}(x,u)\leq\infty\]
in the following way: \(\kappa_{C,1}(x,u),\ldots,\kappa_{C,m}(x,u)\) are the eigenvalues of \(Q_{C}(x,u)\), where \(m=\dim T_{C}(x,u)\), and \(\kappa_{C,i}(x,u)=+\infty\) for \(i=m+1,\ldots,n\). Given this definition we recall the following lemma.
**Lemma 2.1** (cf. [10, Lemma 4.11]).: _Let \(Q\subseteq\mbox{nor}(C)\) be an \(\mathcal{H}^{n}\)-measurable set with finite \(\mathcal{H}^{n}\) measure. For \(\mathcal{H}^{n}\) a.e. \((x,u)\in Q\) there exist \(\tau_{1}(x,u),\ldots,\tau_{n}(x,u)\in\mathbb{R}^{n+1}\) such that \(\{\tau_{1}(x,u),\ldots,\tau_{n}(x,u),u\}\) is an orthonormal basis of \(\mathbb{R}^{n+1}\) and \(\zeta_{1}(x,u),\ldots,\zeta_{n}(x,u)\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\), defined as_
\[\zeta_{i}(x,u)=(\tau_{i}(x,u),\kappa_{C,i}(x,u)\tau_{i}(x,u))\quad\mbox{if }\kappa_{C,i}(x,u)<\infty\]
\[\zeta_{i}(x,u)=(0,\tau_{i}(x,u))\quad\mbox{if }\kappa_{C,i}(x,u)=\infty,\]
_form an orthogonal basis of \(\mbox{Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,Q,(x,u))\)._
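A simple example in which all the numbers \(\kappa_{C,i}\) are infinite is a single point (recorded here for the reader's convenience): if \(C=\{p\}\) then

\[\mbox{nor}(C)=\{p\}\times\mathbb{S}^{n}\qquad\text{and}\qquad\kappa_{C,1}(p,u)=\ldots=\kappa_{C,n}(p,u)=\infty\quad\text{for every }u\in\mathbb{S}^{n},\]

so that, in agreement with Lemma 2.1, \(\mbox{Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,\mbox{nor}(C),(p,u))=\{0\}\times u^{\perp}\) is spanned by the vectors \(\zeta_{i}=(0,\tau_{i})\).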
We also need the following result, relating the numbers \(\kappa_{C,i}\) with the principal curvature of a \(C^{2}\)-hypersurface that intersects \(C\).
**Lemma 2.2** (cf. [10, Lemma 6.1]).: _If \(\Sigma\) is an embedded \(C^{2}\)-hypersurface then there exists \(R\subseteq\Sigma\cap\partial C\) such that_
\[\mathcal{H}^{n}((\Sigma\cap\partial C)\setminus R)=0,\qquad\mbox{nor}(C)\,\llcorner\,R\subseteq\mbox{nor}(\Sigma)\]
_and_
\[\kappa_{C,i}(x,u)=\kappa_{\Sigma,i}(x,u)\quad\mbox{for }\mathcal{H}^{n}\mbox{ a.e. }(x,u)\in\mbox{nor}(C)\,\llcorner\,R\]
_for \(1\leq i\leq n\). Here \(\operatorname{nor}(\Sigma)\) is the classical unit-normal bundle of \(\Sigma\) and_
\[\kappa_{\Sigma,1}(x,u)\leq\ldots\leq\kappa_{\Sigma,n}(x,u)\]
_are the principal curvatures of \(\Sigma\) at \(x\) in the direction \(u\)._
We recall now the general Heintze-Karcher type inequality for arbitrary closed sets proved in [10], which is used in the proof of Theorem 4.2.
**Theorem 2.3** (cf. [10, Theorem 3.20]).: _Let \(C\subseteq\mathbb{R}^{n+1}\) be a bounded closed set with non-empty interior. Let \(K=\mathbb{R}^{n+1}\setminus\operatorname{interior}(C)\) and assume that_
\[\sum_{i=1}^{n}\kappa_{K,i}(x,u)\leq 0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\operatorname{nor}(K)$.}\]
_Then_
\[(n+1)\mathcal{L}^{n+1}(\operatorname{interior}(C))\leq\int_{\operatorname{ nor}(K)}J_{n}^{\operatorname{nor}(K)}\pi_{0}(x,u)\,\frac{n}{|\sum_{i=1}^{n} \kappa_{K,i}(x,u)|}\,d\mathcal{H}^{n}(x,u).\]
_If equality holds and there exists \(q<\infty\) such that \(|\sum_{i=1}^{n}\kappa_{K,i}(x,u)|\leq q\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\operatorname{nor}(K)\), then \(C\) equals the union of finitely many closed balls with disjoint interiors._
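A closed ball attains equality in Theorem 2.3 (a routine verification, recorded for the reader's convenience): if \(C=\overline{B_{\rho}}\) then \(K=\mathbb{R}^{n+1}\setminus B_{\rho}\), \(\operatorname{nor}(K)=\{(x,-x/\rho):|x|=\rho\}\) and, with the sign conventions above, \(\kappa_{K,i}\equiv-1/\rho\) for \(1\leq i\leq n\). Since \(\pi_{0}\) is injective on \(\operatorname{nor}(K)\), the area formula gives

\[\int_{\operatorname{nor}(K)}J_{n}^{\operatorname{nor}(K)}\pi_{0}\,\frac{n}{|\sum_{i=1}^{n}\kappa_{K,i}|}\,d\mathcal{H}^{n}=\frac{n}{n/\rho}\,\mathcal{H}^{n}(\partial B_{\rho})=\rho\,\mathcal{H}^{n}(\partial B_{\rho})=(n+1)\mathcal{L}^{n+1}(B_{\rho}).\]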
## 3. Legendrian cycles and Curvature measures
We denote with \(E_{n+1}\in\mathcal{D}^{n+1}(\mathbb{R}^{n+1})\) the _canonical volume form_ of \(\mathbb{R}^{n+1}\), and with
\[\alpha_{\mathbb{R}^{n+1}}:\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\to\bigwedge^{ 1}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\]
the _contact \(1\)-form_; i.e.
\[\langle(y,v),\alpha_{\mathbb{R}^{n+1}}(x,u)\rangle=y\bullet u\qquad\text{for }(x,u),(y,v)\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}.\]
A _Legendrian cycle of \(\mathbb{R}^{n+1}\)_ is an integer-multiplicity rectifiable \(n\)-current \(T\) of \(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\) such that
\[\operatorname{spt}(T)\text{ is a compact subset of }\mathbb{R}^{n+1}\times\mathbb{S}^{n},\quad\partial T=0,\quad T\,\llcorner\,\alpha_{\mathbb{R}^{n+1}}=0.\]
The following result describes the approximate tangent space of the carrier of a Legendrian cycle.
**Theorem 3.1** (cf. [11, Theorem 9.2]).: _Let \(T\) be a Legendrian cycle of \(\mathbb{R}^{n+1}\) with carrier \(W_{T}\). For \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\) there exist numbers \(-\infty<\kappa_{1}(x,u)\leq\ldots\leq\kappa_{n}(x,u)\leq\infty\) and vectors \(\tau_{1}(x,u),\ldots,\tau_{n}(x,u)\) such that \(\{\tau_{1}(x,u),\ldots,\tau_{n}(x,u),u\}\) is a positively oriented basis of \(\mathbb{R}^{n+1}\) (i.e. \(\langle\tau_{1}(x,u)\wedge\ldots\wedge\tau_{n}(x,u)\wedge u,E_{n+1}\rangle=1\)) and the vectors \(\zeta_{1}(x,u),\ldots,\zeta_{n}(x,u)\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\), defined as_
\[\zeta_{i}(x,u)=(\tau_{i}(x,u),\kappa_{i}(x,u)\tau_{i}(x,u))\quad\text{if $ \kappa_{i}(x,u)<\infty$}\]
\[\zeta_{i}(x,u)=(0,\tau_{i}(x,u))\quad\text{if $\kappa_{i}(x,u)=\infty$},\]
_form an orthogonal basis of \(\operatorname{Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,W_{T},(x,u))\). The numbers \(\kappa_{i}(x,u)\) for \(i=1,\ldots,n\) are uniquely determined, as well as the subspaces of \(\mathbb{R}^{n+1}\) spanned by \(\{\tau_{i}(x,u):\pi_{1}(\zeta_{i}(x,u))=\kappa_{k}(x,u)\tau_{i}(x,u)\}\) for each \(k\)._
**Definition 4**.: Given \(T\), \(\tau_{1},\ldots,\tau_{n}\) and \(\zeta_{1},\ldots,\zeta_{n}\) as in Theorem 3.1, we define
\[\mathcal{J}_{T}(x,u)=\frac{1}{|\zeta_{1}(x,u)\wedge\ldots\wedge\zeta_{n}(x,u)|}\]
and
\[\zeta_{T}(x,u)=\frac{\zeta_{1}(x,u)\wedge\ldots\wedge\zeta_{n}(x,u)}{|\zeta_{1 }(x,u)\wedge\ldots\wedge\zeta_{n}(x,u)|}\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\).
**Remark 3.1**.: This definition is well-posed, in the sense that \(\mathcal{J}_{T}\) and \(\zeta_{T}\) do not depend, up to a set of \(\mathcal{H}^{n}\)-measure zero, on the choice of the vectors \(\tau_{1},\ldots,\tau_{n}\).
**Remark 3.2**.: There exists an \(\mathcal{H}^{n}\operatorname{\llcorner}W_{T}\) almost unique integer-valued map \(i_{T}\) such that \(i_{T}(x,u)=0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\setminus W_{T}\), \(i_{T}(x,u)>0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\) and
\[T=(\mathcal{H}^{n}\operatorname{\llcorner}W_{T})i_{T}\wedge\zeta_{T}.\]
**Definition 5** (Lipschitz-Killing differential forms).: For each \(k\in\{0,\ldots,n\}\) we define
\[\Sigma_{n,k}=\bigg{\{}\sigma:\{1,\ldots,n\}\to\{0,1\}:\sum_{i=1}^{n}\sigma(i) =n-k\bigg{\}}.\]
The _\(k\)-th Lipschitz-Killing differential form_\(\varphi_{k}:\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\to\bigwedge^{n}(\mathbb{R}^{n +1}\times\mathbb{R}^{n+1})\) is defined by
\[\langle\xi_{1}\wedge\cdots\wedge\xi_{n},\varphi_{k}(x,u)\rangle\] \[\qquad=\sum_{\sigma\in\Sigma_{n,k}}\langle\pi_{\sigma(1)}(\xi_{1 })\wedge\cdots\wedge\pi_{\sigma(n)}(\xi_{n})\wedge u,E_{n+1}\rangle,\]
for every \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\).
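For orientation, we record the two extreme cases of Definition 5: \(\Sigma_{n,n}\) contains only \(\sigma\equiv 0\) and \(\Sigma_{n,0}\) contains only \(\sigma\equiv 1\), so that

\[\langle\xi_{1}\wedge\cdots\wedge\xi_{n},\varphi_{n}(x,u)\rangle=\langle\pi_{0}(\xi_{1})\wedge\cdots\wedge\pi_{0}(\xi_{n})\wedge u,E_{n+1}\rangle,\]
\[\langle\xi_{1}\wedge\cdots\wedge\xi_{n},\varphi_{0}(x,u)\rangle=\langle\pi_{1}(\xi_{1})\wedge\cdots\wedge\pi_{1}(\xi_{n})\wedge u,E_{n+1}\rangle;\]

thus \(\varphi_{n}\) only sees the spatial components of its arguments, while \(\varphi_{0}\) only sees the spherical ones.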
**Definition 6** (Curvature measures).: If \(T\) is a Legendrian cycle in \(\mathbb{R}^{n+1}\) and \(k\in\{0,\ldots,n\}\), then _\(k\)-th curvature measure_ of \(T\) is the \(0\)-current
\[T\operatorname{\llcorner}\varphi_{k}\in\mathcal{D}_{0}(\mathbb{R}^{n+1} \times\mathbb{R}^{n+1}).\]
**Definition 7** (Principal curvatures).: Given a Legendrian cycle \(T\) of \(\mathbb{R}^{n+1}\) we define \(\kappa_{T,i}=\kappa_{i}\), where \(\kappa_{1}\leq\ldots\leq\kappa_{n}\) are the functions defined \(\mathcal{H}^{n}\) a.e. on \(W_{T}\) given by Theorem 3.1.
The numbers \(\kappa_{T,1}\leq\ldots\leq\kappa_{T,n}\) are the principal curvatures of \(T\).
**Definition 8** (Mean curvature functions).: Given a Legendrian cycle \(T\) of \(\mathbb{R}^{n+1}\) we define
\[W_{T}^{(i)}=\{(x,u)\in W_{T}:\kappa_{T,i}(x,u)<\infty,\ \kappa_{T,i+1}(x,u)=+ \infty\}\quad\text{for }i=1,\ldots,n-1\]
\[W_{T}^{(0)}=\{(x,u)\in W_{T}:\kappa_{T,1}(x,u)=+\infty\}\]
\[W_{T}^{(n)}=\{(x,u)\in W_{T}:\kappa_{T,n}(x,u)<+\infty\}.\]
Then we define the _\(r\)-th mean curvature function of \(T\)_ as
\[H_{T,r}=\mathbf{1}_{W_{T}^{(n-r)}}+\sum_{i=1}^{r}\sum_{\lambda\in\Lambda_{n-r+ i,i}}\kappa_{T,\lambda(1)}\cdots\kappa_{T,\lambda(i)}\mathbf{1}_{W_{T}^{(n-r+ i)}}\qquad\text{for }r=1,\ldots,n\]
where \(\Lambda_{n,k}\), for \(k\leq n\), is the set of all increasing functions from \(\{1,\ldots,k\}\) to \(\{1,\ldots,n\}\). Additionally,
\[H_{T,0}=\mathbf{1}_{W_{T}^{(n)}}.\]
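On the top-dimensional stratum \(W_{T}^{(n)}\), where all the principal curvatures are finite, only the summand with \(i=r\) survives in the definition of \(H_{T,r}\), and one recovers the usual \(r\)-th elementary symmetric polynomial:

\[H_{T,r}(x,u)=\sum_{\lambda\in\Lambda_{n,r}}\kappa_{T,\lambda(1)}(x,u)\cdots\kappa_{T,\lambda(r)}(x,u)\qquad\text{for }\mathcal{H}^{n}\text{ a.e. }(x,u)\in W_{T}^{(n)},\]

for \(r=1,\ldots,n\); this is consistent with Remark 3.4 below, where \(H_{N_{\Omega},k}(x,u)=H_{\Omega,k}(x)\) for a \(C^{2}\)-domain \(\Omega\).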
**Remark 3.3**.: It follows from Theorem 3.1 that
\[J_{n}^{W_{T}}\pi_{0}(x,u)=\mathcal{J}_{T}(x,u)>0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}^{(n)}$}\]
and
\[J_{n}^{W_{T}}\pi_{0}(x,u)=0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}\setminus W_{T}^{(n)}$.}\]
The definition of mean curvature functions is clearly motivated by the following result.
**Lemma 3.2**.: _If \(T\) is a Legendrian cycle of \(\mathbb{R}^{n+1}\) and \(k\in\{0,\ldots,n\}\) then_
\[(T\llcorner\varphi_{k})(\phi)=\int\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u) \,H_{T,n-k}(x,u)\,d\mathcal{H}^{n}(x,u)\]
_for every \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\)._
Proof.: For \(j\in\{0,\ldots,n\}\) we define
\[\Sigma_{n,k}^{(j)}=\{\sigma\in\Sigma_{n,k}:\sigma(i)=1\text{ for every $i>j$}\}.\]
Notice that \(\Sigma_{n,k}^{(j)}=\varnothing\) if \(j<k\). Moreover,
\[\left\langle\pi_{\sigma(1)}(\zeta_{1}(x,u))\wedge\ldots\wedge\pi_{\sigma(n)}( \zeta_{n}(x,u))\wedge u,E_{n+1}\right\rangle=0\]
for \((x,u)\in W_{T}^{(j)}\) and \(\sigma\in\Sigma_{n,k}\setminus\Sigma_{n,k}^{(j)}\), where \(\tau_{1},\ldots,\tau_{n}\) and \(\zeta_{1},\ldots,\zeta_{n}\) are given as in Theorem 3.1. Therefore we infer that
\[(T\llcorner\varphi_{k})(\phi)\] \[\qquad=\sum_{j=k}^{n}\sum_{\sigma\in\Sigma_{n,k}^{(j)}}\int_{W_{ T}^{(j)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\left\langle\pi_{\sigma(1)}( \zeta_{1}(x,u))\wedge\ldots\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.\ldots\wedge\pi_{\sigma(n)}( \zeta_{n}(x,u))\wedge u,E_{n+1}\right\rangle d\mathcal{H}^{n}(x,u).\]
Noting that \(\left\langle\tau_{1}(x,u)\wedge\ldots\wedge\tau_{n}(x,u)\wedge u,E_{n+1} \right\rangle=1\), we infer for \(j>k\) that
\[\sum_{\sigma\in\Sigma_{n,k}^{(j)}}\int_{W_{T}^{(j)}}\phi(x,u)\,i_ {T}(x,u)\,\mathcal{J}_{T}(x,u)\left\langle\pi_{\sigma(1)}(\zeta_{1}(x,u)) \wedge\ldots\right.\] \[\qquad\qquad\qquad\qquad\left.\ldots\wedge\pi_{\sigma(n)}(\zeta _{n}(x,u))\wedge u,E_{n+1}\right\rangle d\mathcal{H}^{n}(x,u)\] \[\qquad=\sum_{\lambda\in\Lambda_{j,j-k}}\int_{W_{T}^{(j)}}\phi(x, u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\prod_{\ell=1}^{j-k}\kappa_{T,\lambda( \ell)}(x,u)\,d\mathcal{H}^{n}(x,u),\]
while for \(j=k\) we have that
\[\sum_{\sigma\in\Sigma_{n,k}^{(k)}}\int_{W_{T}^{(k)}}\phi(x,u)\,i_ {T}(x,u)\,\mathcal{J}_{T}(x,u)\left\langle\pi_{\sigma(1)}(\zeta_{1}(x,u)) \wedge\ldots\right.\] \[\qquad\qquad\qquad\qquad\left.\ldots\wedge\pi_{\sigma(n)}(\zeta _{n}(x,u))\wedge u,E_{n+1}\right\rangle d\mathcal{H}^{n}(x,u)\] \[\qquad=\int_{W_{T}^{(k)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}( x,u)\,d\mathcal{H}^{n}(x,u).\]
We conclude
\[(T\,\llcorner\,\varphi_{k})(\phi)\] \[\quad=\sum_{i=1}^{n-k}\sum_{\lambda\in\Lambda_{k+i,i}}\int_{W_{T}^{ (k+i)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\prod_{\ell=1}^{i}\kappa_{T, \lambda(\ell)}(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\qquad\qquad+\int_{W_{T}^{(k)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J} _{T}(x,u)\,d\mathcal{H}^{n}(x,u),\]
which is the desired conclusion.
We conclude this section by introducing the normal cycle of a \(C^{2}\)-domain.
**Definition 9** (Normal cycle of a \(C^{2}\)-domain).: Suppose \(\Omega\subseteq\mathbb{R}^{n+1}\) is an open set with \(C^{2}\)-boundary. Then we define
\[N_{\Omega}=(\overline{\nu}_{\Omega})_{\#}((\mathcal{H}^{n}\,\llcorner\, \partial\Omega)\wedge\star\nu_{\Omega})\in\mathcal{D}_{n}(\mathbb{R}^{n+1} \times\mathbb{S}^{n}).\]
**Remark 3.4** (Classical).: Since \((\mathcal{H}^{n}\,\llcorner\,\partial\Omega)\wedge\star\nu_{\Omega}\) is an \(n\)-dimensional cycle of \(\mathbb{R}^{n+1}\), it follows that \(N_{\Omega}\) is an \(n\)-dimensional cycle in \(\mathbb{R}^{n+1}\times\mathbb{S}^{n}\). Moreover, the area formula for rectifiable currents (see [12, 4.1.30]) allows us to conclude
\[N_{\Omega}(\psi) =\int_{\partial\Omega}\langle\Lambda_{n}\mathrm{D}\overline{\nu} _{\Omega}(x)(\star\nu_{\Omega}(x)),\psi(\overline{\nu}_{\Omega}(x))\rangle\, d\mathcal{H}^{n}(x)\] \[=\big{[}\big{(}\mathcal{H}^{n}\,\llcorner\,\overline{\nu}_{ \Omega}(\partial\Omega)\big{)}\wedge\eta\big{]}(\psi), \tag{3.1}\]
where
\[\eta(x,u)=\frac{\Lambda_{n}D\overline{\nu}_{\Omega}(x)(\star\nu_{\Omega}(x))} {|\Lambda_{n}D\overline{\nu}_{\Omega}(x)(\star\nu_{\Omega}(x))|}\quad\text{ for }(x,u)\in\overline{\nu}_{\Omega}(\partial\Omega).\]
It follows that \(N_{\Omega}\in\boldsymbol{I}_{n}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). We notice that
\[\boldsymbol{M}(N_{\Omega})=\mathcal{H}^{n}(\overline{\nu}_{\Omega}(\partial \Omega))=\int_{\partial\Omega}J_{n}^{\partial\Omega}\overline{\nu}_{\Omega} \,d\mathcal{H}^{n}=\int_{\partial\Omega}\prod_{i=1}^{n}(1+\kappa_{\Omega,i}^ {2})^{\frac{1}{2}}\,d\mathcal{H}^{n},\]
and by arithmetic-geometric mean inequality we have that
\[\boldsymbol{M}(N_{\Omega})\leq 2^{\frac{n}{2}}\bigg{(}\mathcal{H}^{n}(\partial \Omega)+n^{-\frac{n}{2}}\int_{\partial\Omega}A_{\Omega}(x)^{n}\,d\mathcal{H}^{n}\bigg{)}. \tag{3.2}\]
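For completeness, we note that (3.2) follows from the elementary pointwise estimate

\[\prod_{i=1}^{n}(1+\kappa_{\Omega,i}^{2})^{\frac{1}{2}}\leq\bigg{(}\frac{1}{n}\sum_{i=1}^{n}(1+\kappa_{\Omega,i}^{2})\bigg{)}^{\frac{n}{2}}=\bigg{(}1+\frac{A_{\Omega}^{2}}{n}\bigg{)}^{\frac{n}{2}}\leq 2^{\frac{n}{2}}\big{(}1+n^{-\frac{n}{2}}A_{\Omega}^{n}\big{)},\]

where the first step is the arithmetic-geometric mean inequality and the last step uses \((1+t)^{\frac{n}{2}}\leq 2^{\frac{n}{2}}\max\{1,t\}^{\frac{n}{2}}\leq 2^{\frac{n}{2}}(1+t^{\frac{n}{2}})\) with \(t=A_{\Omega}^{2}/n\).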
We now check that \(N_{\Omega}\,\llcorner\,\alpha_{\mathbb{R}^{n+1}}=0\): if \(x\in\partial\Omega\) and \(\tau_{1},\ldots,\tau_{n}\) is an orthonormal basis of \(\mathrm{Tan}(\partial\Omega,x)\) such that \(\star\nu_{\Omega}=\tau_{1}\wedge\ldots\wedge\tau_{n}\) and \(\mathrm{D}\nu_{\Omega}(x)(\tau_{i})=\kappa_{\Omega,i}(x)\tau_{i}\) for \(1\leq i\leq n\), then we use the shuffle formula (see [12, p. 18]) to compute
\[\langle\Lambda_{n}\mathrm{D}\overline{\nu}_{\Omega}(x)(\star\nu _{\Omega}(x)),(\psi\wedge\alpha_{\mathbb{R}^{n+1}})(\overline{\nu}_{\Omega}(x))\rangle\] \[\qquad=\langle(\tau_{1},\kappa_{\Omega,1}(x)\tau_{1})\wedge \ldots\wedge(\tau_{n},\kappa_{\Omega,n}(x)\tau_{n}),(\psi\wedge\alpha_{\mathbb{ R}^{n+1}})(\overline{\nu}_{\Omega}(x))\rangle=0\]
for every \(\psi\in\mathcal{D}^{n-1}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). We conclude that \(N_{\Omega}\) is a Legendrian cycle of \(\mathbb{R}^{n+1}\). Additionally, one easily infers that \(W_{N_{\Omega}}^{(n)}=W_{N_{\Omega}}=\overline{\nu}_{\Omega}(\partial\Omega)\) and
\[\kappa_{N_{\Omega},i}(x,u)=\kappa_{\Omega,i}(x),\quad\mathcal{J}_{N_{\Omega}}(x,u)=\frac{1}{|\Lambda_{n}\mathrm{D}\overline{\nu}_{\Omega}(x)(\star\nu_{\Omega} (x))|}\]
for every \((x,u)\in W_{N_{\Omega}}\). In particular, \(H_{N_{\Omega},k}(x,u)=H_{\Omega,k}(x)\) for \((x,u)\in\overline{\nu}_{\Omega}(\partial\Omega)\) and by Remark 3.3
\[(N_{\Omega}\,\llcorner\,\varphi_{n-k})(\phi)=\int_{\partial\Omega}H_{\Omega,k }(x)\,\phi(x,\nu_{\Omega}(x))\,d\mathcal{H}^{n}(x) \tag{3.3}\]
for all \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\) and \(k\in\{0,\ldots,n\}\).
**Lemma 3.3**.: _Let \(\Omega_{\ell}\) be an asymptotically \(k\)-mean convex sequence such that \(N_{\Omega_{\ell}}\) weakly converges to a Legendrian cycle \(T\) of \(\mathbb{R}^{n+1}\). Then_
\[H_{T,i}(x,u)\geq 0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}$ and for $i\in\{1,\ldots,k\}$.}\]
Proof.: Let \(i\in\{1,\ldots,k\}\). By (3.3) we have that
\[\big{(}N_{\Omega_{\ell}}\,\llcorner\,\varphi_{n-i}\big{)}(\phi)\] \[\quad=-\int_{\partial\Omega_{\ell}}H_{\Omega_{\ell},i}^{-}(x)\, \phi(x,\nu_{\Omega_{\ell}}(x))\,d\mathcal{H}^{n}(x)+\int_{\partial\Omega_{ \ell}}H_{\Omega_{\ell},i}^{+}(x)\,\phi(x,\nu_{\Omega_{\ell}}(x))\,d\mathcal{H }^{n}(x)\]
for \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\), whence we infer
\[(T\,\llcorner\,\varphi_{n-i})(\phi)=\lim_{\ell\to\infty}\int_{\partial\Omega_{\ell}}H_ {\Omega_{\ell},i}^{+}(x)\,\phi(x,\nu_{\Omega_{\ell}}(x))\,d\mathcal{H}^{n}(x)\geq 0\]
for \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\) with \(\phi\geq 0\). Now we obtain the desired conclusion from Lemma 3.2.
**Lemma 3.4** (Minkowski-Hsiung identities).: _Suppose \(\Omega_{\ell}\subseteq\mathbb{R}^{n+1}\) is a compactly supported sequence of open sets with \(C^{2}\)-boundary and \(T\) is a Legendrian cycle of \(\mathbb{R}^{n+1}\) such that \(N_{\Omega_{\ell}}\to T\) weakly._
_Then the Minkowski identity holds for \(T\):_
\[(n-k+1)\int i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k-1}(x,u)\,d\mathcal{H}^{n }(x,u)\]
\[=k\int(x\bullet u)i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k}(x,u)\,d\mathcal{H }^{n}(x,u) \tag{3.4}\]
_for \(k=1,\ldots,n\). Additionally, if \(\Omega_{\ell}\) converges in measure to a set \(\Omega\), then_
\[(n+1)\mathcal{L}^{n+1}(\Omega)=\int(x\bullet u)i_{T}(x,u)\,\mathcal{J}_{T}(x, u)\,H_{T,0}(x,u)\,d\mathcal{H}^{n}(x,u). \tag{3.5}\]
Proof.: Let \(R>0\) be such that \(\Omega_{\ell}\subseteq B_{R}\) for all \(\ell\geq 1\). Define \(\sigma:\mathbb{R}^{n+1}\times\mathbb{S}^{n}\to\mathbb{R}\) as \(\sigma(x,u)=x\bullet u\) for \((x,u)\in\mathbb{R}^{n+1}\times\mathbb{S}^{n}\) and choose \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\) such that \(\phi(x,u)=1\) for all \((x,u)\in B_{2R}\times\mathbb{S}^{n}\). The classical Minkowski-Hsiung identities for \(C^{2}\)-domains and (3.3) imply
\[(n-k+1)N_{\Omega_{\ell}}(\phi\,\varphi_{n-k+1}) =(n-k+1)\int_{\partial\Omega_{\ell}}H_{\Omega_{\ell},k-1}(x)\,d \mathcal{H}^{n}(x)\] \[=k\int_{\partial\Omega_{\ell}}(x\bullet\nu_{\Omega_{\ell}}(x))H_{ \Omega_{\ell},k}(x)\,d\mathcal{H}^{n}(x)\] \[=kN_{\Omega_{\ell}}(\sigma\,\phi\,\varphi_{n-k}).\]
If \(\ell\to\infty\) this yields that
\[(n-k+1)T(\phi\,\varphi_{n-k+1})=kT(\sigma\,\phi\,\varphi_{n-k})\]
and we obtain (3.4) from Lemma 3.2. Analogously, if \(\Omega_{\ell}\to\Omega\) in measure, then the divergence theorem and the area formula imply
\[(n+1)\mathcal{L}^{n+1}(\Omega_{\ell})=\int_{\partial\Omega_{\ell}}(a\bullet\nu _{\Omega_{\ell}}(a))\,d\mathcal{H}^{n}(a)=N_{\Omega_{\ell}}(\sigma\,\phi\, \varphi_{n})\]
and we pass to the limit to obtain (3.5).
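As a consistency check, when \(T=N_{B_{\rho}}\) is the normal cycle of a round ball one has \(i_{T}\equiv 1\), \(x\bullet u=\rho\) and \(H_{T,j}=\binom{n}{j}\rho^{-j}\) on \(W_{T}=\overline{\nu}_{B_{\rho}}(\partial B_{\rho})\); since \(\int\mathcal{J}_{T}\,d\mathcal{H}^{n}=\mathcal{H}^{n}(\partial B_{\rho})\) by the area formula, the identity (3.4) reduces to the combinatorial relation

\[(n-k+1)\binom{n}{k-1}=k\binom{n}{k}.\]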
## 4. Heintze-Karcher inequality for varifolds of bounded mean curvature
The next lemma provides fundamental structural properties of a set of finite perimeter whose reduced boundary is a varifold of bounded mean curvature.
**Lemma 4.1**.: _Let \(E\subseteq\mathbb{R}^{n+1}\) be a set of finite perimeter in \(\mathbb{R}^{n+1}\) and positive volume such that \(V_{E}\) is a varifold of bounded mean curvature._
_Then there exists a closed set \(C\subseteq\mathbb{R}^{n+1}\) with non-empty interior such that_
\[\operatorname{spt}\lVert V_{C}\rVert=\partial C,\qquad\mathcal{L}^{n+1}((C \setminus E)\cup(E\setminus C))=0, \tag{4.1}\]
\[\mathcal{H}^{n}\big{[}\partial C\setminus\big{(}\partial^{*}C\cap\pi_{0}( \operatorname{nor}(C))\big{)}\big{]}=0, \tag{4.2}\]
\[\mathcal{H}^{n}\big{(}\operatorname{nor}(\partial C)\operatorname{\llcorner}Z \big{)}=0\quad\text{whenever $Z\subseteq\partial C$ with $\mathcal{H}^{n}(Z)=0$,} \tag{4.3}\]
\[\operatorname{nor}(C)\operatorname{\llcorner}\partial^{*}C=\{(x,\nu_{C}(x)): x\in\partial^{*}C\cap\pi_{0}(\operatorname{nor}(C))\}, \tag{4.4}\]
\[\mathcal{H}^{n}(\operatorname{nor}(C)\setminus\overline{\nu}_{C}(\partial^{* }C))=0, \tag{4.5}\]
_and_
\[\sum_{i=1}^{n}\kappa_{C,i}(x,u)=\boldsymbol{h}(V_{C},x)\bullet u\quad\text{ for $\mathcal{H}^{n}$ a.e. $(x,u)\in\operatorname{nor}(C)$.} \tag{4.6}\]
Proof.: Since \(V_{E}\) is a varifold of bounded mean curvature, it follows from [1, Theorem 8.6] that \(\Theta^{n}(\mathcal{H}^{n}\operatorname{\llcorner}\partial^{*}E,\cdot)\) is an upper-semicontinuous function on \(\mathbb{R}^{n+1}\). We conclude that \(\Theta^{n}(\mathcal{H}^{n}\operatorname{\llcorner}\partial^{*}E,x)\geq 1\) for every \(x\in\overline{\partial^{*}E}\) and \(\mathcal{H}^{n}(\overline{\partial^{*}E}\setminus\partial^{*}E)=0\) by standard density results. We infer from [1, Lemma 6.2] that there exists a closed set \(C\subseteq\mathbb{R}^{n+1}\) with non-empty interior such that (4.1) holds and
\[\mathcal{H}^{n}(\partial C\setminus\partial^{*}C)=0. \tag{4.7}\]
For every \(x\in\partial^{*}C\), by the Allard-Duggan regularity theorem [10], there exist an open set \(U\subseteq\mathbb{R}^{n+1}\) with \(x\in U\), an open set \(V\subseteq\mathbb{R}^{n}\) and a function \(f\in W^{2,p}(V,\mathbb{R})\) for every \(p<\infty\), such that \(U\cap\partial^{*}C\) coincides with the graph of \(f\), up to a rotation in \(\mathbb{R}^{n+1}\). We now recall the Reshetnyak differentiability theorem, see [11, Theorem 2]: if \(f\in W^{2,p}(V,\mathbb{R})\) with \(p>n\), then for \(\mathcal{L}^{n}\) a.e. \(a\in V\) there exists a polynomial function \(P_{a}\) of degree at most \(2\) such that \(P_{a}(a)=f(a)\) and
\[\lim_{b\to a}\frac{|f(b)-P_{a}(b)|}{|b-a|^{2}}=0. \tag{4.8}\]
Denoting with \(G\) the graph of \(f\) and with \(\nu_{G}\) a continuous unit-normal on \(G\), we easily conclude from (4.8) that for \(\mathcal{H}^{n}\) a.e. \(x\in G\) there exists \(s>0\) such that
\[\operatorname{dist}(x+s\nu_{G}(x),G)=s\quad\text{and}\quad\operatorname{dist}(x-s\nu_{G}(x),G)=s.\]
Hence, it follows that
\[\mathcal{H}^{n}(\partial^{*}C\setminus\pi_{0}(\operatorname{nor}(C)))=0,\]
and we obtain (4.2) from (4.7).
The assertion in (4.3) is proved in [15, Theorem 3.8]2. To prove (4.4) we recall from De Giorgi's theorem [1, Theorem 3.59] that if \(x\in\partial^{*}C\) then the sets \(\frac{C-x}{r}\) converge in measure as \(r\to 0\) to a halfspace perpendicular to \(\nu_{C}(x)\).
Consequently, if \((x,u)\in\operatorname{nor}(C)\) and \(x\in\partial^{*}C\), we readily conclude that \(u=\nu_{C}(x)\). This proves (4.4), whence we infer that
\[\operatorname{nor}(C)\,\llcorner(\partial C\setminus\partial^{*}C)= \operatorname{nor}(C)\setminus\overline{\nu}_{C}(\partial^{*}C). \tag{4.9}\]
Then (4.5) follows from (4.9).
Finally we prove (4.6). By the \(C^{2}\)-rectifiability theorem of Menne [13] we obtain a countable family \(F\) of embedded \(C^{2}\)-hypersurfaces such that
\[\mathcal{H}^{n}\Big{(}\partial C\setminus\bigcup F\Big{)}=0 \tag{4.10}\]
and for each \(\Sigma\in F\) we have that
\[\boldsymbol{h}(\Sigma,x)=\boldsymbol{h}(V_{C},x)\qquad\text{for $\mathcal{H}^{n}$ a.e. $x\in\Sigma\cap\partial C$}. \tag{4.11}\]
(Here \(\boldsymbol{h}(\Sigma,x)\) is the mean curvature vector of \(\Sigma\).) For each \(\Sigma\in F\), if \(R_{\Sigma}\subseteq\Sigma\cap\partial C\) is a set as in Lemma 2.2 and \(R^{\prime}_{\Sigma}\) is the set of \(x\in\Sigma\cap\partial C\) such that (4.11) holds, we have that \(\mathcal{H}^{n}\big{(}(\Sigma\cap\partial C)\setminus(R_{\Sigma}\cap R^{ \prime}_{\Sigma})\big{)}=0\) and it follows from (4.3) that
\[\mathcal{H}^{n}\big{[}\operatorname{nor}(C)\,\llcorner\,\big{(}(\Sigma\cap \partial C)\setminus(R_{\Sigma}\cap R^{\prime}_{\Sigma})\big{)}\big{]}=0;\]
therefore we can conclude that
\[\sum_{i=1}^{n}\kappa_{C,i}(x,u)=\sum_{i=1}^{n}\kappa_{\Sigma,i}(x,u)= \boldsymbol{h}(\Sigma,x)\bullet u=\boldsymbol{h}(V_{C},x)\bullet u\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\operatorname{nor}(C)\,\llcorner\,(\Sigma\cap\partial C)\). Now (4.6) follows from (4.10).
We can now state and prove the Heintze-Karcher inequality for sets of finite perimeter and bounded distributional mean curvature.
**Theorem 4.2**.: _Let \(E\subseteq\mathbb{R}^{n+1}\) be a set of finite perimeter and finite volume such that \(V_{E}\) is a varifold of bounded mean curvature and_
\[\boldsymbol{h}(V_{E},x)\bullet\nu_{E}(x)\geq 0\quad\text{for $\mathcal{H}^{n}$ a.e. $x\in\partial^{*}E$.}\]
_Then_
\[\mathcal{L}^{n+1}(E)\leq\frac{n}{n+1}\int_{\partial^{*}E}\frac{1}{\boldsymbol {h}(V_{E},x)\bullet\nu_{E}(x)}\,d\mathcal{H}^{n}(x), \tag{4.12}\]
_and equality is achieved if and only if \(E\) is \(\mathcal{L}^{n+1}\)-almost equal to a finite union of closed balls with disjoint interiors._
Proof.: We assume \(\mathcal{L}^{n+1}(E)>0\). Define \(F=\mathbb{R}^{n+1}\setminus E\) and notice that
\[\partial^{*}F=\partial^{*}E\quad\text{and}\quad\nu_{F}=-\nu_{E}.\]
Since \(V_{F}=V_{E}\) is a varifold of bounded mean curvature, we can apply Lemma 4.1 to find a closed set \(K\subseteq\mathbb{R}^{n+1}\) with non-empty interior such that (4.1)-(4.6) hold with \(E\) and \(C\) replaced by \(F\) and \(K\), respectively. In particular (since \(V_{E}=V_{K}\) and \(\nu_{K}=-\nu_{E}\))
\[\sum_{i=1}^{n}\kappa_{K,i}(x,u)=\boldsymbol{h}(V_{E},x)\bullet u\quad\text{ for $\mathcal{H}^{n}$ a.e. $(x,u)\in\operatorname{nor}(K)$},\]
\[\operatorname{nor}(K)\,\llcorner\,\partial^{*}K=\{(x,-\nu_{E}(x)):x\in \partial^{*}K\cap\pi_{0}(\operatorname{nor}(K))\}\]
and
\[\mathcal{H}^{n}(\operatorname{nor}(K)\setminus(\operatorname{nor}(K)\,\llcorner \,\partial^{*}K))=0.\]
Hence,
\[\sum_{i=1}^{n}\kappa_{K,i}(x,u)=-\boldsymbol{h}(V_{E},x)\bullet\nu_{E}(x)\leq 0 \quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\operatorname{nor}(K)$}\]
and it follows from Theorem 2.3 and area formula that
\[(n+1)\mathcal{L}^{n+1}(E) \leq\int_{\operatorname{nor}(K)}J_{n}^{\operatorname{nor}(K)} \pi_{0}(x,u)\,\frac{n}{\boldsymbol{h}(V_{E},x)\bullet\nu_{E}(x)}\,d\mathcal{H} ^{n}(x,u)\] \[=\int_{\partial^{*}E}\frac{n}{\boldsymbol{h}(V_{E},x)\bullet\nu_ {E}(x)}\,d\mathcal{H}^{n}(x).\]
If equality is achieved in (4.12), then, since \(V_{E}\) is a varifold of bounded mean curvature, we obtain the conclusion directly from the rigidity statement in Theorem 2.3.
**Corollary 4.3**.: _Let \(E\subseteq\mathbb{R}^{n+1}\) be a set of finite perimeter with positive and finite volume such that \(V_{E}\) is a varifold of bounded mean curvature and_
\[\boldsymbol{h}(V_{E},x)\bullet\nu_{E}(x)\geq\frac{n\mathcal{H}^{n}(\partial^{ *}E)}{(n+1)\mathcal{L}^{n+1}(E)}\quad\text{for $\mathcal{H}^{n}$ a.e. $x\in\partial^{*}E$.}\]
_Then \(E\) is \(\mathcal{L}^{n+1}\) almost equal to a finite union of disjoint open balls of the same radius \(\rho=\frac{(n+1)\mathcal{L}^{n+1}(E)}{\mathcal{H}^{n}(\partial^{*}E)}\)._
Proof.: The result can be deduced from Theorem 4.2 using the same argument as in [10, Corollary 5.16]. We leave the details to the reader.
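We remark that the constant in Corollary 4.3 is optimal: for a single ball \(E=B_{\rho}\) one has \(\boldsymbol{h}(V_{E},x)\bullet\nu_{E}(x)=n/\rho\) and

\[\frac{n\,\mathcal{H}^{n}(\partial B_{\rho})}{(n+1)\,\mathcal{L}^{n+1}(B_{\rho})}=\frac{n\,(n+1)\,\omega_{n+1}\rho^{n}}{(n+1)\,\omega_{n+1}\rho^{n+1}}=\frac{n}{\rho},\]

so the hypothesis holds with equality.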
## 5. Proof of Theorem 1.1
Let \(\Omega\subseteq\mathbb{R}^{n+1}\) be a set of finite perimeter such that the sequence \(\Omega_{\ell}\) converges in measure to \(\Omega\) and \(\mathcal{H}^{n}(\partial\Omega_{\ell})\to\mathcal{H}^{n}(\partial^{*}\Omega)\). We can assume that \(\mathcal{L}^{n+1}(\Omega)>0\), otherwise there is nothing to prove. It follows from [1, Propositions 1.80 and 3.13] that
\[\mathcal{H}^{n}\operatorname{\llcorner}\partial\Omega_{\ell}\to\mathcal{H}^{ n}\operatorname{\llcorner}\partial^{*}\Omega\quad\text{and}\quad\nu_{\Omega_{ \ell}}\mathcal{H}^{n}\operatorname{\llcorner}\partial\Omega_{\ell}\to\nu_{ \Omega}\mathcal{H}^{n}\operatorname{\llcorner}\partial^{*}\Omega \tag{5.1}\]
weakly as Radon measures. Employing Reshetnyak theorem [1, Theorem 2.39] and recalling (2.1) we conclude that
\[\delta V_{\Omega_{\ell}}(g)\to\delta V_{\Omega}(g)\quad\text{for all $g\in C_{c}^{\infty}(\mathbb{R}^{n+1},\mathbb{R}^{n+1})$.}\]
We estimate
\[|\delta V_{\Omega_{\ell}}(g)|\leq\int_{\partial\Omega_{\ell}}(H_{ \Omega_{\ell},1})^{+}|g|\,d\mathcal{H}^{n}+\int_{\partial\Omega_{\ell}}(H_{ \Omega_{\ell},1})^{-}|g|\,d\mathcal{H}^{n}\] \[\leq M\,\int_{\{0\leq H_{\Omega_{\ell},1}\leq M\}}|g|\,d\mathcal{H }^{n}+\int_{\{H_{\Omega_{\ell},1}>M\}}H_{\Omega_{\ell},1}|g|\,d\mathcal{H}^{n }+\int_{\partial\Omega_{\ell}}(H_{\Omega_{\ell},1})^{-}|g|\,d\mathcal{H}^{n}\] \[\leq M\,\int_{\partial\Omega_{\ell}}|g|\,d\mathcal{H}^{n}+\bigg{(} \int_{\partial\Omega_{\ell}}|H_{\Omega_{\ell},1}|^{n}\,d\mathcal{H}^{n}\bigg{)} ^{\frac{1}{n}}\bigg{(}\int_{\{H_{\Omega_{\ell},1}>M\}}|g|^{\frac{n}{n-1}}\,d \mathcal{H}^{n}\bigg{)}^{\frac{n-1}{n}}\] \[+\int_{\partial\Omega_{\ell}}(H_{\Omega_{\ell},1})^{-}|g|\,d \mathcal{H}^{n}.\]
Since the sequence has finite total curvature we have that
\[\sup_{\ell\geq 1}\int_{\partial\Omega_{\ell}}|H_{\Omega_{\ell},1}|^{n}\,d \mathcal{H}^{n}<\infty,\]
and the second integral on the right-hand side converges to zero as \(\ell\to\infty\) thanks to (1.3). Noting that the third integral on the right-hand side also converges to zero, since the sequence is asymptotically \((k-1)\)-mean convex, we infer from (5.1) that
\[|\delta V_{\Omega}(g)|\leq M\int_{\partial^{*}\Omega}|g|\,d\mathcal{H}^{n}\quad \text{for }g\in C_{c}^{\infty}(\mathbb{R}^{n+1},\mathbb{R}^{n+1}).\]
Hence \(V_{\Omega}\) is a varifold of bounded mean curvature. Let \(C\subseteq\mathbb{R}^{n+1}\) be the closed set given by Lemma 4.1 such that \(\mathcal{L}^{n+1}\big{(}C\triangle\Omega\big{)}=0\). In applying the conclusion (4.3) of Lemma 4.1 it is useful to notice the trivial inclusion \(\operatorname{nor}(C)\subseteq\operatorname{nor}(\partial C)\). Additionally, it follows from (3.2) that \(\sup_{\ell\geq 1}\boldsymbol{M}(N_{\Omega_{\ell}})<\infty\), and we deduce from the Federer-Fleming compactness theorem (cf. [12]) that, up to subsequences,
\[N_{\Omega_{\ell}}\to T\quad\text{weakly in the sense of currents},\]
with \(T\) being an \(n\)-dimensional integer-rectifiable current compactly supported in \(\mathbb{R}^{n+1}\times\mathbb{S}^{n}\). Since \(\partial N_{\Omega_{\ell}}=0\) and \(N_{\Omega_{\ell}}\,\llcorner\,\alpha_{\mathbb{R}^{n+1}}=0\), we readily infer that \(T\) is a Legendrian cycle of \(\mathbb{R}^{n+1}\).
We have just seen that two different kinds of limits are at play here: the varifolds \(V_{\Omega_{\ell}}\) converging to \(V_{C}\) and the normal cycles \(N_{\Omega_{\ell}}\) converging as currents to \(T\). To prove the theorem we need to understand how these two limits are related. We start by proving the following two statements:
\[\mathcal{H}^{n}\big{(}\operatorname{nor}(C)\setminus W_{T}^{(n)}\big{)}=0 \tag{5.2}\]
\[H_{T,1}(x,u)=\boldsymbol{h}(V_{C},x)\bullet u\qquad\text{for }\mathcal{H}^{n} \text{ a.e. }(x,u)\in\operatorname{nor}(C). \tag{5.3}\]
By Reshetnyak continuity theorem [1, Theorem 2.39], we have that
\[(N_{\Omega_{\ell}}\,\llcorner\,\varphi_{n})(\phi)=\int_{\partial\Omega_{\ell} }\phi(a,\nu_{\Omega_{\ell}}(a))\,d\mathcal{H}^{n}(a)\to\int_{\partial^{*}C} \phi(a,\nu_{C}(a))\,d\mathcal{H}^{n}(a)\]
for every \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). Since \(N_{\Omega_{\ell}}\,\llcorner\,\varphi_{n}\to T\,\llcorner\,\varphi_{n}\) weakly, it follows from Lemma 3.2 and Remark 3.3 that
\[\int_{\partial^{*}C}\phi(a,\nu_{C}(a))\,d\mathcal{H}^{n}(a) =(T\,\llcorner\,\varphi_{n})(\phi)\] \[=\int_{W_{T}^{(n)}}J_{n}^{W_{T}}\pi_{0}(a,u)i_{T}(a,u)\phi(a,u)\, d\mathcal{H}^{n}(a,u) \tag{5.4}\]
for \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). Let \(R=\partial^{*}C\cap\pi_{0}(\operatorname{nor}(C))\) and choose an \(\mathcal{H}^{n}\)-measurable set \(Q\subseteq\operatorname{nor}(C)\) with finite \(\mathcal{H}^{n}\)-measure. We notice from (4.2) and (4.3) of Lemma 4.1 that \(\mathcal{H}^{n}(Q\setminus(Q\,\llcorner\,R))=0\). The latter, in combination with the area formula, (4.4) of Lemma 4.1 and (5.4), allows us to conclude
\[\int_{Q}J_{n}^{Q}\pi_{0}(x,u)\,\phi(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\qquad=\int_{Q\,\llcorner\,R}J_{n}^{Q}\pi_{0}(x,u)\,\phi(x,u)\,d \mathcal{H}^{n}(x,u)\] \[\qquad=\int_{\pi_{0}(Q\,\llcorner\,R)}\phi(x,\nu_{C}(x))\,d \mathcal{H}^{n}(x)\] \[\qquad\leq\int_{W_{T}^{(n)}}J_{n}^{W_{T}}\pi_{0}(x,u)i_{T}(x,u) \phi(x,u)\,d\mathcal{H}^{n}(x,u)\]
for every \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\) with \(\phi\geq 0\). It follows that
\[\int_{Q\cap S}J_{n}^{Q}\pi_{0}(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\leq\int_{W_{T}^{(n)}\cap S}J_{n}^{W_{T}}\pi_{0}(x,u)i_{T}(x,u)\,d \mathcal{H}^{n}(x,u) \tag{5.5}\]
for every \(\mathcal{H}^{n}\)-measurable set \(S\subseteq\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\). We prove now that
\[J_{n}^{Q}\pi_{0}(x,u)>0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in Q$}. \tag{5.6}\]
If \(P=\{(x,u)\in Q:J_{n}^{Q}\pi_{0}(x,u)>0\}\), then
\[0=\int_{Q\setminus P}J_{n}^{Q}\pi_{0}(x,u)\,d\mathcal{H}^{n}(x,u)=\int_{\pi_{ 0}(Q\setminus P)}\mathcal{H}^{0}(\pi_{0}^{-1}(x)\cap(Q\setminus P))\,d \mathcal{H}^{n}(x),\]
whence we infer, since \(\mathcal{H}^{0}(\pi_{0}^{-1}(x)\cap(Q\setminus P))\geq 1\) for \(\mathcal{H}^{n}\) a.e. \(x\in\pi_{0}(Q\setminus P)\), that
\[\mathcal{H}^{n}(\pi_{0}(Q\setminus P))=0.\]
It follows from (4.3) of Lemma 4.1 that \(\mathcal{H}^{n}(Q\setminus P)=0\) and (5.6) is proved. Thanks to (5.6), we can choose \(S=(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\setminus W_{T}^{(n)}\) in (5.5) and deduce that
\[\mathcal{H}^{n}(Q\setminus W_{T}^{(n)})=0.\]
Since the latter holds for every \(Q\subseteq\mathrm{nor}(C)\) with finite \(\mathcal{H}^{n}\)-measure, we obtain (5.2). Notice that (5.2) implies that \(\mathcal{H}^{n}(\mathrm{nor}(C))<\infty\). It follows that
\[\mathrm{Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,\mathrm{nor}(C),(x,u))=\mathrm{ Tan}^{n}(\mathcal{H}^{n}\,\llcorner\,W_{T},(x,u))\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\mathrm{nor}(C)\) and, combining Theorem 3.1 with Lemma 2.1 we conclude that
\[\kappa_{C,i}(x,u)=\kappa_{T,i}(x,u)\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\mathrm{nor}(C)$} \tag{5.7}\]
for \(1\leq i\leq n\). Now (5.3) follows from Lemma 4.1.
We notice that it follows from (3.3) that
\[(N_{\Omega_{\ell}}\,\llcorner\,\varphi_{n-k})(\phi)=\lambda(N_{\Omega_{\ell}} \,\llcorner\,\varphi_{n})(\phi)+\int_{\partial\Omega_{\ell}}(H_{\Omega_{\ell},k}(x)-\lambda)\,\phi(x,\nu_{\Omega_{\ell}}(x))\,d\mathcal{H}^{n}(x)\]
for \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). Since the second integral on the right-hand side goes to \(0\) as \(\ell\to\infty\) by the hypothesis (1.2), we obtain
\[T\,\llcorner\,\varphi_{n-k}=\lambda(T\,\llcorner\,\varphi_{n}).\]
Hence it follows from Lemma 3.2 that
\[\int_{W_{T}^{(n)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,(H_{T,k}(x,u)- \lambda)\,d\mathcal{H}^{n}(x,u)\]
\[=\int_{W_{T}\setminus W_{T}^{(n)}}\phi(x,u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u )\,H_{T,k}(x,u)\,d\mathcal{H}^{n}(x,u)\]
for every \(\phi\in\mathcal{D}^{0}(\mathbb{R}^{n+1}\times\mathbb{R}^{n+1})\). Since \(i_{T}(x,u)\mathcal{J}_{T}(x,u)>0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\), we infer
\[H_{T,k}(x,u)=0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}\setminus W_{T}^{(n)}$} \tag{5.8}\]
and
\[H_{T,k}(x,u)=\lambda\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}^{(n)}$}. \tag{5.9}\]
Employing Lemma 3.4 we obtain
\[(n-k+1)\int i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k-1}(x,u)\,d \mathcal{H}^{n}(x,u) \tag{5.10}\] \[\quad=k\int(x\bullet u)\,i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k }(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\quad=k\lambda\int_{W_{T}^{(n)}}(x\bullet u)\,i_{T}(x,u)\, \mathcal{J}_{T}(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\quad=k\lambda(n+1)\,\mathcal{L}^{n+1}(C)\]
and we conclude that (notice that \(0<\mathcal{L}^{n+1}(C)<\infty\))
\[H_{T,k}(x,u)=\lambda=\frac{(n-k+1)T(\varphi_{n-k+1})}{k(n+1)\,\mathcal{L}^{n+ 1}(C)} \tag{5.11}\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}^{(n)}\). Since, by Lemma 3.3, \(H_{T,k-1}(x,u)\geq 0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\), we conclude from (5.10) that \(\lambda\geq 0\).
We now claim that \(\lambda>0\), and we prove it by contradiction. Assume therefore that \(\lambda=0\); we prove by induction that
\[H_{T,j}(x,u)=0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}$} \tag{5.12}\]
for \(j=0,\ldots,k\). If \(j=k\) then (5.12) follows from (5.8) and (5.9). If \(i\in\{1,\ldots,k\}\) and \(H_{T,k-i+1}(x,u)=0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\), we infer from the Minkowski formula in Lemma 3.4 that
\[0= (k-i+1)\int(x\bullet u)i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k- i+1}(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\quad=(n-k+i)\int i_{T}(x,u)\,\mathcal{J}_{T}(x,u)\,H_{T,k-i}(x,u )\,d\mathcal{H}^{n}(x,u).\]
Since \(H_{T,k-i}(x,u)\geq 0\) (by Lemma 3.3) and \(i_{T}(x,u)\mathcal{J}_{T}(x,u)>0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\), we conclude that
\[H_{T,k-i}(x,u)=0\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in W_{T}$.}\]
Hence (5.12) is proved for all \(j\in\{0,\ldots,k\}\). It follows that \(H_{T,0}(x,u)=0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\) and \(\mathcal{L}^{n+1}(\Omega)=0\) by formula (3.5) of Lemma 3.4. This is a contradiction, and consequently we conclude that \(\lambda\) is positive.
Since \(H_{T,i}(x,u)\geq 0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\) and for every \(1\leq i\leq k\), we can apply [10, Lemma 2.2] to obtain
\[\frac{H_{T,1}(x,u)}{\binom{n}{1}}\geq\left(\frac{H_{T,2}(x,u)}{\binom{n}{2}}\right)^{\frac{1}{2}}\geq\ldots\geq\left(\frac{H_{T,k}(x,u)}{\binom{n}{k}}\right)^{\frac{1}{k}}\geq\left(\frac{\lambda}{\binom{n}{k}}\right)^{\frac{1}{k}} \tag{5.13}\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}^{(n)}\). Noting by (5.2) and Remark 3.3 that
\[\mathcal{J}_{T}(x,u)=J_{n}^{W_{T}}\pi_{0}(x,u)=J_{n}^{\text{nor}(C)}\pi_{0}(x, u)\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\text{nor}(C)$}\]
and \(H_{T,k-1}(x,u)\geq 0\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in W_{T}\) by Lemma 3.3, we employ Lemma 3.4, (5.13), (5.2) and area formula to conclude
\[(n-k+1)\int\mathcal{J}_{T}(x,u)\,i_{T}(x,u)\,H_{T,k-1}(x,u)\,d \mathcal{H}^{n}(x,u) \tag{5.14}\] \[\qquad\geq(n-k+1)\int_{W_{T}^{(n)}}\mathcal{J}_{T}(x,u)\,i_{T}(x,u)\,H_{T,k-1}(x,u)\,d\mathcal{H}^{n}(x,u)\] \[\qquad\geq(n-k+1)\,\lambda^{\frac{k-1}{k}}\begin{pmatrix}n\\ k\end{pmatrix}^{\frac{1-k}{k}}\begin{pmatrix}n\\ k-1\end{pmatrix}\,\int_{W_{T}^{(n)}}\mathcal{J}_{T}(x,u)\,i_{T}(x,u)\,d \mathcal{H}^{n}(x,u)\] \[\qquad\geq(n-k+1)\,\lambda^{\frac{k-1}{k}}\begin{pmatrix}n\\ k\end{pmatrix}^{\frac{1-k}{k}}\begin{pmatrix}n\\ k-1\end{pmatrix}\,\int_{\text{nor}(C)}J_{n}^{\text{nor}(C)}\pi_{0}(x,u)\,i_{T} (x,u)\,d\mathcal{H}^{n}(x,u)\] \[\qquad\geq(n-k+1)\,\lambda^{\frac{k-1}{k}}\begin{pmatrix}n\\ k\end{pmatrix}^{\frac{1-k}{k}}\begin{pmatrix}n\\ k-1\end{pmatrix}\mathcal{H}^{n}(\partial^{*}C).\]
Combining (5.10) and (5.14) we conclude
\[\left(\frac{\lambda}{\binom{n}{k}}\right)^{\frac{1}{k}}\geq\frac{\mathcal{H} ^{n}(\partial^{*}C)}{(n+1)\mathcal{L}^{n+1}(C)},\]
whence we infer in combination with (5.13) and (5.3) that
\[\boldsymbol{h}(V_{C},x)\bullet u\geq\frac{n\mathcal{H}^{n}(\partial^{*}C)}{( n+1)\mathcal{L}^{n+1}(C)}\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\text{nor}(C)$}. \tag{5.15}\]
We conclude from Corollary 4.3 that \(C\) is the union of finitely many closed balls of the same radius \(\rho=\frac{(n+1)\mathcal{L}^{n+1}(C)}{\mathcal{H}^{n}(\partial^{*}C)}\) with disjoint interiors.
We now use Lemma 2.2 with \(\Sigma\) replaced by a sphere of radius \(\rho\): noting that \(\operatorname{nor}(C)\subseteq\operatorname{nor}(\partial C)\), this lemma, in combination with (4.3) of Lemma 4.1, allows us to conclude that
\[\kappa_{C,1}(x,u)=\ldots=\kappa_{C,n}(x,u)=\frac{1}{\rho}\quad\text{for $\mathcal{H}^{n}$ a.e. $(x,u)\in\text{nor}(C)$}.\]
Noting that \(\kappa_{T,i}(x,u)=\kappa_{C,i}(x,u)\) for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\text{nor}(C)\) and for \(i=1,\ldots,n\), as proved in (5.7), we conclude from (5.9) that
\[\lambda=H_{T,k}(x,u)=H_{C,k}(x,u)=\binom{n}{k}\rho^{-k}\]
for \(\mathcal{H}^{n}\) a.e. \((x,u)\in\text{nor}(C)\). Since \(\mathcal{H}^{n}(\text{nor}(C))>0\) we finally conclude that \(\lambda=\binom{n}{k}\rho^{-k}\).
## Appendix A Soap bubbles with positive reach
In this appendix we first generalize [14, Theorem A], removing the restriction \(\lambda\in\mathbb{R}\setminus\{0\}\). Notice that in [14, Theorem A] a stronger hypothesis is at play than in [14, Theorem 6.15], and the author does not know whether the restriction \(\lambda\in\mathbb{R}\setminus\{0\}\) can also be removed in [14, Theorem 6.15]. We use the notation and the terminology of [14], for which we refer to [14, Section 2.2].
**Theorem A.1**.: _Let \(\phi\) be a uniformly convex \(C^{2}\)-norm, \(k\in\{1,\ldots,n\}\) and let \(C\subseteq\mathbb{R}^{n+1}\) be a set of positive reach with positive and finite volume. Assume that_
\[\Theta^{\phi}_{n-i}(C,\cdot)\text{ is a non-negative measure for }i=1,\ldots,k-1\] (A.1)
_and_
\[\Theta^{\phi}_{n-k}(C,\cdot)=\lambda\Theta^{\phi}_{n}(C,\cdot)\quad\text{for some }\lambda\in\mathbb{R}.\] (A.2)
_Then \(C\) is a finite union of mutually disjoint rescaled and translated Wulff shapes of radius \(\rho=\frac{(n+1)\mathcal{L}^{n+1}(C)}{\mathcal{P}^{\phi}(C)}\) and \(\lambda=\binom{n}{k}\frac{\rho^{-k}}{k+1}\)._
Proof.: The main point is to prove that \(\lambda\neq 0\); the conclusion can then be deduced from [14, Theorem A]. The proof follows closely the idea used in the proof of Theorem 1.1.
We notice from [14, Lemma 6.14] that
\[\boldsymbol{H}^{\phi}_{C,k}(a,\eta)=0\quad\text{for }\mathcal{H}^{n}\text{ a.e. }(a,\eta)\in N^{\phi}(C)\setminus\widetilde{N}^{\phi}_{n}(C)\] (A.3)
and
\[\boldsymbol{H}^{\phi}_{C,k}(a,\eta)=(k+1)\lambda\quad\text{for }\mathcal{H}^{n}\text{ a.e. }(a,\eta)\in\widetilde{N}^{\phi}_{n}(C).\] (A.4)
Assume by contradiction that \(\lambda=0\). Now we prove by induction that
\[\boldsymbol{H}^{\phi}_{C,j}(a,\eta)=0\quad\text{for }\mathcal{H}^{n}\text{ a.e. }(a,\eta)\in N^{\phi}(C)\] (A.5)
for \(j=0,\ldots,k\). If \(j=k\) then (A.5) follows from (A.3) and (A.4). If \(i\in\{1,\ldots,k\}\) and \(\boldsymbol{H}^{\phi}_{C,k-i+1}(a,\eta)=0\) for \(\mathcal{H}^{n}\) a.e. \((a,\eta)\in N^{\phi}(C)\), we infer from the Minkowski formula in [14, Theorem 6.8]
\[0 =(k-i+1)\int_{N^{\phi}(C)}(a\bullet\boldsymbol{n}^{\phi}(\eta))\,J^{\phi}_{C}(a,\eta)\,\boldsymbol{H}^{\phi}_{C,k-i+1}(a,\eta)\,d\mathcal{H}^{n}(a,\eta)\] \[\qquad=(n-k+i)\int_{N^{\phi}(C)}\phi(\boldsymbol{n}^{\phi}(\eta))\,J^{\phi}_{C}(a,\eta)\,\boldsymbol{H}^{\phi}_{C,k-i}(a,\eta)\,d\mathcal{H}^{n}(a,\eta).\]
Since it follows from (A.1) and [14, Definition 6.2] that
\[\boldsymbol{H}^{\phi}_{C,k-i}(a,\eta)\geq 0\quad\text{and}\quad\phi(\boldsymbol{n}^{\phi}(\eta))J^{\phi}_{C}(a,\eta)>0\quad\text{for }\mathcal{H}^{n}\text{ a.e. }(a,\eta)\in N^{\phi}(C),\]
we conclude that
\[\boldsymbol{H}^{\phi}_{C,k-i}(a,\eta)=0\quad\text{for }\mathcal{H}^{n}\text{ a.e. }(a,\eta)\in N^{\phi}(C).\]
Hence (A.5) is proved for all \(j\in\{0,\ldots,k\}\). It follows that \(\boldsymbol{H}^{\phi}_{C,0}(a,\eta)=0\) for \(\mathcal{H}^{n}\) a.e. \((a,\eta)\in N^{\phi}(C)\), and \(\mathcal{L}^{n+1}(C)=0\) by the second formula in [14, Theorem 6.8]. This is a contradiction, and consequently \(\lambda\neq 0\), so that [14, Theorem A] applies.
Combining Theorem A.1 with some standard facts on sets of positive reach, we can prove the following uniqueness result for \(C^{2}\)-boundaries with \(k\)-th mean curvature functions converging in \(L^{1}\) to a constant. A completely analogous statement can be obtained in the anisotropic geometry induced by a uniformly convex \(C^{2}\)-norm. We choose to state Theorem A.2 in the Euclidean setting to facilitate the comparison with Theorem 1.1.
**Theorem A.2**.: _Let \(k\in\{1,\ldots,n\}\), let \((\Omega_{\ell})_{\ell\geq 1}\) be a sequence of compactly supported and asymptotically \((k-1)\)-mean convex sets, and let \(C_{\ell}\) be the closure of \(\Omega_{\ell}\). Suppose there exists \(\lambda\in\mathbb{R}\) such that_
\[\lim_{\ell\to\infty}\int_{\partial\Omega_{\ell}}\left|H_{\Omega_{\ell},k}- \lambda\right|d\mathcal{H}^{n}=0\] (A.6)
_and there exists \(\epsilon>0\) so that \(\operatorname{reach}(C_{\ell})\geq\epsilon\) for all \(\ell\geq 1\)._
_If \(C\subseteq\mathbb{R}^{n+1}\) is an accumulation point of \(C_{\ell}\) with respect to Hausdorff convergence and \(C\) has positive volume, then \(C\) is a closed ball._
Proof.: It follows from [10, Lemma 6.12] that \(\operatorname{reach}(C)\geq\epsilon\). Moreover, the \(j\)-th curvature measures \(\Theta_{j}(C_{\ell},\cdot)\) associated with \(C_{\ell}\) converge weakly to \(\Theta_{j}(C,\cdot)\) for every \(j\in\{0,\ldots,n\}\). Recalling that \(H_{\Omega_{\ell},m}\) is the \(m\)-th mean curvature function of \(\partial\Omega_{\ell}\) with respect to the outward-pointing normal direction, and \(H_{\Omega_{\ell},0}\equiv 1\), we notice from [10, Lemma 6.4] that
\[(n-j+1)\Theta_{j}(C_{\ell},B)=\int_{\partial\Omega_{\ell}}\mathbf{1}_{B}(a,\nu_{\Omega_{\ell}}(a))\,H_{\Omega_{\ell},n-j}(a)\,d\mathcal{H}^{n}(a)\]
for every Borel set \(B\subseteq\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\) and \(j=0,\ldots,n\). We consider the negative and positive parts of \(\Theta_{j}(C_{\ell},\cdot)\),
\[(n-j+1)\Theta_{j}^{\pm}(C_{\ell},\cdot)=\int_{\partial\Omega_{\ell}}\mathbf{1}_{(\cdot)}(a,\nu_{\Omega_{\ell}}(a))\,H_{\Omega_{\ell},n-j}(a)^{\pm}\,d\mathcal{H}^{n}(a)\quad\text{for $j=1,\ldots,n$},\]
which are non-negative Radon measures. Noting that \(\Theta_{n-i}^{-}(C_{\ell},\cdot)\) converges weakly to \(0\) for \(i\in\{1,\ldots,k-1\}\), we conclude that \(\Theta_{n-i}^{+}(C_{\ell},\cdot)=\Theta_{n-i}(C_{\ell},\cdot)+\Theta_{n-i}^{-}(C_{\ell},\cdot)\) converges weakly to \(\Theta_{n-i}(C,\cdot)\). Hence \(\Theta_{n-i}(C,\cdot)\) is a non-negative Radon measure for \(i\in\{1,\ldots,k-1\}\). Moreover, noting that
\[(k+1)\Theta_{n-k}(C_{\ell},B)=\int_{\partial\Omega_{\ell}}\mathbf{1}_{B}(a, \nu_{\Omega_{\ell}}(a))\,(H_{\Omega_{\ell},k}-\lambda)\,d\mathcal{H}^{n}(a)+ \lambda\Theta_{n}(C_{\ell},B)\]
for every Borel set \(B\subseteq\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\), we conclude from (A.6) that \((k+1)\Theta_{n-k}(C,\cdot)=\lambda\Theta_{n}(C,\cdot)\); in particular, (A.2) holds with \(\lambda/(k+1)\) in place of \(\lambda\). Now the conclusion follows from Theorem A.1.
|
2308.01040 | Inaudible Adversarial Perturbation: Manipulating the Recognition of User
Speech in Real Time | Automatic speech recognition (ASR) systems have been shown to be vulnerable
to adversarial examples (AEs). Recent success all assumes that users will not
notice or disrupt the attack process despite the existence of music/noise-like
sounds and spontaneous responses from voice assistants. Nonetheless, in
practical user-present scenarios, user awareness may nullify existing attack
attempts that launch unexpected sounds or ASR usage. In this paper, we seek to
bridge the gap in existing research and extend the attack to user-present
scenarios. We propose VRIFLE, an inaudible adversarial perturbation (IAP)
attack via ultrasound delivery that can manipulate ASRs as a user speaks. The
inherent differences between audible sounds and ultrasounds make IAP delivery
face unprecedented challenges such as distortion, noise, and instability. In
this regard, we design a novel ultrasonic transformation model to enhance the
crafted perturbation to be physically effective and even survive long-distance
delivery. We further enable VRIFLE's robustness by adopting a series of
augmentation on user and real-world variations during the generation process.
In this way, VRIFLE features an effective real-time manipulation of the ASR
output from different distances and under any speech of users, with an
alter-and-mute strategy that suppresses the impact of user disruption. Our
extensive experiments in both digital and physical worlds verify VRIFLE's
effectiveness under various configurations, robustness against six kinds of
defenses, and universality in a targeted manner. We also show that VRIFLE can
be delivered with a portable attack device and even everyday-life loudspeakers. | Xinfeng Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu | 2023-08-02T09:32:17Z | http://arxiv.org/abs/2308.01040v3 | # Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time
###### Abstract
Automatic speech recognition (ASR) systems have been shown to be vulnerable to adversarial examples (AEs). Recent successes all assume that users will not notice or disrupt the attack process despite the existence of music/noise-like sounds and spontaneous responses from voice assistants. Nonetheless, in practical user-present scenarios, user awareness may nullify existing attack attempts that launch unexpected sounds or ASR usage. In this paper, we seek to bridge the gap in existing research and extend the attack to user-present scenarios. We propose Vrifle, an inaudible adversarial perturbation (IAP) attack via ultrasound delivery that can manipulate ASRs as a user speaks. The inherent differences between audible sounds and ultrasounds make IAP delivery face unprecedented challenges such as distortion, noise, and instability. In this regard, we design a novel ultrasonic transformation model to enhance the crafted perturbation to be physically effective and even survive long-distance delivery. We further enable Vrifle's robustness by adopting a series of augmentations over user and real-world variations during the generation process. In this way, Vrifle features an effective real-time manipulation of the ASR output from different distances and under any speech of users, with an _alter-and-mute_ strategy that suppresses the impact of user disruption. Our extensive experiments in both digital and physical worlds verify Vrifle's effectiveness under various configurations, robustness against six kinds of defenses, and universality in a targeted manner. We also show that Vrifle can be delivered with a portable attack device and even everyday-life loudspeakers.
+
Footnote †: Chen Yan and Xiaoyu Ji are the corresponding authors
Network and Distributed System Security (NDSS) Symposium 2024
26 February - 1 March 2024, San Diego, CA, USA
ISBN 1-891562-93-2
[https://dx.doi.org/10.14722/ndss.2024.23030](https://dx.doi.org/10.14722/ndss.2024.23030)
www.ndss-symposium.org
## I Introduction
Automatic speech recognition (ASR) enables computers to transcribe human speech and is essential in a wide range of voice applications such as voice assistants (VAs) and audio transcription APIs [1, 2]. Prior studies have shown that ASR models are vulnerable to adversarial examples (AEs) that sound benign to humans but are recognized incorrectly by models. As stealthiness is a basic requirement for AEs, existing works largely focus on reducing the audibility of AEs so that they might not cause human suspicion when being heard [3, 4]. In addition, the class of inaudible attacks [5, 6] avoids being perceived by human ears using high-frequency ultrasound/laser. However, few of them have considered attacks in user-present scenarios, where users may notice unexpected events of the ASR service and can mitigate the attack's consequence. For instance, though AEs and inaudible attacks may not sound suspicious, a voice assistant will always provide feedback (e.g., vocal prompt or LED blinking) after receiving voice commands. Alert users may still notice the false wake-up or abnormal feedback caused by an attack and speak remedy commands to correct the mistake, limiting the attack's impact in real life.
In this paper, we aim to propose Vrifle--an _inaudible adversarial perturbation_ (IAP) attack that can extend to this scenario. Its basic idea is to inject IAPs while the user speaks to the ASR service and alter the recognition result in real time, as shown in Fig. 1. Since the voice assistant itself is responding to user commands (e.g., LED blinking), tampering with the user's speech is less noticeable at this time. But such a user-present scenario also imposes higher requirements on attack stealthiness because users are more sensitive to environmental sounds while using ASR services. Moreover, given that adversaries have no prior knowledge of the user's speech content and timing, this critical scenario necessitates that Vrifle exhibit a high level of universality to guarantee the adversary-desired intent in any context. Therefore, we envision Vrifle as a truly inaudible and robust framework for real-time IAP delivery, which can also address variable user factors,
Fig. 1: When a user uses the ASR service, the adversary injects inaudible adversarial perturbations crafted based on the ultrasonic transformation model (UTM) into a receiver. The mixed signal of the user command (blue) and demodulated perturbations (red & yellow) can fool the ASR model into the adversary-desired intent.
such as speech content, vocalization time, speech volume, and environmental conditions, while remaining physically effective even at long distances or when using portable/everyday-life devices. Overall, materializing Vrifle to attain the above goals is challenging in three aspects.
* _How to achieve adversarial perturbations that are **universal** while **completely** inaudible to **user** auditory?_
The trade-off between universality and stealthiness has been a long-standing challenge in audio AE attacks. Almost all previous works have prioritized stealthiness by introducing imperceptibility constraints during optimization, such as \(\epsilon\) and the L2-norm [7], or by adjusting the audio form, e.g., designing it as short pulses [8]. Nonetheless, this greatly compromises the universality of adversarial perturbations, and they are still audible when users are nearby. We seek to implement an inaudible adversarial perturbation beyond the human auditory range (20 Hz\(\sim\)20 kHz) in an ultrasound-based attack manner [5], which can make microphones receive our IAP by exploiting their inherent nonlinearity vulnerability. As such, IAPs are no longer limited by stealthiness constraints, holding a vast optimization space with more feasible solutions. Unlike the audible-band perturbations devised to be short to mitigate _user auditory_, our IAPs enable the adversary to significantly increase their length, which further expands the optimization scope and facilitates highly universal attacks.
* _How to alter the recognition of user speech in real time despite the presence of **user disruption**?_
Although we have bypassed _user auditory_, realizing such an attack against ASR in real time faces a few more challenges. _User disruption_ cannot be ignored in this scenario, which includes: (1) the user's speech can disrupt the intent of IAPs when both audio signals are superimposed; while universal AEs [8, 9] are shown to resist this case, our preliminary investigation validates that direct ultrasound-based attacks will fail due to such interference. (2) User commands can be much longer (e.g., 5s) than 0.5s audible-band short perturbations that affect only a few input frames, so the exceeding user instructions will impact the entire ASR transcription. (3) Users may notice the malicious behavior being executed and therefore block the attack by issuing remedy commands. In addition, there are user-induced factors that make _user disruption_ more complex and can compromise IAPs' effectiveness, including the unpredictable content and timing of user speech, as well as the influence of the user's environment and speaking habits on speech reverberation and loudness.
To address these issues, we augment the optimization process of IAPs by using multiple speech clips from a public corpus, introducing randomness within a preset time range, and considering various user speech loudness levels and reverberations. Thereby, Vrifle can be applied in a content-agnostic, synchronization-aided, user-factor-robust manner. Moreover, we overcome _user disruption_ by materializing both silence and universal perturbations in a targeted manner, ensuring that arbitrary utterance lengths cannot impact the adversary-desired intent, without requiring any prior knowledge. Based on the above design, adversaries can mount two stealthier attack strategies, the _No-feedback Attack_ and the _Man-in-the-middle Attack_, described in the threat model.
* _How to guarantee inaudible adversarial perturbations are physically effective after ultrasonic delivery?_
Though inaudible attacks have demonstrated voice command injection using ultrasound and laser [6], _it is unknown whether fine-grained IAPs can be delivered via such signals_, as the ultrasound channel is reported to be lossy and distorted [15]. Thus, maintaining the effectiveness of IAPs through the series of modulation, transmission, and demodulation processes in the physical world is not trivial with prior AE techniques [16]. Ultrasound is intrinsically distinct from audible sound in its highly directional propagation and varying sound field. Additionally, the nonlinear distortion, anomalous noises, and hardware-induced instability that are unique to ultrasound make existing acoustic channel modeling methods inapplicable.
To overcome the challenge, we make the first attempt to establish an ultrasonic transformation model, which consists of tackling variable ultrasound-induced anomalous noises, obtaining the ultrasound frequency response (UFR), and enabling location-variable attacks. Based on this transformation, we can precisely estimate Vrifle's pattern after ultrasonic delivery during its optimization, thereby making it physically effective and able to survive long-distance delivery. Moreover, to enable more covert IAP attacks with portable devices and off-the-shelf loudspeakers, we implement a narrow-bandwidth upper-sideband modulation (USB-AM) mechanism to ensure the attack range and inaudibility of Vrifle with simplified devices.
Tab. I compares Vrifle with several existing works. We conduct extensive experiments in both the digital and physical worlds to evaluate Vrifle's effectiveness under various configurations (e.g., extending the attack range to 10 m) and its robustness against six kinds of defenses. A single silence IAP mutes up to 27,531 unseen user utterances and, likewise, a single universal IAP alters 18,956, proving Vrifle's universality. Our design also expands the attack methodology to more covert portable attack devices and everyday-life loudspeakers, enabling Vrifle delivery in a stealthier form.
| **Method** | **Constraint**‡ | **Auditory**§ | **Disruption**⋆ | **Dist.**† |
| --- | --- | --- | --- | --- |
| Carlini [10] | \(-\) | Noise | ✗ | 1.5m |
| Abdullah [11] | \(-\) | Noise | ✗ | 0.3m |
| CW [7] | L2-norm, \(\epsilon\) | Speech | ✗ | ✗ |
| Schönherr [3] | Psyc. | Speech | ✗ | ✗ |
| Commun. [12] | \(\epsilon\) | Song | ✗ | 1.5m |
| Qin [4] | Psyc. | Speech | ✗ | \(-\) |
| Meta-Qual [13] | L2-norm, \(\epsilon\) | Song | ✗ | 4m |
| FakeBob [14] | \(\epsilon\) | Speech | ✗ | 2m |
| AdvPulse [9] | L2-norm, \(\epsilon\) | Ambient | ✗ | 2.7m |
| SpecPatch [8] | L2-norm | Pulse | ✗ | 1m |
| **Ours** | **None** | **Inaudible** | \(\blacklozenge\) | **10m** |

(i) ‡: the constraints used to guarantee imperceptibility during optimization. "\(-\)" means the method only considers incomprehensibility to humans; \(\epsilon\) means limiting the absolute magnitude of perturbations with a constant \(\epsilon\); \(L_{2}\)-norm means adding an \(L_{2}\)-norm term to the objective function; "Psyc." means psychoacoustic hiding; "None" means no stealthiness constraints. (ii) §: the objective _user auditory_ of AEs; "Ambient" means ambient sounds. (iii) ⋆: \(\blacklozenge\) fully tackles _user disruption_; ✗ fails under _user disruption_. (iv) †: attack distance; ✗ means the attack is not physically available; "\(-\)" means not reported.

TABLE I: Comparison with existing works
Our contribution can be summarized as follows:
* To the best of our knowledge, Vrifle is the first universal inaudible adversarial perturbation attack that extends to scenarios where users are using ASR services, revealing a new attack surface against ASR models. Vrifle is completely inaudible, holds a vast optimization space, and enables long-range attacks (10 m).
* We make the first attempt to establish an ultrasonic transformation model, which overcomes the unique challenges in the ultrasound channel and precisely characterizes it, enabling our fine-grained IAP delivery to be physically effective.
* We conduct extensive experiments under various configurations in the digital and physical worlds to validate the effectiveness, robustness, and universality of Vrifle, and we further validate the attack using portable/everyday-life devices.
## II Background
### _Automatic Speech Recognition_
Automatic speech recognition (ASR) systems, e.g., voice assistants, receive and recognize speech commands and then execute them according to certain rules. Hidden Markov models (HMM) [17] and dynamic time warping (DTW) [18] are two traditional statistical techniques for performing speech recognition. With the development of deep learning, end-to-end neural ASR models have gone mainstream, such as RNN-T [19] and DeepSpeech2 [20]. A typical end-to-end ASR system pipeline includes four main components: (1) _Spectrum generator_: converts raw audio into spectrum features, e.g., Filter Bank (Fbank), Mel-Frequency Cepstral Coefficients (MFCC), etc. (2) _Neural acoustic model_: takes the spectrum as input and outputs a matrix of probabilities over linguistic units (e.g., phoneme, syllable, or word) over time. For instance, English ASR is widely modeled with 29 basic units (also known as tokens), including the characters a–z, space, apostrophe, and the blank symbol \(\phi\). (3) _Decoder_: generates possible sentences from the probability matrix, optionally coupled with an n-gram language model to impose word-level constraints. The Connectionist Temporal Classification (CTC) module is a typical decoder that sums over all possible alignments that reduce to the same token sequence, whereby "o\(\phi\)k\(\phi\)a\(\phi\)y" and "o\(\phi\)k\(\phi\)a\(\phi\)yy" are regarded as the same "okay". (4) _Punctuation and capitalization model_: formats the generated text for easier human consumption.
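To make the CTC reduction rule concrete, the sketch below (our own illustration with an assumed 29-token set, not code from any cited system) greedily decodes a probability matrix by collapsing repeated tokens and then dropping the blank symbol:

```python
# Minimal sketch of CTC greedy decoding: collapse repeats, then drop blanks.
# The token set and the probability matrix are illustrative assumptions.
import numpy as np

TOKENS = list("abcdefghijklmnopqrstuvwxyz' ") + ["<blank>"]  # 29 units

def ctc_greedy_decode(prob_matrix: np.ndarray) -> str:
    """prob_matrix: (time_steps, len(TOKENS)) per-frame token probabilities."""
    best_path = prob_matrix.argmax(axis=1)            # most likely token per frame
    decoded, prev = [], None
    for idx in best_path:
        if idx != prev and TOKENS[idx] != "<blank>":  # collapse repeats, skip blanks
            decoded.append(TOKENS[idx])
        prev = idx
    return "".join(decoded)

# Example: per-frame best tokens "o o <blank> k a a <blank> y" decode to "okay".
```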
### _Audio Adversarial Examples_
Adversarial examples (AEs) [3, 7, 12, 13] are specially crafted inputs designed to confuse a neural network, resulting in the misclassification of a given input. In the audio domain, by adding a crafted perturbation \(\delta\) under some constraint \(\epsilon\) throughout the original benign audio \(x\), the ASR model can be fooled into transcribing the perturbed speech as the targeted text \(y_{t}\), e.g., "take the picture". To craft an adversarial example, an adversary may leverage the optimization function:
\[\begin{split}& minimize~{}\mathcal{L}(f(x+\delta),y_{t})+\alpha\cdot\|\delta\|_{p}\\ & s.t.~{}\delta\in[-\epsilon,\epsilon]^{n},(\epsilon{<}0.01) \end{split} \tag{1}\]
where \(f(\cdot)\) denotes the ASR model that takes an input waveform and outputs the probability matrix. \(\mathcal{L}(f(\cdot),y_{t})\) is the CTC loss function denoting the distance between the model output of the adversarial example and the target. \(\|\cdot\|_{p}\) means the \(L_{p}\) norm. \(\alpha\) is a penalty weight that limits the \(L_{p}\) norm. \(\epsilon\) denotes the upper bound of the perturbation. Recently, the concept of universal adversarial perturbations has been proposed, making AEs valid regardless of the user command. To make AEs more concealed, crafting approaches have been extended to psychoacoustic hiding [3, 4] and shorter pulses [9]. However, existing efforts cannot fundamentally avoid being perceived by human ears.
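For illustration, a minimal PyTorch sketch of the optimization in Eq. 1 follows; the `model` interface (waveform in, CTC log-probabilities out) and the encoded `target` are assumptions for exposition rather than the exact setup of any cited attack:

```python
import torch
import torch.nn.functional as F

# Assumptions: `model` maps a waveform (1, n) to log-probs (T, 1, C) for CTC;
# `target` is the encoded token sequence of the desired transcription y_t.
def craft_ae(model, x, target, eps=0.01, alpha=0.05, steps=1000, lr=1e-3):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        log_probs = model((x + delta).clamp(-1, 1))
        input_len = torch.tensor([log_probs.shape[0]])
        target_len = torch.tensor([target.shape[0]])
        loss = F.ctc_loss(log_probs, target.unsqueeze(0), input_len, target_len) \
               + alpha * delta.norm(p=2)        # L_p penalty term from Eq. 1
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # projection onto [-eps, eps]^n
    return delta.detach()
```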
### _Ultrasound-based Attacks_
Inaudible attacks modulate the audio baseband on high-frequency carriers in the inaudible band of human ears (\(>\)20 kHz) and exploit microphones' nonlinear vulnerability, so that ASRs can receive the malicious audio while humans cannot perceive it. Recently, inaudible attacks have been extended from ultrasonic carriers [5, 21] to various forms, such as solid conduction [22], laser [6], capacitor [23], power line [24], etc., forming a class of highly threatening covert attacks. We take the representative ultrasound-based attack [5] to illustrate the principle of inaudible attacks, shown in Fig. 2. First, the original audio is double-sideband (DSB) modulated onto an ultrasound carrier via amplitude modulation (AM). Second, the DSB-AM audio is emitted from the ultrasonic transducer and propagates over the air. Third, after the microphone receives the signal, the audio modulated on the high-frequency carrier is recovered into the audible band _before the low-pass filters and ADC_ due to nonlinear effects of the microphone's diaphragm and amplifier. Thus, though the ultrasound carrier is finally filtered out, the demodulated audio survives and reaches the ASR. The nonlinear demodulation is formulated as follows:
\[S_{out}(t)=\sum_{i=1}^{\infty}k_{i}s^{i}(t)=k_{1}s(t)+k_{2}s^{2}(t)+k_{3}s^{3} (t)+... \tag{2}\]
where \(s(t)\) and \(S_{out}(t)\) indicate the input AM signal and the amplifier's output, respectively. The even-order terms, i.e., \(k_{2}\), \(k_{4}\), are the key to recovering the original audio [25]. Notably, such an ultrasound channel is lossy, as the recovered audio samples differ from the original ones. Our investigation demonstrates that the channel is also challenging to model (§III-D).
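To see Eq. 2 at work numerically, the following self-contained numpy sketch (all parameters are illustrative) shows that the even-order term \(k_{2}s^{2}(t)\) alone already recreates the baseband in the audible band, where it survives the subsequent low-pass filtering:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                                # sampling rate high enough for a 25 kHz carrier
t = np.arange(0, 0.05, 1 / fs)
m = 0.5 * np.sin(2 * np.pi * 400 * t)       # audible baseband tone (400 Hz)
carrier = np.cos(2 * np.pi * 25_000 * t)
s = (1 + m) * carrier                       # DSB-AM: A[1 + m(t)] cos(w_c t)

s_out = 0.5 * s + 0.3 * s**2                # k1*s + k2*s^2 (Eq. 2, truncated)

b, a = butter(4, 8_000 / (fs / 2))          # mimic the microphone's low-pass filter
recovered = filtfilt(b, a, s_out)           # the 400 Hz tone reappears in the audible band
```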
Fig. 2: Diagram of inaudible attacks (carrier: blue, baseband: green).
### _Threat Model_
**Attack Scenarios:** We consider attacks in user-present scenarios where the user may notice unintended events of ASR services. Such scenarios involve two entities:
**Victim**: The victim user is alert to any strange sounds (e.g., noise, music, pulses) within the human auditory range. The user can speak an arbitrary command to the smart speaker. Once the user notices an attack, he/she can speak a remedy command to the smart speaker.
**Adversary**: The adversary prepares IAPs for specific intentions offline and alters user commands in real time by delivering IAPs (1) at a distance from the victim with an ultrasonic transmitter through a window, (2) physically close with a handheld portable device, or (3) with a preset off-the-shelf loudspeaker. The adversary's goals include providing wrong information to intelligent voice customer services, compromising VAs to execute malicious commands or putting them into a denial-of-service mode, etc. The adversary can attack more covertly with two strategies: _1) No-feedback Attack_: prevent the user from hearing the VA's vocal prompt via "Mute volume and turn off the WiFi". _2) Man-in-the-middle Attack_: once the user's intent is satisfied, the attack may be much less suspicious; i.e., while delivering the adversarial perturbation to alter the user's command, the adversary can record the user's command and then replay it by traditional ultrasound-based attack means.
**Attacker Capability:** Distinct from previous works [3, 4, 7, 12, 13, 14, 26] that require the user's speech samples in advance to craft adversarial perturbations, we assume the adversary has no knowledge of what the user will speak while performing attacks. In line with the widely adopted settings in prior works [4, 7, 8, 12, 13], we assume the attacker has prior knowledge of the target ASR model for obtaining gradient information during optimization. The adversary has access to the user's recording device, e.g., by borrowing a smartphone of the same brand, based on which the adversary can model the ultrasonic transformation and then create the IAP in advance. We assume the adversary has the flexibility to deploy the hidden ultrasonic transmitter nearby or at a distance, with the recording device in its line of sight. Additionally, the adversary can also utilize stealthy portable devices and off-the-shelf loudspeakers in everyday-life scenarios to deliver Vrifle.
## III Preliminary Investigation
### _Failure of Traditional Inaudible Attacks_
Given the purpose of avoiding alerting users, directly injecting malicious commands into ASR systems using laser- [6] or ultrasound-based [5, 21] inaudible attacks is intuitive. Although laser-based attacks can reach a 100 m attack range, we choose ultrasound instead of laser for three practical reasons: (1) the laser spot on the microphone is visible and will alert users immediately; (2) the laser-based attack requires strict line-of-sight alignment; and (3) the severe channel distortion of laser-delivered attacks may nullify fine-grained adversarial perturbations.
To examine whether traditional ultrasound-based attacks can manipulate ASRs into recognizing the modulated malicious commands while users are speaking, we need to ensure that the ultrasonic carrier frequency is optimal. Therefore, we first employ an ultrasonic Vifa [27] to launch a wide-range carrier sweep from 20\(\sim\)40 kHz. By analyzing the signal-to-noise ratio (SNR) of demodulated basebands, we determine the optimal frequency of four recording devices, i.e., iPhone14 pro: 24.7 kHz, Reno5 pro: 27.7 kHz, Pixel 3aXL: 25.6 kHz, and MIX2s: 25.1 kHz, respectively. This result is consistent with DolphinAttack [5], which reveals that most devices' optimal attack carrier frequency is around 25 kHz (22.6\(\sim\)27.9 kHz). In this way, we set the default carrier frequency to 25 kHz, whose advantages are two-fold: (1) due to lower airborne attenuation, 25 kHz benefits longer-range attacks compared with high-frequency carriers (e.g., 40 kHz); (2) 25 kHz is one of the most typical parameters for commercial ultrasonic transducers, which cost as low as $0.14 per unit [28], making the attack cost-effective.
Although the optimal attack frequency is determined, traditional ultrasound-based attacks still fail due to _user disruption_. Specifically, we select 10 text-to-speech commands listed in Tab. IX (e.g., "turn on airplane mode") as the basebands. Four smartphones 50 cm away serve as recording devices that recover the AM signal into audible-band speech. For benign command samples, we randomly select 20 utterances from the popular fluent speech commands dataset [29], played via a loudspeaker and recorded by the same smartphones. We also perform simultaneous emission of both signals, so they are superimposed on each other. For each recording device, we collected \(10\times 20=200\) mixed samples and calculated each sample's character error rate (CER) through the Azure speech-to-text API [1]. As shown in Fig. 3, the direct ultrasound-based attacks and the benign audio alone are both well recognized by ASR models, with average CERs of 10.8% and 6.88%, respectively. Nevertheless, once the attack emission and the user's voice coincide, the attack performance (i.e., with the 10 malicious commands as the target transcriptions) severely degrades to an average CER of up to 96.01%, even though we boosted its power2. We believe this is because, when ASRs process the mixed samples, every sampling point of the malicious signal sequence is affected by the human voice, making the acoustic features extracted by the ASR deviate from the adversary's anticipation.
Footnote 2: To facilitate ultrasound-based attacks, we set the volume up to 95 dB, and that of audible benign speech is 70\(\sim\)75 dB.
### _Ultrasonic Adversarial Perturbation Delivery_
We envision that the above failure can be addressed by leveraging the vulnerability of ASR models to craft universal adversarial perturbations. Notably, it is promising to deliver the perturbation in an ultrasound-based manner to eventually reach the goal, i.e., the adversary can alter any user command into a targeted one while remaining entirely inaudible to the victim. However, we find that well-trained perturbations that are effective in the digital domain all fail after being directly modulated and emitted by the ultrasound-based attack method (results are also given in §V-C1, _G2_).

Fig. 3: CER of four recording devices under three settings.
Since the ultrasonic channel is lossy and distorted, to obtain a perturbation that can still effectively tamper with user commands after the series of processing in ultrasound-based attacks and over-the-air delivery, i.e., the pipeline shown in Fig. 2, we need to precisely model the transformation from a perturbation in the digital domain to its physical version. However, generalizing an AE from the digital to the physical world is inherently difficult, as shown by substantial research in both the computer vision and audio communities [7, 8, 9, 13, 16, 30, 31]. In the audio domain, this issue refers to the fact that played-out speech samples are subject to signal distortion and environmental interference (i.e., reverberation, attenuation, and noises). Previous audible-band works [16, 32] have made efforts to simulate the physical world by adopting room impulse responses (RIR) during the AE optimization process to close the gap between the digital and physical worlds. However, no work has yet been proposed on modeling ultrasonic delivery. We are motivated to investigate the feasibility of applying audible-band modeling technologies to our unique ultrasonic case.
### _Attempts at Ultrasound Delivery Modeling_
In this subsection, we elaborate on two potentially feasible modeling methods for our case.
#### III-C1 Modeled by room impulse response
Inspired by the success of audible-band AEs [16, 32] drawing on the ability of RIR, which describes the reverberation and attenuation during audible sound propagation, we envisage that a similar RIR idea can generalize to characterize the ultrasound transformation process. Specifically, prior works exploit existing RIR databases [33, 34] by convolving random RIR clips with digital adversarial signals in the optimization process, simulating the audio recorded by the receiver in various scenes, e.g., a large concert hall or a narrow corridor. Therefore, we modulate an ideal impulse signal as the baseband on an ultrasound carrier and receive it on the recording device. With the obtained "ultra-RIR", we perform convolution with the original audio, whose output is expected to represent the actually demodulated inaudible attack result. As a comparison, we also conduct similar operations via a JBL loudspeaker for audible audio. Fig. 4 shows that the estimated audible audio with RIR is very close to the actual playback. However, for the inaudible case, there is a significant gap between the recorded attack audio and the estimate using the ultra-RIR. We believe the reason for this mismatch is that the RIR rationale relies on the linear time-invariant (LTI) system prerequisite, whereas the transformation here is nonlinear because ultrasound-based attacks leverage microphones' nonlinearity.
#### III-C2 Modeled by neural network
Since RIR is originally designed for LTI systems, neural networks with excellent nonlinear fitting capability should work well, given their success in various tasks, e.g., speech denoising [35] and image printing distortion [30]. Considering that an adversary expects a practical transformation model with minimal effort (i.e., dataset requirements) while guaranteeing its generality, we implement a multi-layer perceptron (MLP) with only 60k parameters, using 120 seconds of aligned original and ultrasound-based attack audio pairs. We find that the MLP can achieve a generalized capability of mapping digital-to-physical-world spectrums between unseen pairs, but with position-dependent constraints. As shown in Fig. 5, a slight position displacement (3 cm) leads to an apparent change (i.e., introducing anomalous noises) in the recovered baseband, which can cause the trained network to fail to estimate the recorded audio at other positions. Overall, although the MLP builds a functional mapping for the nonlinear ultrasound transformation at a fixed relative position, it is too restricted due to the nature of ultrasound (cf. §III-D). Besides, adopting distance \(d\) and angle \(\theta\) as conditional network parameters might help, but collecting data for every position is prohibitively laborious.
### _Challenges in Ultrasonic Delivery Modeling_
The above attempts' failure drives us to look into the root cause of why modeling the ultrasonic transformation is challenging. Ultrasound is intrinsically distinct from audible sound due to its much higher frequency, and ultrasonic delivery leverages microphones' nonlinearity. We summarize the following characteristics:
* _Ultrasound-induced Noise:_ The ultrasound carrier continuously forces the diaphragm to vibrate, resulting in anomalous noise in the recorded attack audio, as in Fig. 4 & 5. Combined with the highly variable ultrasound field, a slight displacement (e.g., 3 cm) can lead to different audio patterns.
Fig. 4: Comparison between the audible RIR and ultra-RIR in estimating the digital-to-physical transformation.
Fig. 5: Illustration of displacement-induced changes in recorded audio.
Fig. 6: 2-D sound field simulation comparing the audible wave with ultrasound.
* _Nonlinear Distortion_: Eq. 2 indicates the I/O relationship of nonlinear demodulation, in which the factors \(k_{i}\) are unknown and vary with recording devices [15].
* _Varying Soundfield_: As shown in Fig. 6, the ultrasound field (25 kHz) is significantly more directional and changes more dramatically than audible waves (1 kHz) due to its much shorter wavelength.
* _Hardware-induced Instability_: Ultrasound-based attacks rely on a series of signal processing steps and sophisticated devices, thus bringing instability due to hardware imperfection.
## IV Design of Vrifle
### _Overview_
**Design Goal.** To manipulate ASRs while they are being used, adversaries shall create universal IAPs. However, they face the following challenges in obtaining and delivering such perturbations:
_Ultrasound Complexity (C1)._ Modeling ultrasonic delivery is unprecedented compared with audible-band RIR modeling, because ultrasound fundamentally differs from audible sound, as listed in §III-D: (i) ultrasound-induced anomalous noises, (ii) nonlinear distortion, (iii) varying sound field, and (iv) hardware-induced instability.
_User-ASR Connection (C2)._ ASR systems always respond to the user after receiving a command. Adversaries need to suppress the impact of _user disruption_, i.e., break down the user-ASR connection with IAPs that can silence the user's excessively long speech and remedy commands.
_User Variation (C3)._ Since adversaries cannot know the exact content, timing, or length of the user's speech, naively mixed speech signals will lead to undesirable ASR transcriptions. The tailored IAP needs to remain universal when facing arbitrary user commands and superimposition time points.
_Physical Robustness (C4)._ The adversary also faces several factors that are variable in physical attacks, such as user loudness, hardware instability, and the user's environment (i.e., different reverberations). We also extend the modulation method to reduce unexpected sound leakage.
To achieve the adversary's goal while addressing the aforementioned challenges, we propose Vrifle with a unique technical design, which includes: (1) tackling ultrasound complexity to deliver physically effective IAPs, thereby addressing _user auditory_ (cf. §IV-B); (2) overcoming _user disruption_ to achieve real-time manipulation of ASR (cf. §IV-C, §IV-D); (3) boosting attack stealthiness and practicality (cf. §IV-E). The optimization workflow of Vrifle is exhibited in Fig. 7.
**Problem Formalization.** Unlike audible-band AE attacks subject to stealthiness constraints, we achieve inaudible perturbation delivery via ultrasound modulation. Thus, we avoid the narrow constraints in Eq. 1, e.g., \(\epsilon<0.01\), and the IAP's optimization space can reach the maximum upper bound: \(\delta\in[-1,1]^{n}\). We believe a broad optimization space possesses more feasible solutions, facilitating a universal attack. Our core objective is to fool ASRs into recognizing the superimposed speech of user voice and perturbation, \(x+\delta\), as the adversary-desired transcription \(y_{t}\). This basic idea can be optimized via the following formulation:
\[\begin{split}& minimize\ \mathcal{L}(f(x+\delta),y_{t})\\ & s.t.\ \delta\in[-1,1]^{n}\ and\ x+\delta\in[-1,1]^{n}\end{split} \tag{3}\]
### _Ultrasonic Transformation Modeling_
As shown in Fig. 8, our modeling exploits (1) the additive property of the baseband audio \(m\)'s nonlinear transformation \(H(f)m(f)\) and the ultrasound-induced anomalous noise \(n\), and then (2) yields an estimated audio \(\hat{m}=H(f)m(f)+n\) that is highly similar to the actually recorded audio \(\widetilde{m}\). In this subsection, we elaborate on our divide-and-conquer strategy for implementing the ultrasonic transformation modeling that overcomes problems (i)-(iii) of _ultrasound complexity (C1)_. Based on this, we can deliver physically effective IAPs via steps 1-3 in Fig. 7. We address problem (iv) in §IV-E.
#### IV-B1 Tackling Anomalous Noises
Ultrasound-based attacks modulate the baseband \(m\) as \(s(t)=A[1+m(t)]c(t)\), where, regardless of the energy of \(m\), the carrier signal \(c(t)=\cos\omega_{c}t\) is always emitted and forces the microphone diaphragm into vibration, producing abnormal noises [15]. Our experiment also demonstrates that although the recorded \(s\) varies with \(m\), the
Fig. 7: Workflow of Vrifle. Steps 1–3: the ultrasonic transformation precisely describes the perturbation changes during physical delivery. Step 4: the transformed perturbation is involved in the optimization for the silence and universal attack purposes. Steps 5–6: we boost the attack's physical-world robustness from multiple aspects.
anomalous noise pattern is almost entirely determined by the carrier. The nature of the ultrasound field further results in noise variation with different injection angles \(\theta\) and distances \(d\), showing irregular patterns. Due to such variation, neural networks fail to learn a stable mapping of the digital-to-physical domain. We denote the noise as \(n(\theta,d)=f_{n}(\theta,d,s)\), where \(f_{n}\) is the mapping from the ultrasound signal \(s\) to the recorded abnormal noise \(n\) at different positions. In practice, an attacker can sample the variable anomalous noises by simply emitting the ultrasonic carrier. We collect a lightweight noise dataset using 25 kHz ultrasound (without modulation) at 1 m and varying angles, forming a set \(U_{n}\) of 25 pieces of 10-second noise.
#### IV-B2 Ultrasonic Frequency Response
Recalling the reasons for the LTI-system-based RIR's failure in §III-C1 to model the unprecedented ultrasonic delivery, besides anomalous noises, the inability to describe the nonlinear demodulation process is also a key factor. The adversary aims to achieve robust and adaptive attacks with minimal effort, i.e., building an efficient transformation that can well estimate the demodulated pattern of a given digital perturbation after ultrasonic delivery in Fig. 8 (red box). Fig. 8 also depicts the recorded audio derived after inaudible signal injection, whose energy is clearly concentrated in the low-frequency band compared to the original audio [21]. Although nonlinearity exists, we are driven to obtain an ultrasonic frequency response (UFR) that characterizes the inaudible acoustic energy conversion at different frequencies.
We first overcome the ultrasound-induced noises that hinder us from obtaining an accurate frequency response by adopting the sine sweep technique [36], which ignores components uncorrelated with the sweep signal during processing. We use it to generate a fast 10s sweep ranging from 50\(\sim\)7800 Hz, which is carefully chosen to diminish hardware imperfection, and record it on the receivers, as shown in Fig. 8(1). Thus, we can obtain the UFR \(H(f)\) by deconvolution (\(*^{-1}\)). Notably, as shown in Fig. 8(2), this shields the effects of noises and focuses on the frequency response measurement, which decouples the linear and nonlinear terms. We sum these terms up in Fig. 8(3), forming a holistic frequency-domain UFR of the received perturbation \(\delta\) as \(\overline{\Delta}(f)=H(f)\Delta(f)\), where \(\Delta(f)=\mathcal{F}(\delta(t))\) and \(\mathcal{F}\) denotes the Fourier transform.
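A minimal sketch of this measurement step, under our reading of the sine-sweep idea in [36]: the UFR is estimated by a regularized frequency-domain deconvolution of the recorded sweep against the emitted one (the function name and regularizer are our own assumptions):

```python
import numpy as np

def measure_ufr(played_sweep, recorded_sweep, eps=1e-8):
    """Estimate H(f) = Recorded(f) / Played(f) via regularized deconvolution."""
    n = len(recorded_sweep)
    P = np.fft.rfft(played_sweep, n)
    R = np.fft.rfft(recorded_sweep, n)
    H = R * np.conj(P) / (np.abs(P) ** 2 + eps)  # Wiener-style division
    h = np.fft.irfft(H, n)                       # time-domain kernel h(t)
    return H, h

# During optimization, the estimated received perturbation is then
# delta_hat = np.convolve(h, delta)[:len(delta)] + noise_clip
```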
#### IV-B3 Enabling Location-Variable Attacks
The uneven ultrasound field makes it difficult for the MLP-based method in §III-C2 to estimate the transformation of attacks launched from arbitrary positions. For the efficient UFR, we believe that combining it with the ultrasound \(s(d,t)\) propagation process [37] empowers more adaptive attacks:
\[H(f,d)=H(f)\cdot e^{-a_{0}\omega_{c}{}^{n}d},\ n\in[1,2] \tag{4}\]
where \(a_{0}\) is a medium-dependent attenuation parameter and \(\omega_{c}\) is the carrier frequency. Moreover, the energy variation caused by different injection angles is hard to model under such a changing sound field. We overcome this issue by conducting sine sweeps at different angles \(\theta\), similar to §IV-B1, obtaining 25 pieces of 10-second sweep clips. Consequently, collecting a complete set of UFRs and anomalous noises for subsequent optimization requires approximately 8.3 minutes. Overall, with a pair of UFR \(H_{\theta}(f,d)\) and noise clip \(n(\theta,d)\) from the same location, we can well estimate the recording of a digital perturbation. However, to obtain a location-variable perturbation, we shall modify the expression of Eq. 3 and find the perturbation via robust training:
\[\underset{\delta}{argmin}\underset{h\thicksim U_{H},n\sim U_{n}}{\mathbb{E}}[\mathcal{L}(f(x+h_{\theta}(d)*\delta+n),y_{t})] \tag{5}\]
where we use the time-domain expression \(h_{\theta}(d)*\delta\) to indicate the transformed perturbation's waveform, as \(H_{\theta}(f,d)\Delta(f)=\mathcal{F}[h_{\theta}(t,d)*\delta(t)]\) obeys the time convolution theorem. We randomly select UFR \(H_{\theta}(f,d)\) and noise \(n\) pairs from \(U_{H}\) and \(U_{n}\) during the optimization process to mimic delivering the inaudible adversarial perturbation at different locations. As we take full advantage of ultrasound's inaudibility, the experimental results also validate that the optimization space is large enough to craft a robust perturbation effective under varying UFRs and noises.
### _Silence Perturbation_
Given the failures of prior works under _user disruption_, specifically the challenge of excessively long instructions (case 2) and the potential counteraction through remedy commands (case 3), we believe that the solution lies in silencing the user instructions, i.e., breaking down the user-ASR connection when necessary. Based on our ultrasonic transformation modeling, adversaries can materialize physically effective silence perturbations. These perturbations alter arbitrary user instructions to blank (" ") in a targeted manner, effectively rendering the ASR system unresponsive to the user instructions. We observe that implementing silence perturbations offers several advantages: (1) when altering long user commands with a short target intent, such as "start recording" (case 2), the silence perturbation can be linked alongside the universal perturbation in an _alter-and-mute_ fashion (cf. §IV-D) so that the ASR will output the adversary-desired transcription; (2) it guarantees that users cannot meddle with running malicious operations by issuing remedy commands (case 3), even if they notice the presence of attacks; (3) it can place ASR services in a denial-of-service condition, preventing users from using them normally.
Fig. 9(b) depicts the diagram of a robust silence perturbation \(\xi\), which is expected to superimpose over any benign content like Fig. 9(a) and lead the final ASR transcription to the blank \(y_{b}\) (" "). The length of \(\xi\) is empirically set to 5s based on our experiments, balancing the duration of common speech instructions against the optimization overhead. For excessively long user utterances, we repeat the perturbation during the generation process.
Fig. 8: Ultrasonic transformation modeling. **1st row:** Procedure to obtain the ultrasonic frequency response (UFR). **2nd row:** High similarity between the estimated audio and actually recorded audio (the red box) proves its effectiveness.
To craft such a content-agnostic \(\xi\), we improve the penalty-based expectation function to find the silence perturbation over a group of common voice commands \(U_{x}\), as shown in Fig. 7 (step 4).
\[\underset{\xi}{argmin}\underset{h_{\theta}\sim U_{H},n\sim U_{n},x\sim U_{x}}{ \mathbb{E}}[\mathcal{L}(f(\mathcal{S}_{x}+h_{\theta}(d)*\xi+n),y_{b})] \tag{6}\]
where \(\mathcal{S}_{(\cdot)}\) means randomly shifting the user utterance \(x\) to introduce randomness into the superimposition time within a preset range \(T\) (100 ms). \(U_{x}\) is elaborated in the experimental setup (§V-A2). This is more practical than the case where an AE and user speech must be perfectly aligned. The details of the content-agnostic and synchronization-aided designs are given in §IV-D.
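One implementation subtlety worth noting: the blank target \(y_{b}\) corresponds to an empty token sequence, which CTC handles natively, since the loss of an empty target reduces to the negative log-probability of the all-blank alignment. A hedged PyTorch illustration (shapes and the blank index are illustrative):

```python
import torch
import torch.nn.functional as F

log_probs = torch.randn(200, 1, 29).log_softmax(-1)  # (T, batch, tokens); blank index 0
targets = torch.zeros(0, dtype=torch.long)           # empty target: transcription " "
loss = F.ctc_loss(log_probs, targets,
                  torch.tensor([200]),               # input length
                  torch.tensor([0]))                 # zero-length targets are allowed
# Minimizing this loss pushes every frame toward the blank symbol.
```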
### _Universal Perturbation_
Different from the proof-of-concept universal AEs against a CNN-based speech command classification model presented in [9], which exploit the temporal insensitivity of CNNs, the RNN-based models widely deployed in commercial ASR services are more difficult to attack. This difficulty arises because end-to-end ASRs, such as DeepSpeech [20], employ connectionist temporal classification (CTC), which calculates the loss between a continuous speech feature sequence and a target transcription, making it context-dependent. Consequently, when introducing subtle perturbations in different contexts, it is often difficult to ensure that the CTC losses of multiple mixed signals simultaneously converge to the desired target.
#### IV-D1 Content-Agnostic
We believe the reasons why previous audible-band AEs struggle to tamper with a large amount of speech content are two-fold: _user auditory_ and _user disruption_. To avoid being noticed by users, prior adversarial perturbations are limited by imperceptibility constraints and signal forms (e.g., short length and subtle amplitude). Consequently, these perturbations are fragile and easily defended against. In contrast, our IAP delivery is completely inaudible via ultrasound modulation; thus, the perturbation's length and amplitude are unconstrained, maximizing its optimization space. We make full use of these advantages to generate a universal perturbation that can alter substantial short utterances into the adversary-desired intent, e.g., a 1.2s \(\delta\) tailored for "open the door".
However, for excessively long speech or possible subsequent remedy commands in user-present scenarios (cases (2) and (3) of _user disruption_), the adversary resorts to the silence perturbation in §IV-C, which cooperates well with the universal perturbation in an _alter-and-mute_ manner. As depicted in Fig. 9(c) and (d), when the universal perturbation \(\delta\) is combined with a well-trained silence perturbation \(\hat{\xi}\), the former alters the user commands and the latter mutes the subsequent user commands or remedies. As illustrated in Fig. 7 (step 4), we determine the optimal \(\delta\) by optimizing the following expectation function:
\[\underset{\delta}{argmin}\underset{h_{\theta}\sim U_{H},n\sim U_{n},x\sim U_ {x}}{\mathbb{E}}[\mathcal{L}(f(x+h_{\theta}(d)*\overline{\delta:\hat{\xi}}+n ),y_{t})] \tag{7}\]
where \(\overline{\delta:\hat{\xi}}\) denotes the universal perturbation \(\delta\) followed by a crafted silence perturbation \(\hat{\xi}\). \(U_{x}\) is the same subset used for generating silence perturbations, whose details are given in §V-A2.
#### IV-D2 Synchronization-Aided
Although the universal perturbation can deceive the ASR into the adversary-desired intent under any victim speech, an adversary can hardly deliver attacks synchronously with the victim's vocalization. For attack practicality, we propose a VAD-based synchronization mechanism to achieve real-time manipulation, which avoids continuous AE broadcasting or assuming an adversary is always ready to attack. Specifically, we employ a microphone to record the user's voice. Once the user's speech is detected via voice activity detection (VAD), our program automatically triggers the emission of the prepared perturbation. Based on our experiments, the delay is impacted by three stages of our real-time pipeline: (1) from user vocalization to detection by the running VAD program (5\(\sim\)20 ms); (2) software-to-hardware IAP triggering (5\(\sim\)15 ms); (3) ultrasound propagation (0\(\sim\)30 ms). Due to the delay uncertainty, we incorporate time randomness over a range \(T\) into our optimization, whose upper bound is empirically set to 100 ms. For a direct reference, the average overall delay when attacking at 4 m is around 27 ms, far below the maximum tolerable delay (100 ms) preset during optimization. Notably, the recording of user speech can also be utilized to mount a more covert attack by inaudibly replaying user-desired commands, as in the "_Man-in-the-middle Attack_" stated in §II-D. By integrating the above-mentioned optimization objectives, we further craft the universal IAP through the expectation below:
\[\underset{\delta}{argmin}\underset{h_{\theta}\sim U_{H},n\sim U_{n},x\sim U_ {x}}{\mathbb{E}}[\mathcal{L}(f(x+\mathcal{S}_{h_{\theta}(d)*\overline{\delta :\hat{\xi}}+n}),y_{t})] \tag{8}\]
where \(\mathcal{S}_{(\cdot)}\) mimics that Vrifle can be superimposed on the victim's speech at random time points (Fig. 7, step 4) within the preset \(T\).
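Putting Eqs. 5-8 together, a condensed training-loop sketch is given below. It is our own paraphrase of the optimization described above, with hypothetical helper inputs (`speech_clips`, `ufr_noise_pairs`, `target_ids`, and a pretrained `silence` IAP) and a 16 kHz sampling-rate assumption; it is not the authors' released code:

```python
import random
import torch
import torch.nn.functional as F

def train_universal_iap(model, speech_clips, ufr_noise_pairs, target_ids,
                        silence, iap_len=19200, steps=800, shift_max=1600):
    """Optimize a universal IAP delta followed by a silence IAP (alter-and-mute)."""
    delta = torch.zeros(iap_len, requires_grad=True)     # 1.2 s at 16 kHz, eps = 1
    opt = torch.optim.Adam([delta], lr=1e-3)
    for _ in range(steps):
        x = random.choice(speech_clips)                  # content-agnostic: sample U_x
        h, noise = random.choice(ufr_noise_pairs)        # location-variable: U_H, U_n
        beta = float(torch.empty(1).uniform_(0.5, 1.5))  # user-loudness augmentation
        pert = torch.cat([delta.clamp(-1, 1), silence])  # delta : xi
        # h_theta(d) * pert, implemented as conv1d with a flipped kernel:
        recv = F.conv1d(pert.view(1, 1, -1), h.flip(0).view(1, 1, -1),
                        padding=h.numel() - 1).view(-1)[:pert.numel()]
        recv = recv + noise[:pert.numel()]               # assume noise clip long enough
        shift = random.randint(0, shift_max)             # sync tolerance S (<= 100 ms)
        mixed = recv.clone()
        n = min(x.numel(), mixed.numel() - shift)
        mixed[shift:shift + n] = mixed[shift:shift + n] + beta * x[:n]
        log_probs = model(mixed.clamp(-1, 1).unsqueeze(0))  # (T, 1, C) CTC log-probs
        loss = F.ctc_loss(log_probs, target_ids.unsqueeze(0),
                          torch.tensor([log_probs.shape[0]]),
                          torch.tensor([target_ids.numel()]))
        opt.zero_grad(); loss.backward(); opt.step()
    return delta.detach().clamp(-1, 1)
```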
### _Physical Robustness_
#### IV-E1 Loudness Adaptation and Hardware Instability
When conducting physical attacks, Vrifle is able to handle the challenges of the ultrasound channel based on our digital-to-physical transformation in §IV-B. However, the loudness of the victim's speech varies with context or emotion, and hardware instability still exists. These factors make it difficult for our inaudible perturbation to keep effectively altering the victim's voice if the mutual energy relationship between the two is inconsistent with that in the optimization process. As shown in Fig. 7 (step 5),
Fig. 9: Diagram of Vrifle attacking a benign speech (a) with the silence (b) and universal goals in an _alter-and-mute_ manner (c&d).
we introduce relative volume augmentation into the crafting process, which exploits a hyper-parameter \(\beta\) denoting the range of the user speech's volume, thereby bringing randomness to the mutual energy relationship between the user's voice and the perturbation.
#### IV-E2 Attack at Different Environments
Although ultrasound-based attacks inject directly into the microphones of recording devices and are thus reverberation-free, the audible-band human voice still undergoes multi-path reflections and ambient noises in different environments. To alter user commands regardless of the scene, we apply random RIR and noise clips from the Aachen Impulse Response (AIR) database [33] for user speech augmentation (Fig. 7, step 6), covering small, medium, and large rooms as well as corridors.
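A brief sketch of this augmentation step, assuming `rir_clips` and `noise_clips` hold random excerpts from the AIR database (the helper names and SNR choice are ours):

```python
import random
import numpy as np
from scipy.signal import fftconvolve

def augment_user_speech(x, rir_clips, noise_clips, snr_db=15):
    """Simulate the user's voice in a random room: reverberation + ambient noise."""
    rir = random.choice(rir_clips)
    reverbed = fftconvolve(x, rir)[:len(x)]          # multi-path reflections
    noise = random.choice(noise_clips)[:len(x)]
    # Scale the noise so that the result matches the desired SNR.
    gain = np.sqrt(np.mean(reverbed**2) / (np.mean(noise**2) * 10**(snr_db / 10)))
    return reverbed + gain * noise
```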
#### IV-E3 Single-Sideband Extension
Although Vrifle can achieve real-time manipulation of the ASR output very covertly using sophisticated devices (e.g., narrowband ultrasonic transducers and signal generators) at long distances through windows or doors, we aim to accomplish highly stealthy attacks even in close proximity to the victim by utilizing everyday-life loudspeakers or portable attack devices. However, simple amplifiers, sound cards, and off-the-shelf loudspeakers exhibit poor suppression of the intermodulation and harmonics of high-frequency DSB-AM signals; namely, they present increased nonlinearity, resulting in sound leakage (cf. §VII). To enable attacks with portable devices and loudspeakers (cf. §V-D), we adopt single-sideband amplitude modulation (SSB-AM), which removes one of the sidebands based on the Hilbert transform [38]. Compared to DSB-AM, SSB-AM has only half the bandwidth, rendering higher transmission efficiency. Importantly, it mitigates the intermodulation between different sideband frequencies, making the sound less prone to leakage than DSB-AM at the same energy level. Specifically, we employ upper-sideband modulation (USB-AM), formalized as \(S(t)=m\cos\omega_{c}t-\hat{m}\sin\omega_{c}t+\cos\omega_{c}t\), rather than lower-sideband modulation (LSB-AM), as the former exhibits better inaudibility in our experiments; more details are given in Appendix §A.
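For concreteness, the USB-AM formula above can be rendered in a few lines of numpy, using the Hilbert transform to obtain \(\hat{m}\) (the carrier, sampling rate, and baseband are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 192_000, 25_000
t = np.arange(0, 1.0, 1 / fs)
m = 0.5 * np.sin(2 * np.pi * 1_000 * t)       # placeholder baseband perturbation

m_hat = np.imag(hilbert(m))                   # Hilbert transform of m
usb_am = (m * np.cos(2 * np.pi * fc * t)
          - m_hat * np.sin(2 * np.pi * fc * t)
          + np.cos(2 * np.pi * fc * t))       # S(t) = m cos - m_hat sin + carrier
# Only the upper sideband (fc + f_m) plus the carrier is emitted, halving
# the bandwidth of DSB-AM and reducing intermodulation-induced leakage.
```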
Overall, the algorithm of Vrifle is described in Algorithm 1 (Appendix §D), where we demonstrate the optimization process of crafting Vrifle from scratch.
## V Evaluation
### _Experiment Setup_
#### V-A1 Overview
We implement Vrifle using PyTorch [39] on an Ubuntu 20.04 server with an Intel Xeon 6226R 2.90GHz CPU and an NVIDIA 3090 GPU. Based on our experiments, we empirically set the default configuration as \(\delta=1.2s\), \(\xi=5s\), \(\epsilon=1\), \(0.5\leq\beta\leq 1.5\), sync range \(T\)=100 ms, and \(maxEpoch\)=800. The Adam optimizer [40] is used to speed up convergence. To evaluate Vrifle's effectiveness in fooling ASR while users use it, we select the end-to-end DeepSpeech2 [20] as the target model and conduct experiments in both digital and physical scenarios.
#### V-A2 Dataset
We adopt the typical Fluent Speech Commands dataset [29] to examine the effectiveness of Vrifle; it includes 30,046 voice command samples. We randomly selected 896 samples from the 10-person validation set given in the dataset, with each speaker contributing around 90 utterances on average. These samples are used to craft our perturbations. The remaining 29,150 unseen samples are used to evaluate Vrifle under various settings.
#### V-A3 Hardware
We employ a signal generator (SIGLENT SDG6032X) [41] to modulate the created IAPs, a power amplifier (NF HSA4015) [42] to enable long-range delivery, and a custom ultrasound transducer array to emit the modulated IAPs. The recording devices to be tested include Google Pixel 3aXL, iPhone14 pro, MI Mix2s, OPPO Reno5 pro, and ReSpeaker Mic Array v2.0 [43], all of which were released in the last five years. Moreover, we evaluate attacks with a self-made portable device and a loudspeaker in §V-D.
#### V-A4 Metrics
(1) We use the success rate (SR) to indicate the percentage of attempts in which Vrifle successfully alters user commands and matches the target transcription. (2) We use the character error rate (CER), a representative metric in ASR tasks, to indicate the adversary's ability to tamper with user commands at the character level; a lower CER represents a more effective attack. (3) The signal-to-noise ratio (SNR) and \(L2\)-distortion are vital for audible-band AEs because of their imperceptibility requirements. SNR: the ratio of benign audio power to perturbation power. \(L2\): the sum of squared amplitude. AEs with a low SNR and high \(L2\) are more likely to be noticed, and vice versa.
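A minimal sketch of these three metrics, assuming waveforms are NumPy arrays and transcriptions are plain strings (the helper names are ours, not from the paper):

```python
import numpy as np

def snr_db(benign, perturbation):
    """SNR: ratio of benign audio power to perturbation power, in dB."""
    return 10 * np.log10(np.mean(benign ** 2) / (np.mean(perturbation ** 2) + 1e-12))

def l2_distortion(perturbation):
    """L2-distortion: sum of squared amplitude of the perturbation."""
    return float(np.sum(perturbation ** 2))

def cer(reference, hypothesis):
    """Character error rate via Levenshtein edit distance."""
    r, h = list(reference), list(hypothesis)
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=np.int32)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[len(r), len(h)] / max(len(r), 1)
```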
### _Digital Attack Performance_
As our attack focuses on real-world scenarios, where physical disturbances always exist, we incorporate the effects of physical conditions by employing our ultrasonic transformation modeling to guarantee that digital attack performance has physical significance.
#### V-B1 Impact of Optimization Space
Since the delivery of Vrifle is inaudible, it enjoys the unconstrained advantage of setting \(\epsilon\) up to \(1\) (i.e., the normalized audio's upper bound) for universal attacks. We further explore the attack capability under different \(\epsilon\) upper bounds for both universal and silence IAPs. We optimize silence perturbations with \(\epsilon=0.2,0.4,0.6,0.8,1.0\), respectively, aiming to tamper the user instructions into blank output. In addition, we obtain universal perturbations expected to alter user commands to "open the door" with the same settings. CTC loss convergence curves are shown in Fig. 10. We observe that the crafting process converges faster as \(\epsilon\) (i.e., the optimization space) increases in both tasks. After \(\epsilon\) reaches \(0.8\), the convergence rate approaches its maximum. We then estimate the physical delivery of both perturbations via an unseen transformation model (i.e., a pair of UFR and anomalous noise not involved in training). The transformed perturbations are further superimposed on every test voice-command sample. The results listed in Tab. II show that, in addition to faster convergence, a larger \(\epsilon\) significantly boosts the universality of Vrifle: it successfully alters 18,946 samples into "open the door" and silences 27,531 user commands into blank " ", which highlights Vrifle's highly universal capability.
#### V-B2 Comparison of Convergence Overhead and Audibility Cost for the Universality Goal
The unconstrained advantage of Vrifle (\(\epsilon=1\)) empowers its high universality. We further compare it with 3 classical audible-band AEs (i.e., CW [7], Qin [4], and SpecPatch [8]) regarding the cost of achieving the same universality goal for each method. We reproduce these works strictly following their instructions. We set _two goals_: creating a single perturbation that can alter 1) one or 2) five commands based on each method. Notably, for audible-band AEs, we employ RIRs for physical simulation to be consistent with our default setup. We specify the minimal upper bounds \(\epsilon\) of CW, Qin, and SpecPatch as \(0.03,0.05,0.05\), respectively, under which the three methods can maintain universality for 5 commands, i.e., finally converge to the target transcript "Open the door". We also examine the CTC loss convergence speed of the 4 methods. The normalized loss curves in Fig. 11 clearly show that Vrifle (in red) converges within the fewest iterations among the 4 methods; SpecPatch (in blue) converges slowest as it is devised to be short (0.5 s). Specifically, the overall duration to final convergence for altering 5 commands is--Vrifle: 1.63 min, CW: 6.52 min, Qin: 9.16 min, and SpecPatch: 35.38 min. Vrifle converges faster because (1) we reduce optimization complexity by picking only 5 random UFR/noise pairs rather than all ultrasonic channel data per iteration; and (2) Vrifle can quickly find feasible solutions due to its broad optimization space. In addition, Tab. III lists the SNRs and \(L2\)-distortion values of audible-band AEs under different universality goals. All SNRs of these AEs are low due to the compromise between physical robustness and imperceptibility, with the highest SNR being only 22 dB. Moreover, as the number of goals increases, audible-band AEs are bound to get louder and more easily heard.
#### V-B3 Different Target Commands
Given that adversaries may launch attacks for various purposes, they will craft different adversarial perturbations accordingly. In this experiment, we first train 10 universal perturbations corresponding to the typical malicious commands [44] listed in Appendix §C, Tab. IX, along with the silence perturbation. We then apply Vrifle to 7,200 benign commands to validate its effectiveness, amounting to 72,000 samples. We count the success rate when transcription outputs match the target commands exactly. In addition, we also compute CERs over all samples. We find no significant performance variation across target transcripts, where most targets yield a 100% SR and 0% CER (7 out of 10). The lowest SR is still as high as 92.82%, corresponding to "Mute volume and turn off the WiFi". Moreover, it is worth noting that the highest CER among these targets is still as low as 0.50%, suggesting Vrifle can tamper with user commands well at the character level. Due to page limitations, the details are listed in Appendix §C, Tab. IX.
### _Physical Attack Performance_
We perform extensive physical experiments to evaluate the practical performance of Vrifle under different conditions, i.e., with/without our modeling, distances, environments, recording devices, etc. In the physical experiments, we set the target intent as "open the door" and the attack distance to 4 m from the recording devices, with the injection angle pointing to their bottom microphones, as the default configuration unless otherwise specified. Except for the experiments on different scenes, the rest are conducted in a laboratory of approximately 13.6 m\(\times\)5.2 m with slight HVAC noise. We employ a custom ultrasonic transmitter for inaudible adversarial perturbation delivery. A loudspeaker plays the audible benign speech samples, and the ambient noise level is around 38 dB. We also deploy a VAD-based program, in conjunction with a microphone connected to a laptop, to trigger IAP delivery using the synchronization-aided design. This ensures real-time triggering when audible benign speech initiates. Our real-world attack scenario is shown in Appendix §B, Fig. 19.
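A minimal sketch of such an energy-based trigger, assuming the `sounddevice` library for audio capture; the frame size and threshold are illustrative values, not from the paper:

```python
import numpy as np
import sounddevice as sd  # assumed audio I/O library

FS, FRAME = 16_000, 1_024
ENERGY_THRESH = 1e-4  # illustrative; tuned to the ambient noise floor

def trigger_on_speech(emit_perturbation):
    """Block until a frame's energy exceeds the threshold, then fire the IAP."""
    with sd.InputStream(samplerate=FS, channels=1, blocksize=FRAME) as stream:
        while True:
            frame, _ = stream.read(FRAME)
            if np.mean(frame.astype(np.float64) ** 2) > ENERGY_THRESH:
                emit_perturbation()  # start synchronized IAP delivery
                break
```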
#### V-C1 Ablation Experiments w/ and w/o Transformation Modeling
To validate the effectiveness of our ultrasonic transformation modeling, we apply 3 strategies to craft IAPs. In addition, we use direct ultrasound-based attacks as the baseline group (_G1_). The first strategy is optimization without transformation, i.e., \(h_{\theta}(d)*\delta+n\) in Eq. 8 is degraded to a simple \(\delta\) during the crafting process (_G2_). The second strategy replaces the precise transformation with a low-pass filter that allows signal components below 3 kHz to pass (_G3_). The third strategy is crafting a perturbation with _our transformation_, i.e., Vrifle (_G4_). We carry out experiments with synchronization-aided emission of both benign audio and attacks. We select 40 benign utterances to be played via the loudspeaker, and finally collect 480 mixed samples (120 per group) by repeating the operation three times to minimize errors. Tab. IV lists each group's success rate and average CER, which clearly shows that our modeling describes the digital-to-physical transformation well during optimization. Approximating the transformation as a low-pass filter can also generate a physically usable perturbation with 21.67% SR and 19.39%
| **Upper Bound** (\(\epsilon\)) | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| --- | --- | --- | --- | --- | --- |
| **Silence Perturb.** | 1,591 | 8,095 | 17,064 | 24,832 | 27,531 |
| **Universal Perturb.** | 649 | 5,268 | 13,085 | 16,726 | 18,946 |

TABLE II: The number of successfully silenced/altered test speech samples under different \(\epsilon\) upper bounds
Fig. 11: CTC loss curves. Comparison of the convergence speed of Vrifle with 3 classical audible-band AEs. Dashed lines: training a perturbation that can simultaneously alter 5 voice commands; solid lines: altering 1 command, which thus converges faster.
Fig. 10: CTC loss curves of silence and universal perturbations during the optimization process under varying \(\epsilon\).
CER. The attack success rate drops to 0% in _G1_ and _G2_. However, the attack without modeling (_G2_) still outperforms the baseline (_G1_) from the CER perspective, as it leverages the model's vulnerability.
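For reference, a minimal sketch of the low-pass approximation used in strategy _G3_, assuming a 16 kHz working rate (the filter order and implementation are illustrative):

```python
from scipy.signal import butter, sosfilt

def lowpass_3k(x, fs=16_000, order=8):
    """Crude stand-in for the ultrasonic channel (strategy G3): keep only
    components below 3 kHz instead of the precise UFR/noise modeling."""
    sos = butter(order, 3_000, btype="low", fs=fs, output="sos")
    return sosfilt(sos, x)
```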
#### V-C2 Different Attack Distances
Attacks in the audible band are constrained by concealment, resulting in perturbations that cannot be delivered over longer ranges. By contrast, our attack delivery via ultrasound modulation can apply substantial power, which compensates for the natural attenuation of ultrasound. We adjust the amplifier gain so that the high-frequency beam's energy reaching the microphones is maintained, thus ensuring the effectiveness of Vrifle. Specifically, we conduct experiments with the ultrasonic transmitter placed 1 m\(\sim\)13 m away from the receiving device (at 1 m intervals), where the maximum power at 10 m\(\sim\)13 m is approximately 3.2 W. We randomly select 40 voice commands and play them at each location. We superimpose the perturbation on each benign command 3 times, collecting 1,560 samples in total (120 per distance), and feed them into the ASR model. We report the success rate and CER in Fig. 12, where Vrifle is very effective within 1 m\(\sim\)9 m, with SRs up to 100% and CERs down to 0%. The SR is 88.7% and the CER is still as low as 3.25% at 10 m. Besides, we observe the attack performance decreases at 11 m\(\sim\)13 m. We believe this is due to ultrasound attenuation, which makes the perturbation less significant to the ASR model. We also discuss this issue in §VII.
#### V-C3 Different Attack Angles
In this experiment, we keep the recording device's bottom microphone spatially within the ultrasound beam's coverage and set the attack distance to 2.5 m. We rotate the recording device from 0\({}^{\circ}\) to 180\({}^{\circ}\) at 15\({}^{\circ}\) intervals, where 90\({}^{\circ}\) means the ultrasound directly points to the bottom microphone. At each angle, we play 40 benign commands and emit the universal IAP, eventually collecting 520 mixed audio signals from 13 angles. As shown in Fig. 13, although ultrasound is highly directional, we find no significant difference among angles within 15\({}^{\circ}\)\(\sim\)150\({}^{\circ}\), all with a 100% success rate. As the deployed location of the bottom microphone varies across phones, the attack performance is not symmetrical with angle (i.e., 79% at 0\({}^{\circ}\) and 49% at 180\({}^{\circ}\)). Overall, as most voice-interface devices nowadays are equipped with omnidirectional microphones, Vrifle can be effective as long as the beam covers the bottom microphone.
#### V-C4 Different Scenes
To examine the effectiveness of Vrifle in different environments, our experiments include a small office (2.4 m\(\times\)2.6 m, 36 dB), a medium lounge (6.3 m\(\times\)3.8 m, 42 dB), a large laboratory (13 m\(\times\)5.2 m, 38 dB), and a narrow corridor (60 m\(\times\)2 m, 44 dB). In these scenes, the reverberation pattern of audible sound varies with space size. Our configuration consists of a transmitter-to-device distance of 4 m and a loudspeaker-to-device distance of 1 m, which mimics the standard user interaction distance, except for a distance of 2.5 m in the small office due to its limited size. We play 40 audible benign samples and superimpose Vrifle on them once, collecting 160 samples from the 4 spaces. As shown in Tab. V, we find no significant difference between these scenarios, as our design accounts for such physical variation.
#### V-C5 Different Ambient Noises
We perform ambient-noise-related experiments in our laboratory with noises from 4 typical scenes, i.e., cafeteria (people chatting), office (keyboard typing), lab (machine running), and outdoor (wind blowing), downloaded from freesound [45]. We evaluate noise levels from 50\(\sim\)65 dB at 5 dB intervals, playing the noises through an additional loudspeaker so that the noise pressure level reaching the receiver is 50, 55, 60, and 65 dB. Noise samples from the 4 scenes are played continuously. At the same time, we play 20 audible benign commands and deliver Vrifle. Given that the noise is not constant, the superposition of different parts may have different effects; we therefore repeat the above operation three times and collect 240 mixed samples for each noise level. Fig. 14 demonstrates that Vrifle maintains effectiveness even when the ambient noise reaches 65 dB, with an average SR up to 97.65%. The performance drops slightly to 87.5% in the office-noise case, where keyboard typing and mouse clicking are crisp noises with intense high-frequency energy. Since Vrifle mainly affects low-frequency acoustic features after transformation, high-frequency noise might reduce its ability to deceive ASR models.
#### V-C6 Different Recording Devices
Since the ultrasound frequency response varies across recording devices and microphone models [15], we establish a specific transformation model for each device. To verify that our perturbation can still manipulate the ASR model after being recorded by different devices, we obtain 5 pairs of universal and silence perturbations based on the device-wise ultrasonic transformation.
| **Scene** | **Office** | **Lounge** | **Laboratory** | **Corridor** |
| --- | --- | --- | --- | --- |
| **SR** | 100% (40/40) | 95% (38/40) | 100% (40/40) | 92.5% (37/40) |
| **CER** | 0% | 0.79% | 0% | 1.04% |

TABLE V: Different attack scenes
Fig. 12: Vrifle's performance at different distances.

Fig. 13: Vrifle's performance at different angles.
| **Metrics** | **Baseline (_G1_)** | **Without (_G2_)** | **Low-pass (_G3_)** | **With (_G4_)** |
| --- | --- | --- | --- | --- |
| **SR** | 0% (0/120) | 0% (0/120) | 21.67% (26/120) | 100% (120/120) |
| **CER** | 95.7% | 78.93% | 19.39% | 0% |

TABLE IV: Ablation of w/ and w/o transformation modeling
After a data collection similar to the above experiments for each device, Fig. 15 shows that the average SR across these devices is as high as 96.8% and the CER as low as 0.50%, with Mix2s reaching 100% and 0% on these metrics, proving the crafted Vrifle's effectiveness on individual devices. Moreover, Vrifle attains a 95.8% SR on ReSpeaker, suggesting it can also attack devices with multi-channel microphones well. Furthermore, given that adversaries may attack unmodeled (i.e., unseen) devices, we investigate Vrifle's transferability, even though our ultrasound transformation modeling is device-specific. We apply the optimized perturbation of Mix2s to the other devices. The Mix2s perturbation transfers to Pixel 3aXL and Reno5 Pro with 94.2% and 83.3% SR, respectively. The performance is reduced on iPhone 14 Pro (31.7%) and ReSpeaker (50.8%) due to their microphones' different frequency selectivity to ultrasound. The results indicate that Vrifle is also transferable across devices.
#### V-C7 Different Speech & Perturbation Loudness
We further investigate how the attack performance changes with different loudness of the user speech and the universal perturbation. We vary the audible sound pressure level from 65\(\sim\)90 dB, measured with a decibel meter, and vary the ultrasonic emission power correspondingly. We play our perturbation, repeating 5 times at each volume level. Due to page limitations, the results are given in Appendix §E, Fig. 20. As the relative loudness changes, we find that once the perturbation has the same volume as the benign audio, it achieves over 55% SR. Moreover, at 5 dB above the benign audio, Vrifle works effectively with an average SR up to 95.5%. When Vrifle's volume is 10 dB higher than the audible speech, it can dominate all the user commands. Notably, even when a direct ultrasound-based attack is 35 dB louder than the audible audio, the ASR model still yields a CER as high as 46%; in that case, Vrifle achieves CERs down to 0%.
### _Attack with Portable Device and Off-the-shelf Loudspeaker_
Our sophisticated device facilitates an extensive attack range, providing great flexibility to attackers. We have also implemented two other covert attacks with a portable device and an everyday-life loudspeaker, as shown in Fig. 16.
#### V-D1 Portable Device
Our portable device, equipped with eight 25 kHz ultrasound transducers, a compact amplifier, and a rechargeable battery (Fig. 16(a)), balances light weight and attack range. It can be connected to a smartphone, on which the attacker stores perturbations as 96 kHz USB-AM audio in advance. We evaluate the effectiveness of attacks with the portable device, pointing it at Mix2s' bottom microphone with the target "open the door". Fig. 17(a) demonstrates 100% SR within 150 cm, and 78% SR with a CER as low as 1.69% even at a distance of 180 cm, suggesting Vrifle with portable devices can exceed the attack distance of almost all prior AEs.
#### V-D2 Off-the-shelf Loudspeaker
Adversaries can embed USB-AM perturbations into audio or video files to manipulate user commands when they are played on a computer or smartphone connected to a loudspeaker. We investigate the use of off-the-shelf loudspeakers, such as the high-end Hivi [46], which have three distinct sound sources: woofer (37\(\sim\)140 Hz), mid-range (140\(\sim\)2000 Hz), and tweeter (\(>\)2000 Hz). To determine the optimal ultrasound frequency for embedding the perturbations, we conduct experiments scanning the carrier frequency from 21\(\sim\)27 kHz and find 25.2 kHz to be the best, despite the gain decrease beyond the rated frequency range (37 Hz\(\sim\)20 kHz). Fig. 17(b) illustrates that Vrifle's effective attack distance via off-the-shelf speakers is approximately 20 cm, with a low CER of 11.07%, demonstrating effective modification of user commands at the character level.
## VI Anti-defense Experiment
In this section, we validate whether Vrifle can resist 6 kinds of representative defenses, involving audio pre-processing methods and inaudible attack detection. We consider two types of adversaries: 1) _Naive Adversary_: The naive adversary creates Vrifle based on the undefended model to attack the defended model. 2) _Adaptive Adversary_: This
Fig. 16: Two additional attack forms of Vrifle.

Fig. 17: Vrifle's performance at different distances with the portable device and off-the-shelf loudspeaker.

Fig. 14: Attack performance in the face of noises from typical scenes at 4 sound pressure levels.
Fig. 15: Attack performance on different recording devices.
adversary has full knowledge of the defense mechanisms and applies customized strategies to craft Vrifle.
**Against Audio Pre-processing Defense Methods.** Referring to previous works [9, 12, 31, 47] that present a series of audio pre-processing methods against audio adversarial example attacks, we examine the robustness of Vrifle using 5 representative defenses: _(1) Quantization_: converting audio sample values from 16-bit signed integers to 8-bit precision, which reduces the sampling range from [-32,768\(\sim\)32,767] to [-128\(\sim\)127]; this introduces distortion and noise due to the small value range at 8-bit precision. _(2) Voice Activity Detection (VAD)_: removing audio segments below -15 dB, after the maximum energy is normalized to 0 dB. _(3) Opus Compression Codec_: an audio codec with flexible bit rate and low latency, widely used in real-time communication, particularly VoIP and online meetings; we set the default compression level to 5 according to [48]. _(4) Band-pass Filter_: filtering the input signal with given cut-off frequencies, e.g., 50\(\sim\)7000 Hz. _(5) Down-sampling_: reducing the audio to a given rate, e.g., rate=0.4 means down-sampling 16 kHz audio to 6.4 kHz and then recovering it to the sampling rate required by the targeted ASRs (generally 16/48 kHz).
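Minimal sketches of three of these defenses, assuming 16 kHz float waveforms in [-1, 1] (the parameter values follow the text; the function names are ours):

```python
import numpy as np
from scipy.signal import resample_poly, butter, sosfilt

def quantize_8bit(x):
    """Quantization: squeeze 16-bit samples to 8-bit precision and back."""
    x16 = np.clip(x * 32767, -32768, 32767).astype(np.int16)
    x8 = (x16 // 256).astype(np.int8)            # range [-128, 127]
    return (x8.astype(np.float32) * 256) / 32767.0

def bandpass_defense(x, fs=16_000, lo=50, hi=7_000, order=6):
    """Band-pass filtering with given cut-off frequencies."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfilt(sos, x)

def downsample_defense(x, fs=16_000, rate=0.4):
    """Down-sample to rate*fs, then recover the ASR's required rate."""
    low = int(fs * rate)                          # e.g., 6.4 kHz for rate=0.4
    return resample_poly(resample_poly(x, low, fs), fs, low)
```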
When the model is undefended, we obtain an attack success rate (successfully altering the tested speech into the targeted command "open the door"), attack CER, and benign CER (computed between the DeepSpeech-recognized and ground-truth transcriptions) of 99.49%, 0.15%, and 11.23%, respectively. The results are listed in Tab. VI, VII, and VIII. We observe in Tab. VI that quantization, VAD, and Opus compression barely affect the attack success rate (all \(\geq\)95.93%). In particular, VAD significantly raises the benign CER from 11.23% to 23.58% while failing to lower our attack performance. Tab. VII and VIII demonstrate that naive Vrifle remains relatively effective even when facing a 50\(\sim\)4000 Hz band-pass filter or being down-sampled to 8 kHz (rate=0.5). However, the attack performance degrades as the bandwidth or the down-sampling rate decreases further. Note that we do not evaluate extreme cases, such as band-pass ranges narrower than 50\(\sim\)2000 Hz or down-sampling rates smaller than 0.3, since they severely degrade the model's ability to transcribe benign speech commands, with unacceptable CERs over 45%. After the adaptive adversary integrates the band-pass filtering operation into the optimization, the attack performance increases significantly: the success rate and attack CER reach 78.88% and 5.34%, respectively, even under 50\(\sim\)3000 Hz band-pass filtering. Similarly, the adaptive adversary achieves an 81.42% success rate and 4.38% CER against a down-sampling rate of 0.4.
**Against Inaudible Attack Detection Methods.** Given that Vrifle utilizes ultrasound-based modulation mechanisms, prior inaudible attack detection methods would be expected to distinguish such an attack from benign speech. We reproduce the representative software-based method LipRead [21], strictly following its instructions; it extracts and analyzes three features of speech samples: power in the sub-50 Hz band, the correlation coefficient (between the fundamental and harmonic components), and amplitude skew. We use the LipRead classifier to detect Vrifle samples crafted under the naive adversary setting and collected at different distances and angles, obtaining a detection accuracy of only 45.07%. Fig. 18 visualizes the three types of audio samples in two significant feature dimensions. Vrifle presents compact skewness around 1.0 due to its symmetrical waveform, whose distribution is closer to that of normal samples, while ultrasound-based attacks shift toward 0.30 and show greater sub-50 Hz power. Low-frequency power aggregation is still inevitable in our attack due to nonlinear demodulation [21]. Moreover, naive Vrifle exhibits a low correlation coefficient compared with traditional attacks, as its perturbations (see Fig. 7&9) barely present normal speech properties such as fundamental and harmonic frequencies. Overall, the inherent difference between Vrifle and traditional ultrasound-based attacks likely allows it to compromise LipRead. Furthermore, the adaptive adversary extracts the three features during perturbation generation and constrains them to be close to those of normal samples, further reducing LipRead's accuracy in detecting our attack to 30.55%.
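A rough sketch of the three features, assuming a mono 16 kHz waveform; the exact band definitions follow LipRead [21], so the correlation term below is only a crude proxy:

```python
import numpy as np
from scipy.stats import skew

def lipread_features(x, fs=16_000):
    """Sub-50 Hz power ratio, fundamental/harmonic correlation (proxy),
    and amplitude skew, as used for inaudible-attack detection."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    sub50_power = spec[freqs < 50].sum() / (spec.sum() + 1e-12)
    # Crude proxy: correlate a low band (fundamentals) with its octave
    # band (first harmonics), truncated to equal length.
    band1 = spec[(freqs >= 85) & (freqs < 255)]
    band2 = spec[(freqs >= 170) & (freqs < 510)][: len(band1)]
    corr = float(np.corrcoef(band1, band2)[0, 1])
    return sub50_power, corr, float(skew(x))
```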
| (%) | **Undefended** | **Quantization** | **VAD** | **Opus Compress** |
| --- | --- | --- | --- | --- |
| Success Rate | **99.49** | 97.96 | 99.49 | 95.93 |
| Attack CER | **0.1** | 0.28 | 0.1 | 0.84 |
| Benign CER | **11.23** | 13.43 | 23.58 | 12.37 |

TABLE VI: Three defenses: quantization, VAD, and Opus compression
| (%) | Band-pass (Hz) | 50\(\sim\)7000 | 50\(\sim\)6000 | 50\(\sim\)5000 | 50\(\sim\)4000 | 50\(\sim\)3000 |
| --- | --- | --- | --- | --- | --- | --- |
| _Naive_ | Success Rate | 97.96 | 96.95 | 95.67 | 75.57 | 12.47 |
| | Attack CER | 0.28 | 0.53 | 0.80 | 6.43 | 35.01 |
| _Adaptive_ | Success Rate | 99.75 | 99.24 | 97.20 | 91.60 | 78.88 |
| | Attack CER | 0.05 | 0.14 | 0.41 | 1.64 | 5.34 |
| | Benign CER | 11.63 | 13.60 | 19.27 | 28.83 | 41.06 |

TABLE VII: Defense with band-pass filter
Fig. 18: Two significant feature dimensions extracted from three classes of audio samples by LipRead (800 samples/class).
## VII Discussion and Future Work
**Potential Countermeasures:** We have demonstrated that Vrifle is robust to audio pre-processing and inaudible attack detection methods. We envisage that defense approaches tracking the nature of ultrasound [49, 50] may be effective, although these methods are based on two hardware-dependent prototypes that cannot be adapted to off-the-shelf compact smart devices. For the remaining feature-forensics-based [5, 21] or ML-based defenses [15, 51], we believe that the adaptive adversary can incorporate these defense strategies, together with the ultrasonic transformation model, into the optimization and physically bypass them, although this would result in a less universal attack due to the additional constraints.
**Preventing Airborne Self-demodulation Leakage:** Although our attack distance significantly exceeds that of previous works, note that Vrifle cannot extend the range infinitely. As uncovered in [52], self-demodulation occurs and the modulated baseband becomes audible once a certain power level is reached. To increase the attack range while ensuring inaudibility, we adopt the following strategies: 1) utilizing a 25 kHz carrier frequency rather than higher frequencies, such as 40 kHz, for less attenuation; 2) employing customized ultrasound transducers, signal generators, and amplifiers capable of suppressing nonlinear distortion at the speaker side; and 3) setting the maximum power not to exceed 3.2 W and implementing USB-AM to increase the attack efficiency with portable devices and off-the-shelf loudspeakers.
**Limitations:** 1) Vrifle achieves highly universal manipulation of user speech using DeepSpeech2's gradient information. However, its universality under black-box settings is limited in critical user-present scenarios due to variable user factors. Notably, targeted universal AE attacks in black-box scenarios remain an unsolved problem, despite several untargeted attempts [53, 54]. 2) Although we have verified that our ultrasonic transformation model is effective on different recording devices, it is currently device-specific due to the microphones' frequency selectivity to ultrasound. We will investigate a device-generic transformation model in future work. 3) Our careful design enables the _man-in-the-middle_ attack strategy, and our user testing in Appendix §F demonstrates its high stealthiness. However, the testing results imply that replaying excessively long user commands may cause discomfort and might alert the user. We envision that understanding user intent and then replaying short synthetic commands can mitigate this issue.
**Attack on Speaker Recognition:** We envision that the idea of Vrifle can be generalized to attack speaker recognition models deployed in access control systems, e.g., the authentication of voice assistants and applications. We have conducted a preliminary experiment attacking the state-of-the-art ECAPA-TDNN [55], a popular speaker recognition model. We maintain almost the same design as in attacking the ASR model, only reconfiguring the optimization goal \(y_{t}\) to the target speaker label and the loss function \(\mathcal{L}(f(\cdot),y_{t})\) to a cosine-similarity scoring module. Results demonstrate that, on a 10-person set, Vrifle can universally alter the voiceprint of any user's speech samples into the targeted speaker's. We plan to delve into this ability of Vrifle in future work.
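A minimal sketch of this loss reconfiguration, assuming `model` is a speaker-embedding extractor (e.g., ECAPA-TDNN) and `target_embedding` is the enrolled embedding of the targeted speaker; the names are illustrative:

```python
import torch
import torch.nn.functional as F

def speaker_attack_loss(model, mixed_audio, target_embedding):
    """Negative cosine similarity between the embeddings of the perturbed
    audio and the target speaker; minimizing it drives the voiceprint
    toward the target."""
    emb = model(mixed_audio)                         # (batch, emb_dim)
    score = F.cosine_similarity(emb, target_embedding.expand_as(emb), dim=-1)
    return -score.mean()
```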
## VIII Related Work
**Custom Adversarial Examples & Inaudible Attacks.** The initial AE attacks construct a custom (i.e., non-universal) perturbation for a specific audio clip, so the same perturbation cannot compromise other audio. Signal-level transformations [10, 11, 56], such as modifying MFCCs, are unintelligible to human beings but can be recognized by the ASR model. As this class of attacks resembles obvious noise, it can easily alert users. Thus, inaudible attacks [5, 21, 22] have been proposed, which exploit carrier signals outside the human-audible frequencies (e.g., 40 kHz) to inject attacks into ASR systems by utilizing the nonlinearity vulnerability of microphone circuits, while remaining entirely unheard by victims. However, compared with audible playback speech samples, such attacks usually suffer from signal distortion and low SNR due to their dependence on various covert channels, e.g., ultrasound [57], laser [6], or electricity [24] signals, and the hardware imperfections these channels introduce. A major branch of the research community also leverages the vulnerability of ASR models by adding slightly audible perturbations to benign audio based on \(\epsilon\)-constraints [7, 58] and psychoacoustic hiding [3, 4], making the AEs sound benign while fooling the ASR's transcription. It is worth noting that non-universal AEs lose effectiveness for streaming speech input and unpredictable user commands, as they rely on perfect temporal alignment. Constructing multiple AEs for altering different commands into an adversary-desired instruction is also impractical.
**Universal Adversarial Examples.** Recent studies propose universal AEs that can tamper with multiple speech contents toward an adversary-desired command. Existing untargeted universal AE attacks adopt iterative greedy algorithms [59] and can cause arbitrary speech to be misclassified [53] or falsely transcribed [54]. In contrast, targeted universal AE attacks are very challenging in speech recognition tasks because ASR models are context-dependent: when a given minor perturbation is superimposed at different positions of the same benign audio, the whole sentence may yield different transcription results. This is distinct from the prior successful targeted universal AE attacks in text-independent speaker recognition [26, 31] and universal adversarial patch attacks in position-insensitive CNN-based image classification tasks [60]. Moreover, given that the victim user can easily notice audible-band perturbations, AdvPulse [9] disguises short pulses in environmental sounds to be less perceptible. However, it only applies to a context-insensitive CNN-based audio-command classification model to achieve universality. To overcome the context-dependence of mainstream RNN-based ASR, SpecPatch [8] proposes a partial-match strategy, which also employs audible noise-like short pulses (0.5 s) to alter multiple short user commands into the targeted instruction against the mainstream DeepSpeech ASR model. However, such an attack does not work on relatively long commands (\(\geq\) 4 words) and can be noticed despite following \(L2\)-imperceptibility constraints.
Overall, due to the fundamental differences between audible and ultrasonic channels, Vrifle differs from prior works that encountered challenges related to _user auditory_ and _user disruption_. In addition to the four representative merits over existing AEs listed in Tab. I, Vrifle offers several additional benefits: (1) the optimization process is no longer subject to audibility constraints such as a tiny \(\epsilon\), psychoacoustics, or \(L_{p}\)-norms, nor does it need to limit the signal form to short pulses to reduce the possibility of being perceived; (2) Vrifle's broad optimization space further allows for fewer iterations while maintaining a high degree of universality. Combining these two advantages, Vrifle enables real-time manipulation of arbitrary user commands and long speech sentences in an _alter-and-mute_ fashion, as never before. (3) Unlike audible-band AEs that are easily compromised by interference due to their subtle perturbations, Vrifle demonstrates robustness and remains effective even when faced with various audio pre-processing defenses. Notably, our initial modeling of the ultrasound transformation precisely characterizes the ultrasound channel and justifies it as a promising carrier for IAP delivery. We believe that this modeling effort lays the groundwork for generating inaudible AEs and may inspire future works.
## IX Conclusion
In this work, we propose an inaudible adversarial perturbation (IAP) attack against ASR systems named Vrifle, which extends to scenarios where users are present and may be using ASR services. In such scenarios, prior attacks fail due to _user auditory_ and _user disruption_ issues. We make the first attempt to model the ultrasonic transformation process, based on which Vrifle can alter arbitrary user commands to the adversary-desired intent in real time without any knowledge of the users' speech. Our comprehensive experiments in the digital and physical worlds across various configurations demonstrate Vrifle's effectiveness and robustness. Overall, Vrifle features merits including complete inaudibility, universality, and long-range attack ability.
## Acknowledgement
We thank the shepherd and anonymous reviewers for their valuable comments and suggestions. We also appreciate Dr. Jiangyi Deng for providing valuable insights and guidance to the paper's content. This work is supported by China NSFC Grant 61925109, 62201503, 62222114, and 62071428.
|
2309.01369 | Exploring Limits of Diffusion-Synthetic Training with Weakly Supervised
Semantic Segmentation | The advance of generative models for images has inspired various training
techniques for image recognition utilizing synthetic images. In semantic
segmentation, one promising approach is extracting pseudo-masks from attention
maps in text-to-image diffusion models, which enables
real-image-and-annotation-free training. However, the pioneering training
method using the diffusion-synthetic images and pseudo-masks, i.e., DiffuMask
has limitations in terms of mask quality, scalability, and ranges of applicable
domains. To overcome these limitations, this work introduces three techniques
for diffusion-synthetic semantic segmentation training. First,
reliability-aware robust training, originally used in weakly supervised
learning, helps segmentation with insufficient synthetic mask quality. Second,
we introduce prompt augmentation, data augmentation
to the prompt text set to scale up and diversify training images with limited
text resources. Finally, LoRA-based adaptation of Stable Diffusion enables the
transfer to a distant domain, e.g., auto-driving images. Experiments in PASCAL
VOC, ImageNet-S, and Cityscapes show that our method effectively closes the gap
between real and synthetic training in semantic segmentation. | Ryota Yoshihashi, Yuya Otsuka, Kenji Doi, Tomohiro Tanaka, Hirokatsu Kataoka | 2023-09-04T05:34:19Z | http://arxiv.org/abs/2309.01369v2 | Attention as Annotation: Generating Images and Pseudo-masks for Weakly Supervised Semantic Segmentation with Diffusion
###### Abstract
Although recent advancements in diffusion models have enabled high-fidelity and diverse image generation, the training of discriminative models still largely depends on collections of massive real images and their manual annotation. Here, we present a training method for semantic segmentation that relies on neither real images nor manual annotation. The proposed method _attn2mask_ utilizes images generated by a text-to-image diffusion model in combination with its internal text-to-image cross-attention as supervisory pseudo-masks. Since the text-to-image generator is trained with image-caption pairs but without pixel-wise labels, attn2mask can be regarded as a weakly supervised segmentation method overall. Experiments show that attn2mask achieves promising results on PASCAL VOC despite not using any real training data for segmentation, and it is also useful for scaling up segmentation to a scenario with more classes, i.e., ImageNet segmentation. It also shows adaptation ability with LoRA-based fine-tuning, which enables the transfer to a distant domain, i.e., Cityscapes.
## Introduction
Generative modeling of images is the task of estimating the distribution of natural images and enabling the sampling of new images from it. Although the generation of natural-looking images has been a long-standing challenge, recent diffusion models [11] brought rapid advancement in the stable training of high-fidelity image generative models. In particular, diffusion models trained with large-scale web-crawled image-text pairs [14] enabled highly controllable text-to-image synthesis (text2img) [15, 16], which can reflect free-form text prompts in the generated contents.
In contrast, the training of image-recognition models largely depends on supervised learning using massive manual annotations on image collections [1]. An extreme case is semantic segmentation, where we need pixel-wise class labeling of each training image to exploit full supervision. Weakly supervised learning (WS), a methodology to utilize coarser signals such as boxes or image-level labels [1], has been extensively studied in segmentation to mitigate this labor. Although recent WS methods have achieved preferable segmentation qualities on the benchmarks [17], complex techniques are often needed to acquire pseudo-masks with satisfactory quality, which limits the generality of the methods. Furthermore, WS cannot remove the necessity of gathering images that are annotatable and aligned with the deployment (or test) domain.
Here, our idea is to leverage advanced generative models to synthesize images and corresponding pseudo-labels jointly. The generation process of a generative model may contain the semantics of the images it generates, which are reusable for training another recognition model. We propose _attn2mask_, a semantic-segmentation training method that exploits synthesized images and internal attention maps from a text2img diffusion model as supervisory signals, as illustrated in Fig. 1. Attn2mask can be regarded as a WS method since it only requires image-text pairs for training the generative model and no pixel-annotated images. However, attn2mask's difference from existing WS methods is that it requires no real images for training its segmentation model, which removes the image-collection labor once the text2img model is trained (or downloaded from public model hubs).
While we demonstrate that a simple combination of off-the-shelf Stable Diffusion [16] and a supervised segmentation method [3] works unexpectedly well, we further investigate approaches for improvement from two viewpoints: 1) robust segmentation training and 2) domain adaptation. The former is based on the observation that the diffusion-based pseudo-masks are sometimes not very accurate due to leaking into back
Figure 1: Proposed method _attn2mask_ uses generated images paired with pseudo-masks computed from per-prompt-token generative attentions for semantic segmentation.
grounds or missing parts of objects, and we find that a real-image-based WS training method [14] can mitigate the harmful effects of this, even when training with synthesized images. For the latter, we develop an unsupervised domain adaptation technique that uses test-domain images (but not labels) for our image-mask generator, departing from purely synthesis-based training. This adaptation opens the possibility of exploiting generic image generators in new domains for which the generators were not intended.
In experiments, we verify attn2mask's effectiveness by achieving 58.3 mIoU on the PASCAL VOC segmentation task, which is promising given that no real images or annotations are used at all. Attn2mask is also applicable to a larger-scale scenario, ImageNet-S 1k-class segmentation, where it performs sufficiently close to the semi-supervised BigDatasetGAN generator. Furthermore, adapting Stable Diffusion using a small amount of target-domain images is shown to be beneficial for distant-domain transfer to the Cityscapes driving images, although this departs somewhat from the spirit of real-image-free training.
Our contributions are summarized as follows: 1) We show that off-the-shelf Stable Diffusion, without any modification or fine-tuning, can generate images and pseudo-masks of sufficient quality to train semantic-segmentation models. 2) We improve the synthetic training results by incorporating a robust training method based on co-training. 3) We present a domain-adaptive image-and-pseudo-mask generation method with Stable Diffusion that exploits LoRA to make attn2mask applicable to distant-domain tasks.
## Related work
### Segmentation with diffusion models
Several studies have tried to leverage the diffusion models' excellent image-generation performance for segmentation. There are two lines of research: reusing the diffusion U-Net for segmentation tasks, and generating training image-label pairs for segmentation. For the former, it was shown that denoising diffusion probabilistic models (DDPMs) can be used as pre-trained feature extractors for semantic segmentation [1]. ODISE [23] similarly fine-tunes a text2img diffusion model for segmentation, and it enables phrase-based segmentation by reusing the text encoder in the text2img model. This usage of text2img models as visual-linguistic backbones can be seen as a generative counterpart of contrastive visual-linguistic foundation models, e.g., CLIP [1], which can also be repurposed for segmentation tasks [13]. However, all of the aforementioned methods rely on manually annotated pixel labels for training.
For the latter line, Grounded Diffusion [10] trains image-and-mask generation models by leveraging a Mask R-CNN pre-trained with pixel labels. We refer to this kind of approach as _semi-supervised_ mask generation due to its dependency on annotated training images for at least part of the classes. DiffuMask [22] is closer to ours in the sense of _weakly supervised_ mask generation, where masks are automatically acquired via text2img generation training. The main difference between DiffuMask and our attn2mask is that DiffuMask focuses on single-object generation, whereas attn2mask supports multi-object image generation and is usable for training multi-class segmentation models. Note that these studies [10, 22] are contemporary to ours.
### Training with generated images
The idea of exploiting free annotations from synthesized images dates back before large-scale generative models: for example, computer-graphics-based dataset creation using game engines was examined [15]. More primitively, even segmentation of mathematically drawn contours can help real segmentation as a pretraining task [11]. However, such rendered images tended to have gaps from real ones, which required supervised retraining or unsupervised adaptation techniques [12] to be fully utilized for preferable real-environment performance. In contrast, we exploit modern generative models' high-fidelity generation ability, which narrows the gap without any adaptation, at least in generic settings such as PASCAL VOC. Prior to diffusion models, GAN-based image-and-mask generation was also tried [10], but it still required manual annotation on part of the classes to fine-tune the GANs for mask generation. In addition to segmentation, classification [23] and unsupervised representation learning [16] with generated images have been studied, with reported performances gradually approaching their real-dataset-based counterparts, which is motivating for studies in synthetic training including ours.
### Weakly supervised semantic segmentation
The mainstream of weakly supervised semantic segmentation (WSSS) approaches exploits image-level class labels. Two-stage training that first trains an image-level classifier and then re-trains another segmentation model using the classifier's class-activation maps (CAM) [15] as weak labels is the most straightforward but still well-used strategy. Among the two-stage methods, existing studies attempted techniques for better CAM generation [22, 23, 24, 25] and robust segmentation training against weak labels [13, 14, 15]. From the viewpoint of WSSS, we replace classifier-based CAMs with generative attentions combined with generated images. In addition, we incorporate the latest robust segmentation-training method [14], which we confirm to be beneficial in our training scenario.
### Unsupervised segmentation and text-based segmentation
Recently, segmentation studies diverged in various training strategies. For example, unsupervised semantic segmentation does not use any form of supervision in the downstream segmentation training [26, 27]. Our attn2mask may be seen as unsupervised in the sense that it is annotation-free in the downstream domains,
but it operates under the stricter constraint that downstream-domain (real) images are unavailable, unlike usual unsupervised methods. Text-based segmentation exploits large-scale vision-language pretraining [14, 15, 16], which often transfers well to downstream segmentation tasks without fine-tuning. Attn2mask is related to these methods in its usage of vision-language pretraining, except for the difference between generative (i.e., Stable Diffusion) and discriminative (i.e., CLIP) pretraining.
## Proposed method Attn2mask
An overview of attn2mask is shown in Fig. 2. It consists of a) the image-and-pseudo-mask generation module and b) the segmentation module. We also apply post-processing that converts the pseudo-masks into multi-class discrete labels before passing them to the segmentation model.
### Image-and-pseudo-mask generation
To extract semantic regions from diffusion-synthesized images, we need to compute the contribution of each word in the prompt to each location in the image. Fortunately, image-text cross-attention, popularly used in most diffusion models including Stable Diffusion, is naturally available for this purpose. Transformer-style attention modules can be written as follows:
\[\text{Transformer}(Q,K,V)=A(Q,K)V,\tag{1}\]
\[A(Q,K)=\text{Softmax}\left(\frac{QK^{T}}{\sqrt{\text{dim}(K)}}\right),\tag{2}\]
where \(Q\), \(K\), and \(V\) are the query, key, and value, each a sequence of vectors. When used as text-to-image cross-attention, \(Q\) is computed from the image embedding \(\mathbf{f}\in\mathbb{R}^{W\times H\times C}\), whose spatial size is \(W\times H\) and dimensionality is \(C\), while \(K\) and \(V\) are computed from the text embedding \(\mathbf{s}\in\mathbb{R}^{L\times D}\), whose length is \(L\) and dimensionality is \(D\). This makes the attention maps' dimensionality
\[A(Q(\mathbf{f}),K(\mathbf{s}))\in\mathbb{R}^{W\times H\times L},\tag{3}\]
where the element at \((x,y,t)\) can be interpreted as the amount of contribution from the \(t\)-th text token to the location in image coordinate \((x,y)\).
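A minimal PyTorch sketch of Eqs. (1)-(3), where `W_q` and `W_k` stand in for the learned projection matrices (in practice, these maps are read out of Stable Diffusion's U-Net, e.g., via forward hooks):

```python
import torch

def cross_attention_maps(image_feat, text_emb, W_q, W_k):
    """image_feat: (W*H, C) flattened image embedding; text_emb: (L, D).
    Returns A of shape (W*H, L); A[xy, t] is the contribution of
    text token t to image location xy."""
    Q = image_feat @ W_q                      # (W*H, d)
    K = text_emb @ W_k                        # (L, d)
    return torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
```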
From the attention maps \(A\), we extract channels that are relevant to the classes of interest to use them as supervisory masks. Given a token sequence of the input prompt, we denote the positions of the relevant tokens of class \(c\) by \(\tau_{c}=[t_{c1},t_{c2},...,t_{cn_{c}}]\), allowing multiple occurrences of the relevant tokens for a single class. The "relevance" of words and classes is set manually by rule-based matching, for example, listing synonyms for a class's display name.
\[A_{c}=\frac{1}{n_{c}}\sum_{t\in\tau_{c}}A(Q(\mathbf{f}),K(\mathbf{s}))_{t}\in\mathbb{R}^{W\times H}.\tag{4}\]
Here, we denote the operation to extract \(t\)-th channels of the attention map by \(A(\cdot,\cdot)_{t}\).
We add one more probability map for the background as a class defined by
\[A_{0}(x,y) = 1.0-\max_{i\in[1,N]}A_{i}(x,y)-\beta, \tag{5}\]
which represents the absence of activations for any other class. Here, \(\beta\) is a hyperparameter controlling the strength of the background prior; a larger \(\beta\) yields smaller background portions in the labeling results. The motivation behind this special treatment of the background is that it tends to be generated implicitly, without a concrete indication in the prompts, which makes the class-word-based aggregation in Eq. 4 difficult.
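A minimal NumPy sketch of Eqs. (4)-(5); the \(\beta\) value here is illustrative, not the paper's setting:

```python
import numpy as np

def class_attention_maps(A, token_positions, beta=0.2):
    """A: attention maps of shape (W, H, L); token_positions: dict mapping
    class id c in 1..N to its relevant token indices tau_c.
    Returns maps of shape (N+1, W, H) with the background at index 0."""
    n_classes = len(token_positions)
    maps = np.zeros((n_classes + 1,) + A.shape[:2], dtype=A.dtype)
    for c, tau_c in token_positions.items():
        maps[c] = A[:, :, tau_c].mean(axis=-1)       # Eq. (4)
    maps[0] = 1.0 - maps[1:].max(axis=0) - beta      # Eq. (5), background
    return maps
```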
After the attention maps \(A_{0},A_{1},...,A_{N}\) are gathered, we binarize them to acquire multi-class discrete pseudo-labels. We use the densely connected conditional random field (dCRF) [10], a non-learning-based labeling algorithm. This is also useful for improving mask quality, as it considers color similarity and local smoothness in labeling. These generation processes are iterated over a given set of prompts, and the generated images and masks are stored on disk as a dataset.
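A sketch of this dCRF refinement, assuming the common `pydensecrf` implementation (the pairwise parameters shown are its usual defaults, not values from the paper):

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dcrf(image, prob_maps, n_iters=5):
    """image: (H, W, 3) uint8; prob_maps: (n_classes, H, W) soft maps.
    Returns an (H, W) integer label map."""
    n_classes, H, W = prob_maps.shape
    prob_maps = prob_maps / (prob_maps.sum(axis=0, keepdims=True) + 1e-8)
    d = dcrf.DenseCRF2D(W, H, n_classes)
    d.setUnaryEnergy(unary_from_softmax(prob_maps.astype(np.float32)))
    d.addPairwiseGaussian(sxy=3, compat=3)           # local smoothness
    d.addPairwiseBilateral(sxy=80, srgb=13,          # color similarity
                           rgbim=np.ascontiguousarray(image), compat=10)
    Q = np.array(d.inference(n_iters))
    return np.argmax(Q, axis=0).reshape(H, W)
```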
### Training segmentation models
Once a dataset of synthesized images and pseudo-labels is generated, any supervised segmentation model is, in general, trainable with it. However, naive training of supervised learners may suffer since the attention-based pseudo-masks
Figure 2: Overview of the proposed method _attn2mask_.
are not always accurate. Robust learners that are tolerant of label noise would help. We therefore adopt a robust training algorithm from BECO [14], a recent WS segmentation method, as shown in Fig. 3.
BECO is an adaptive co-training method exploiting reliability maps of the pseudo-labels: it performs supervised learning with full trust in the pseudo-labels in regions where they are confident, and performs co-training-based consistency regularization in regions where they are unreliable. In the original work, BECO was trained with reliability maps generated by processing the classifier's confidence score maps. We do not use any classifier and hence simply binarize the attention map to use it as the reliability map as follows:
\[R(x,y) = \left\{\begin{array}{ll}1&\text{if }\max_{i\in[0,N]}A_{i}(x,y) \geq r\\ 0&\text{otherwise},\end{array}\right. \tag{6}\]
where \(r\) is a hyperparameter for the reliability threshold. All other parts of BECO are used unchanged.
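A one-line sketch of Eq. (6) over the aggregated maps from Eqs. (4)-(5):

```python
import numpy as np

def reliability_map(maps, r=0.5):
    """maps: (n_classes+1, H, W) attention maps; returns the binary
    reliability map R of Eq. (6): 1 where the strongest class attention
    reaches the threshold r, 0 elsewhere (handled by co-training)."""
    return (maps.max(axis=0) >= r).astype(np.uint8)
```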
### Domain adaptive image-mask generation
While image-mask generation with off-the-shelf Stable Diffusion generally performs satisfactorily, in out-of-domain applications it may suffer from the domain gap between the generative model and the downstream tasks. We notice that this is the case, for example, when applying attn2mask to the driving images in Cityscapes [1].
To make attn2mask applicable to new domains, we develop an adaptation method using unlabeled training images. Here, we repurpose the instance-driven generation technique DreamBooth [14] for domain adaptation. For a brief review, DreamBooth learns an instance token (often denoted by [V]) from a relatively small set of instance images; after fine-tuning on the instance-image set, [V] is usable as a new word in prompts to indicate the instance. Using DreamBooth, we instead learn a domain token: by training [V] not on a specific instance but on general training images from a new domain, Stable Diffusion can adapt to the domain while keeping the ability to reflect other prompt words in the image contents.
In our implementation, we use DreamBooth-LoRA [15], which trains a low-rank adapter [14], a small sub-network, instead of fine-tuning the whole network. This enables fast and stable training with a relatively small target-domain training set.
## Experiments
We evaluate semantic-segmentation models trained with attn2mask on three publicly available datasets, namely, PASCAL VOC [1], ImageNet-S [13], and Cityscapes [1]. We conduct the main experiments on PASCAL VOC, which has been the de facto standard in semantic-segmentation evaluation. While fully supervised learning on VOC seems close to saturation in terms of accuracy, it is challenging enough to tackle without full annotation and real images. Additionally, we evaluate attn2mask's generalizability to 1) a larger-scale segmentation task with ImageNet-S, which was built on ImageNet-1k [12], and 2) a more task-specific scenario with Cityscapes, a dataset for urban autonomous driving.
### Dataset generation
We used Stable Diffusion 1.4 [15] trained on LAION-2B [16] without modification except in the domain-adaptation settings. We generated 512 \(\times\) 512-resolution images. The attention maps had smaller resolutions but were resized to match the image resolution. Stable Diffusion's U-Net had multiple cross-attention layers and they were averaged after resizing. We used DDPM [13] sampler with 100 timesteps, and attention maps were taken at the 50th step. Generation of a sample took around eight seconds with an NVIDIA V100 GPU. Post-processing was conducted using a Python reimplementation of dCRF [1] with the default hyperparameters.
The prompt set for VOC was created from the image descriptions in COCO captions [1]. We curated captions that include VOC class names and their major synonyms, which we manually listed, yielding 168,679 captions in total. The prompt set for ImageNet-S was created with the template "a photo of ${class_name} with a background". The variable ${class_name} was randomly chosen from the multiple synonyms defined per class. This made the number of possible prompt varieties significantly smaller than the number of samples needed per class (around 100 images/class) for our experiments, and thus we increased the number of samples by using
Figure 3: Our fully supervised and weakly supervised training schemes.
multiple random seeds per prompt. The suffix "with a background" helps reduce the generation of full-foreground images, which are less useful for training.
For Cityscapes, we used the variable-length template "a photo of ${class_names[0]}, ${class_names[1]},..., ${class_names[n]}, in cityscape by a dashboard camera", where class_names is a random variable-length list of Cityscapes class names. Some class names in Cityscapes are not household words and may not be reflected in the generation, so we replaced "terrain" with "sand" and "vegetation" with "plant". We generated 12k samples. In the adaptation experiments, we used the modified template "a photo of ${class_names[0]}, ${class_names[1]},..., ${class_names[n]} in sks cityscape by a dashboard camera", where "sks" is the reserved word for the domain token [V] in DreamBooth fine-tuning. More details of this fine-tuning are given in the Appendix.
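A minimal sketch of this prompt construction (the class-name list below is partial and only illustrative):

```python
import random

CITYSCAPES_NAMES = ["road", "building", "sky", "car", "person",
                    "sand", "plant"]  # "terrain"/"vegetation" renamed as above

def cityscapes_prompt(adapted=False, max_classes=4, rng=random):
    """Fill the variable-length Cityscapes template; adapted=True inserts
    the DreamBooth domain token 'sks'."""
    names = rng.sample(CITYSCAPES_NAMES, rng.randint(1, max_classes))
    scene = "sks cityscape" if adapted else "cityscape"
    return f"a photo of {', '.join(names)}, in {scene} by a dashboard camera"

def imagenet_prompt(synonyms, rng=random):
    """ImageNet-S template; synonyms: display-name variants of one class."""
    return f"a photo of {rng.choice(synonyms)} with a background"
```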
### Configurations of our model
First, we implemented the simplest version of our proposed method with Stable Diffusion and a fully supervised segmentation model, which we refer to as attn2mask. We used DeepLabv3+ [3] with a ResNet50 [12] backbone from the mmsegmentation (MMSegmentation Contributors 2020) codebase. Additionally, we replaced the segmentation training with WS training, which we refer to as attn2mask+BECO; we used the official training code of BECO [13], which also uses DeepLabv3+-ResNet50, so trained models from the two codebases are fully compatible. The reliability threshold \(r\) was set to 0.5. Finally, we fine-tuned Stable Diffusion using DreamBooth-LoRA [10] for transfer to Cityscapes, which we refer to as attn2mask-LoRA. We set the base learning rate to 0.01, the weight decay to 1e-4, and the total batch size to 32 on 2-GPU machines. For comparison, we trained real-supervised models with the same settings.
### Evaluation protocol
For PASCAL VOC and Cityscapes, we followed the official protocol for supervised learners, which is applicable without modification to WS learners including ours. We used the mean intersection-over-union (mIoU) metric. For ImageNet-S, we derived a foreground/background (FG/BG) segmentation task by extracting masks that have the same class labels as their whole images. This was intended to replicate the evaluation protocol of BigDatasetGAN [11], in which the authors built their own ImageNet-based segmentation benchmark, which remains unpublished.
| Method | bg | aeroplane | bicycle | bird | boat | bottle | bus | car | cat | chair | cow | table |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Real-supervised | 93.6 | 88.0 | 40.0 | 84.5 | 66.2 | 63.2 | 89.3 | 84.5 | 87.6 | 32.4 | 74.3 | 37.6 |
| Attn2mask (ours) | 83.0 | 79.5 | 33.2 | 55.7 | 56.6 | 56.5 | 76.3 | 58.3 | 70.1 | 22.2 | 57.0 | 34.6 |
| Attn2mask+BECO (ours) | 86.6 | 77.2 | 29.0 | 79.0 | 72.0 | 56.2 | 84.8 | 68.0 | 80.9 | 24.9 | 76.3 | 46.0 |

| Method | dog | horse | motorbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Real-supervised | 83.0 | 73.8 | 78.1 | 86.1 | 56.5 | 76.4 | 45.0 | 82.8 | 71.5 | 71.2 |
| Attn2mask (ours) | 64.7 | 64.2 | 55.3 | 9.0 | 17.2 | 57.7 | 18.7 | 63.2 | 38.9 | 51.1 |
| Attn2mask+BECO (ours) | 73.6 | 71.3 | 65.9 | 19.8 | 22.5 | 67.8 | 7.4 | 68.7 | 46.1 | 58.3 |

Table 1: Segmentation performances on the PASCAL VOC 2012 val set.
\begin{table}
\begin{tabular}{c c c c} \hline Method & Adapt. & Acc. & mIoU \\ \hline Attn2mask (ours) & & 63.7 & 13.2 \\ Attn2mask-LoRA (ours) & ✓ & **69.7** & **17.1** \\ \hline MaskCLIP & & 35.9 & 10.0 \\ MDC & & – & 7.0 \\ PiCIE & & – & 9.7 \\ ReCo & & **74.6** & **19.3** \\ \hline MDC & ✓ & 47.9 & 7.1 \\ PiCIE & ✓ & 40.7 & 12.3 \\ STEGO & ✓ & 73.2 & 21.0 \\ ReCo+ & ✓ & **83.7** & **24.2** \\ \hline \end{tabular}
\end{table}
Table 4: Results in Cityscapes
\begin{table}
\begin{tabular}{c c c c} \hline Method & \#Synth. imgs & \#Real imgs & mIoU \\ \hline Attn2mask (ours) & 2,000 & 0 & 41.2 \\ & 10,000 & 0 & 46.5 \\ & 50,000 & 0 & 49.7 \\ & 168,679 & 0 & 51.1 \\ Attn2mask+BECO (ours) & 10,000 & 0 & 51.7 \\ & 168,679 & 0 & 58.3 \\ Pseudo-mask matters (base) & 0 & 10,582 & 52.7 \\ Pseudo-mask matters (full) & 0 & 10,582 & 70.0 \\ ReCAM (base) & 0 & 10,582 & 54.8 \\ ReCAM (full) & 0 & 10,582 & 70.5 \\ \hline \end{tabular}
\end{table}
Table 2: Comparisons with real-image-based WSSS methods
\begin{table}
\end{table}
Table 3: Results in the ImageNet-S FG/BG segmentation evaluated with mIoU.
## Results
Table 1 summarizes the results on VOC. Attn2mask achieved an mIoU of 51.1%, which is naturally worse than the results of supervised training with real images, but surprisingly effective given that it uses no real images or manual annotation at all. In particular, attn2mask performs close to its supervised counterpart in relatively easy classes such as _aeroplane_ (88.0% vs. 79.5%), _bus_ (89.3% vs. 76.3%), and _horse_ (73.8% vs. 64.2%), and also in difficult classes where the supervised models do not work well, such as _bicycle_ (40.0% vs. 33.2%) or _table_ (37.6% vs. 34.6%). We also confirmed that WS training with BECO benefits our synthetic training, gaining a +7.2-point improvement in mIoU. However, we also acknowledge significant degradation of IoU in some classes such as _person_, _plant_, and _sofa_ compared with real-image training. We observed that such classes tend to be generated as parts of backgrounds without being explicitly prompted. This introduced label noise for these classes, which is the major limitation of our synthetic training.
Table 2 shows comparisons between attn2mask and real-image-based weakly supervised methods. For comparison, we selected Pseudo-mask matters [14] and ReCAM [15], which are CAM-based and report baselines that simply learn CAM with their networks. Interestingly, attn2mask with the full set of synthetic images performed similarly to the baseline weak learners. However, there were gaps between the full configurations of the real-weak learners and attn2mask; the WS training added +7.2 mIoU (51.1 \(\rightarrow\) 58.3) to attn2mask, but over 15 points to the real-weak learners (52.7 \(\rightarrow\) 70.0 and 54.8 \(\rightarrow\) 70.5). These phenomena can be explained as follows: 1) the domain gaps between real and synthesized images are less dominant when the supervisory signals are weak, but 2) they become non-negligible as the segmentation models fit more tightly with the weakly supervised algorithms.
Table 3 shows the results in ImageNet-S FG/BG segmentation. We compared attn2mask with the same network trained on data from BigDatasetGAN [14]. BigDatasetGAN is a semi-supervised generator that relies on manual annotation of generated images to fine-tune a GAN for mask generation. Attn2mask performed competitively without using any pixel-level annotations, which is encouraging because it shows the ability to freely add classes within the range Stable Diffusion can generate.
Table 4 shows the results in Cityscapes. In contrast to segmenting object images as in VOC, holistic scene understanding in Cityscapes is extremely challenging for WSSS methods using image-level labels. Hence, unsupervised segmentation methods [13, 12] have instead been intensively developed for it, and we list reference mIoU values from them. Attn2mask performs similarly to the existing unsupervised methods, even with the significant domain gap between Stable Diffusion's web-crawled training images and the driving images. We confirmed that the LoRA-based adaptation contributed to the segmentation accuracy, which is encouraging for future development of domain-adaptive diffusion-based segmentation methods. Attn2mask outperformed the strong unsupervised segmentation methods MaskCLIP [13] and PiCIE [13] in both settings, with and without adaptation. The two methods that outperformed attn2mask rely on heavy test-time computation; STEGO uses test-time clustering [12] and ReCo uses retrieval of relevant images for co-segmentation [16].
\begin{table}
\begin{tabular}{c c} \hline \hline Method & mIoU \\ \hline Attn2mask & **51.1** \\ NTI on the train set and retraining & 50.6 \\ NTI on the val set & 47.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparisons with null-text inversion (NTI)
Figure 4: Visualized attention maps corresponding to each of prompt tokens.
\begin{table}
\begin{tabular}{c c} \hline \hline Post-processing & mIoU \\ \hline Thresholding \(\theta=64\) & 43.5 \\ Thresholding \(\theta=128\) & 44.6 \\ Otsu & 41.2 \\ dCRF \(\beta=0\) & 44.3 \\ dCRF \(\beta=0.2\) & **46.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Variations of post-processing and hyperparameters
### Variations of post-processing
As ablative analyses of post-processing, we tested alternatives to dCRF. Table 5 summarizes the results measured in the attn2mask 10,000-training-data setting. The results show the effectiveness of dCRF compared with simply applying a threshold value \(\theta\) to the attention maps ranging in \([0,255]\), or an adaptive threshold with Otsu's method [10].
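A minimal sketch of the non-dCRF variants in Table 5, assuming attention maps scaled to \([0,255]\); the dCRF refinement itself would typically rely on an external package such as pydensecrf and is omitted here.

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize_attention(attn: np.ndarray, mode: str = "fixed",
                       theta: float = 128) -> np.ndarray:
    """Binarize a [0, 255] attention map with a fixed threshold (theta = 64 or
    128 in Table 5) or with Otsu's adaptive method."""
    if mode == "otsu":
        theta = threshold_otsu(attn)
    return (attn >= theta).astype(np.uint8)

attn = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
mask_fixed = binarize_attention(attn, "fixed", theta=128)
mask_otsu = binarize_attention(attn, "otsu")
```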
### Stable-Diffusion attention on real images?
We witnessed the surprising effectiveness of Stable Diffusion's synthesized images and attention maps combined into a segmentation dataset, but this raises a new question: _Is this effectiveness solely due to the quality of the attention maps from Stable Diffusion? Do we necessarily need to combine them with synthesized images?_ To answer these questions, we attempted to train segmentation models with real images and Stable-Diffusion attentions _projected_ onto them. For this purpose, we use null-text inversion [17], an inversion method for diffusion models that estimates generation trajectories to generate a given image with a given prompt. The estimated trajectories, i.e., the latents at each time step, are used to compute attention maps, which we can then use as pseudo-labels for segmentation. However, this did not outperform attn2mask despite its additional usage of real images, as shown in Table 6. Using the ground-truth class labels as the prompts, we tried NTI-based pseudo-label generation for VOC training images and direct NTI-based inference on the validation images in combination with dCRF post-processing, but neither outperformed attn2mask. This indicates that attentions from Stable Diffusion are particularly good when combined with its own synthesized images, which supports our design of attn2mask.
### Visualizations
Figure 4 visualizes examples of the attention maps corresponding to each prompt token. We can see that Stable Diffusion distributes attention well in response to the prompts even in the multi-object generation. This suggests that visual grounding emerges in learning large-scale text2img generation without explicit location-based supervision.
Figure 5 shows examples of generated VOC training images and labels by attn2mask. In addition to the natural-looking object images, the labels are highly accurate except for some missing fine parts and over-masking of backgrounds that are contextually connected to objects, e.g., shadows of objects or waves around boats. We also noticed severe failure cases such as masks not covering whole objects (bottom left) or mismatches between object classes and labels (bottom right).
Figure 6 visualizes synthesized images and attention maps before and after DreamBooth-LoRA-based adaptation. Before adaptation, image contents and styles were not well aligned with Cityscapes, and the naturalness of the images was somewhat harmed because the target domain is distant from the LAION dataset. After adaptation, generated images look sufficiently like the images in Cityscapes, while attention maps remained focused on the objects of interest. It is interesting that the correspondence between attention maps and objects is not corrupted after fine-tuning with the Cityscapes images, which are not paired with any captions; this might be because DreamBooth-LoRA reuses the prior knowledge acquired in the original Stable Diffusion well.
## Conclusion
We presented attn2mask, a semantic-segmentation method that is free of real images and annotations. Although synthetic training did not outperform real-image-based counterparts to the extent we studied, attn2mask worked surprisingly well for purely real-image-free segmentation training, and the gaps between the real and synthetic performances were encouragingly narrow. We hope that this work inspires future studies of generated-image-based methods in a wider variety of tasks.
Figure 5: Examples of our training data generated for VOC.
Figure 6: Examples of images and attention maps before and after LoRA-based adaptations in Cityscapes.
|
2304.11192 | The reionising bubble size distribution around galaxies | Constraining when and how reionisation began is pivotal for understanding
when the first galaxies formed. Lyman-alpha (Ly$\alpha$) emission from galaxies
is currently our most promising probe of these early stages. At z>7 the
majority of galaxies detected with Ly$\alpha$ are in candidate overdensities.
Here we quantify the probability of these galaxies residing in large ionised
bubbles. We create (1.6 Gpc)$^3$ reionising intergalactic medium (IGM)
simulations, providing sufficient volume to robustly measure bubble size
distributions around UV-bright galaxies and rare overdensities. We find $M_{\rm
UV} \lesssim -16$ galaxies and overdensities are $\gtrsim$10-1000x more likely
to trace ionised bubbles compared to randomly selected positions. The brightest
galaxies and strongest overdensities have bubble size distributions with
highest characteristic size and least scatter. We compare two models: gradual
reionisation driven by numerous UV-faint galaxies versus more rapid
reionisation by rarer brighter galaxies, producing larger bubbles at fixed
neutral fraction. We demonstrate that recently observed z~7 overdensities are
highly likely to trace large ionised bubbles, corroborated by their high
Ly$\alpha$ detection rates. However, the z~8.7 association of Ly$\alpha$
emitters in EGS and GN-z11, with Ly$\alpha$ at z=10.6, are unlikely to trace
large bubbles in our fiducial model -- 11% and 7% probability of >1 proper Mpc
bubbles, respectively. Ly$\alpha$ detections at such high redshifts could be
explained by: a less neutral IGM than previously expected; larger ionised
regions at fixed neutral fraction; or if intrinsic Ly$\alpha$ flux is unusually
strong in these galaxies. We discuss how to test these scenarios with JWST and
the prospects for using upcoming wide-area surveys to distinguish between
reionisation models. | Ting-Yi Lu, Charlotte Mason, Anne Hutter, Andrei Mesinger, Yuxiang Qin, Daniel P. Stark, Ryan Endsley | 2023-04-21T18:01:47Z | http://arxiv.org/abs/2304.11192v1 | # The reionising bubble size distribution around galaxies
###### Abstract
Constraining when and how reionisation began is pivotal for understanding when the first galaxies formed. Lyman-alpha (Ly\(\alpha\)) emission from galaxies is currently our most promising probe of these early stages. At \(z>7\) the majority of galaxies detected with Ly\(\alpha\) are in candidate overdensities. Here we quantify the probability of these galaxies residing in large ionised bubbles. We create (1.6 Gpc)\({}^{3}\) reionising intergalactic medium (IGM) simulations, providing sufficient volume to robustly measure bubble size distributions around UV-bright galaxies and rare overdensities. We find \(M_{\rm{UV}}\lesssim-16\) galaxies and overdensities are \(\gtrsim 10-1000\times\) more likely to trace ionised bubbles compared to randomly selected positions. The brightest galaxies and strongest overdensities have bubble size distributions with highest characteristic size and least scatter. We compare two models: gradual reionisation driven by numerous UV-faint galaxies versus more rapid reionisation by rarer brighter galaxies, producing larger bubbles at fixed neutral fraction. We demonstrate that recently observed \(z\sim 7\) overdensities are highly likely to trace large ionised bubbles, corroborated by their high Ly\(\alpha\) detection rates. However, the \(z\approx 8.7\) association of Ly\(\alpha\) emitters in EGS and GN-z11, with Ly\(\alpha\) at \(z=10.6\), are unlikely to trace large bubbles in our fiducial model - 11% and 7% probability of \(>1\) proper Mpc bubbles, respectively. Ly\(\alpha\) detections at such high redshifts could be explained by: a less neutral IGM than previously expected; larger ionised regions at fixed neutral fraction; or if intrinsic Ly\(\alpha\) flux is unusually strong in these galaxies. We discuss how to test these scenarios with JWST and the prospects for using upcoming wide-area surveys to distinguish between reionisation models.
keywords: cosmology: theory - cosmology: dark ages, reionisation, first stars - galaxies: high-redshift; - galaxies: intergalactic medium
## 1 Introduction
The reionisation of intergalactic hydrogen in the Universe's first billion years was likely caused by photons emitted from the first galaxies, and is thus intimately linked to their nature (e.g., Stark, 2016; Dayal and Ferrara, 2018; Mesinger, 2019). Constraining the reionisation process thus enables us to infer properties of these first luminous sources, importantly giving us information about the earliest generations of galaxies which are too faint to observe directly, even with JWST. In the past decade, substantial progress has been made in measuring the timing of the late stages of reionisation. The electron scattering optical depth to the CMB indicates reionisation was on-going at \(z\sim 7-8\) (Planck Collaboration et al., 2020) and the attenuation of Lyman-alpha (Ly\(\alpha\), 1216 A) photons by neutral hydrogen in the intergalactic medium (IGM) in the spectra of \(z\gtrsim 5\) quasars and galaxies implies the IGM was almost entirely ionised by \(z\sim 5.5-6\) (McGreer et al., 2015; Lu et al., 2022; Qin et al., 2021; Bosman et al., 2022) but that the IGM was significantly neutral (volume-averaged neutral fraction \(\overline{x}_{\rm HI}\gtrsim 0.7\)) just \(\sim 300\) Myr earlier at \(z\sim 8\) (e.g., Davies et al., 2018; Hoag et al., 2019; Mason et al., 2019; Bolan et al., 2022).
While we are beginning to reach a consensus on when the end stages of reionisation occurred, we still do not understand _how_ it happened. Which sources drove it and when did it start? The onset of reionisation provides pivotal information about the onset of star formation. Simulations predict the first reionised regions grow around overdensities (e.g., Furlanetto et al., 2004; Zahn et al., 2007; Trac and Cen, 2007; Mesinger and Furlanetto, 2007; Ocvirk et al., 2020; Hutter et al., 2021; Qin et al., 2022), but, while there are strong hints (Castellano et al., 2016; Tilvi et al., 2020; Hu et al., 2021; Endsley and Stark, 2022; Jung et al., 2022; Larson et al., 2022), this is yet to be robustly confirmed observationally. Furthermore, the ionising emission properties of high-redshift sources are still highly uncertain, and, with current constraints on the reionisation timeline alone, there is a degeneracy between reionisation driven by numerous low mass galaxies with low ionising emissivity (e.g. ionising photon escape fraction \(\sim 5\%\)), and rarer bright galaxies with high ionising emissivity (e.g., Greig and Mesinger, 2017; Mason et al., 2019; Finkelstein
et al., 2019; Naidu et al., 2020). However, the clustering strength of the dominant source population has a large impact on the expected size distribution of ionised bubbles (e.g., McQuinn et al., 2007; Mesinger et al., 2016; Hassan et al., 2018; Seiler et al., 2019). Thus, identifying and measuring large ionised regions at early times provides vital information about the reionisation process.
Before we will detect the 21-cm power spectrum (e.g., Pober et al., 2014; Morales and Wyithe, 2010) the most promising tool to study the early stages of reionisation and the morphology of ionised regions is Ly\(\alpha\) emission from galaxies, which is strongly attenuated by neutral hydrogen (e.g., Malhotra and Rhoads, 2006; Stark et al., 2010; Dijkstra, 2014; Mesinger et al., 2015; Mason et al., 2018). If reionisation starts in overdensities we expect a strong increase in the clustering of Ly\(\alpha\)-emitting galaxies (LAEs) in the early stages of reionisation (McQuinn et al., 2007; Sobacchi and Mesinger, 2015; Hutter et al., 2015). Strong evidence of enhanced clustering has not yet been detected in wide-area Ly\(\alpha\) narrow-band surveys (e.g., Ouchi et al., 2017), likely because these surveys have mostly observed at \(z<7\), when the IGM is probably still \(<50\%\) neutral (e.g., Mason et al., 2019; Qin et al., 2021) and thus the clustering signal is expected to be weak (Sobacchi and Mesinger, 2015).
However, spectroscopic studies of \(z>7\) galaxies selected by broad- and narrow-band imaging in smaller fields have yielded tantalizing hints of spatial inhomogeneity in the early stages of reionisation. In particular, an unusual sample of four UV luminous (\(M_{\rm UV}\sim-22\)) galaxies detected in CANDELS (Koekemoer et al., 2011; Grogin et al., 2011) fields (three of which are in the EGS field) with strong Spitzer/IRAC excesses, implying strong rest-frame optical emission, were confirmed with Ly\(\alpha\) emission at \(z\approx 7.1\), \(7.3\), \(7.7\), and \(8.7\) (Zitrin et al., 2015; Oesch et al., 2015; Roberts-Borsani et al., 2016; Stark et al., 2017). Furthermore, Ly\(\alpha\) was recently detected at \(z=10.6\) in the UV-luminous galaxy GN-z11 (Bunker et al., 2023). The high detection rate of Ly\(\alpha\) in these UV bright galaxies is at odds with expectations from lower redshifts, where UV-faint galaxies are typically more likely to show strong Ly\(\alpha\) emission (e.g., Stark et al., 2011; Cassata et al., 2015).
This may imply that these galaxies trace overdensities which reionise early, or that they have enhanced Ly\(\alpha\) emission due to young stellar populations and hard ionising spectra, or, more likely, a combination of these two effects (Stark et al., 2017; Mason et al., 2018; Endsley et al., 2021; Roberts-Borsani et al., 2022; Tang et al., 2023). Photometric follow-up around some of these galaxies has found evidence they reside in regions that are \(\gtrsim 3\times\) overdense (Leonova et al., 2022; Tacchella et al., 2023). Furthermore, spectroscopic follow-up for Ly\(\alpha\) in neighbors of these bright sources has proved remarkably successful: to date, of the \(\sim 30\) galaxies detected with Ly\(\alpha\) emission at \(z>7\), 14 lie within a few physical Mpc of three UV luminous galaxies detected in the CANDELS/EGS field at \(z\approx 7.3\), \(7.7\) and \(8.7\) (Tilvi et al., 2020; Jung et al., 2022; Larson et al., 2022; Tang et al., 2023). Do these galaxies reside in large ionised regions? Due to the high recombination rate at \(z\gtrsim 10\), large ionised regions require sustained star formation over \(\gtrsim 100\) Myr (e.g., Shapiro and Giroux, 1987), thus detection of large ionised regions at early times would imply significant early star formation.
Assessing the likelihood of detecting Ly\(\alpha\) emitting galaxies during reionisation requires knowledge of the expected distribution of ionised bubble sizes around the observed galaxies. Previous work has focused on predicting the size distribution of all ionised regions during reionisation, as is required for forecasting the 21-cm power spectrum (e.g., Furlanetto and Oh, 2005; Mesinger and Furlanetto, 2007; Geil et al., 2016; Lin et al., 2016). However, as galaxies are expected to be biased tracers of the density field (e.g., Adelberger et al., 1998; Overzier et al., 2006; Barone-Nugent et al., 2014), these ionised bubble size distributions likely underestimate the expected ionised bubble sizes around observable galaxies. The 21-cm galaxy cross-power spectrum for different halo masses (e.g. Lidz et al., 2009; Park et al., 2014) reflects the typical ionised region size around different halo masses; however, the size distributions of ionised regions were not discussed in those works. Mesinger and Furlanetto (2008) show the Ly\(\alpha\) damping wing optical depth distributions around galaxies of various masses at \(z\sim 9\), finding the most massive halos have the lowest optical depth with the smallest dispersion, which corresponds to their being hosted by larger bubbles with smaller variance compared to lower-mass halos, though that work did not model the UV magnitude of the halos and only presented optical depths for halos \(M_{h}<2\times 10^{11}\)\(M_{\odot}\). The correlation between galaxy properties and their host ionised bubbles has been explored in some semi-analytic simulations (Mesinger and Furlanetto, 2008; Geil et al., 2017; Yajima et al., 2018; Qin et al., 2022), finding that more luminous galaxies are likely to reside in large ionised bubbles. However, these studies have been restricted to small-volume, (100 Mpc)\({}^{3}\), simulations with only a handful of UV-bright galaxies and overdensities, so Poisson noise is large, or to models of cosmological Strömgren spheres which do not account for the overlap of bubbles (Yajima et al., 2018).
In this paper we create robust predictions for the size distribution of ionised bubbles around observable (\(M_{\rm UV}\lesssim-16\)) galaxies. We create large volume (\(1.6\) cGpc)\({}^{3}\) simulations of the reionising IGM using the semi-numerical code 21cmfast(Mesinger et al., 2011). With these simulations, we can robustly measure the expected bubble size distribution around rare overdensities and UV-bright galaxies (\(M_{\rm UV}\lesssim-22\) or \(M_{\rm halo}\gtrsim 10^{11}M_{\odot}\)) to compare with observations. We assess how likely the observed \(z>7\) associations of Ly\(\alpha\) emitters are to be in large ionised bubbles, finding that while \(z\sim 7\) observations are consistent with our current consensus on the reionisation timeline, Ly\(\alpha\) detections at \(z>8\) are very unexpected. We further demonstrate how different reionising source models produce very different predictions for the bubble size distribution at any neutral fraction. We discuss the prospect of using upcoming wide-area surveys to distinguish the reionising source models based on our bubble size distribution predictions by observing a large number of overdensities to chart the growth of the first ionised regions.
This paper is structured as follows: we describe our simulations in Section 2, we present our results on the bubble size distributions as a function of galaxy luminosity and overdensity, and compare with observations in Section 3. We discuss our results in Section 4 and conclude in Section 5. We assume a flat \(\Lambda\)CDM cosmology with \(\Omega_{m}=0.31\), \(\Omega_{\Lambda}=0.69\), \(h=0.68\) and magnitudes are in the AB system.
## 2 Methods
In the following sections, we describe our simulation setup and analysis framework. In Section 2.1 we describe our reionisation simulations. In Section 2.2 we describe how we populate simulated halos with galaxy properties and in Section 2.3 we describe how we measure the ionised bubble size distribution using the mean free path method and the watershed algorithm.
### Reionisation simulations
To study the link between galaxy environment and reionisation, we use the semi-numerical cosmological simulation code, 21cmfast
v21 (Mesinger et al., 2011; Sobacchi & Mesinger, 2014; Mesinger et al., 2016). 21cmfast first creates a 3D linear density field at high redshift, which is evolved to the redshift of interest using linear theory and the Zel'dovich approximation. The ionisation field (and other reionisation observables such as 21cm brightness temperature) is then generated using an excursion-set theory approach assuming an ionisation-density relation and a given reionisation model. In this way, 21cmfast can quickly simulate reionisation on large scales (\(>100\) Mpc), with a simple, flexible model for the properties of reionising galaxies.
Here we briefly describe the creation of ionisation boxes before proceeding to our simulation setups, and refer the reader to Mesinger et al. (2011); Mesinger et al. (2016) for more details. For a density box at redshift, \(z\), a cell (at position \(\mathbf{x}\)) is flagged as ionised if
\[\xi_{\rm coll}(\mathbf{x},z,R,\overline{M}_{\rm min})\geq 1+\overline{n}_{\rm rec }(\mathbf{x},z,R), \tag{1}\]
where \(f_{\rm coll}(\mathbf{x},z,R,\overline{M}_{\rm min})\) is the fraction of a collapsed matter residing in halos larger than a minimum halo mass, \(\overline{M}_{\rm min}\), inside a sphere of radius \(R\), and \(\overline{n}_{\rm rec}\) is the average cumulative number of recombinations. \(\zeta\) is an ionising efficiency parameter:
\[\zeta=20\left(\frac{N_{\gamma}}{4000}\right)\left(\frac{f_{\rm esc}}{0.1} \right)\left(\frac{f_{\rm s}}{0.05}\right)\left(\frac{f_{\rm h}}{1}\right), \tag{2}\]
where \(N_{\gamma}\) and \(f_{\rm esc}\) are the input reionisation parameters, the number of ionising photons per stellar baryon, and the ionising photon escape fraction, respectively. \(f_{\rm s}\) is the fraction of galactic gas in stars. \(f_{\rm h}\) is the fraction of baryons inside the galaxy. While we expect a variation of these parameters with halo mass and/or time (see e.g., Wise et al., 2014; Kimm & Cen, 2014; Xu et al., 2016; Trebitsch et al., 2017; Lewis et al., 2020; Ma et al., 2020), simply changing \(\zeta\) and \(M_{\rm min}\) can encompass a broad range of scenarios for reionising source models and thus produce different reionising bubble morphologies (e.g., Mesinger et al., 2016). High values of \(\zeta\) and \(M_{\rm min}\) correspond to reionisation dominated by rare, massive galaxies, which require a larger output of ionising photons to produce a reionisation timeline consistent with observations, while low \(\zeta\) and \(M_{\rm min}\) values correspond to reionisation driven by numerous faint galaxies with weaker ionising emissivity.
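Since Equation 2 is a simple rescaling of fiducial parameter values, its evaluation is one line; the sketch below uses the parameter names from the text.

```python
def ionising_efficiency(n_gamma: float = 4000.0, f_esc: float = 0.1,
                        f_s: float = 0.05, f_h: float = 1.0) -> float:
    """Equation 2: zeta = 20 (N_gamma/4000)(f_esc/0.1)(f_s/0.05)(f_h/1)."""
    return 20.0 * (n_gamma / 4000.0) * (f_esc / 0.1) * (f_s / 0.05) * (f_h / 1.0)

print(ionising_efficiency())           # fiducial values give zeta = 20
print(ionising_efficiency(f_esc=0.2))  # doubling f_esc doubles zeta
```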
In this paper, we simulate large-scale boxes of dark matter halos and the IGM ionisation field in order to produce robust bubble size distributions as a function of galaxy properties with minimal Poisson noise, using two different reionising source models. We produce (1600 cMpc)\({}^{3}\) coeval boxes at \(z=[7,8,9,10]\), with a grid of \(1024^{3}\) cells, resulting in a resolution of \(\sim 1.6\) cMpc. We generate a catalogue of dark matter halos from the density fields associated with these boxes using extended Press-Schechter theory (Sheth et al., 2001) and a halo-filtering method (see Mesinger & Furlanetto 2007 for a full description), which allows us to generate halos with an accurate halo mass function down to \(M_{h}\gtrsim 10^{8}\,M_{\odot}\). We use identical initial conditions (and thus density field and halo catalogue at each redshift) for all of our models, so in our analysis below we can isolate the impact of the reionisation source model on the bubble size distribution in different galaxy environments.
Footnote 1: [https://github.com/andreimesinger/21cmFAST](https://github.com/andreimesinger/21cmFAST)
We create ionisation boxes spanning \(\overline{x}_{\rm HI}=0.1-0.9\) (\(\Delta\overline{x}_{\rm HI}=0.1\)), using Equation 1, for two reionising source models, similar to the approach of Mesinger et al. (2016), which span the plausible range expected by early galaxies:
1. _Gradual_: Reionisation driven by faint, low mass galaxies down to the atomic cooling limit (\(M_{\rm min}=5\times 10^{8}M_{\odot}\), \(M_{\rm UV}\lesssim-11.0\)). Reionisation driven by numerous faint galaxies leads to a gradual reionisation process, where the IGM can begin to reionise very early. We show in Figure 1 that the ionised regions in this model start slowly and gradually grow and overlap. We use this as our fiducial model.
Figure 1: Slices from our simulations at \(\overline{x}_{\rm HI}=0.2,0.5,0.7,0.9\) for _Gradual_ (upper panel) and _Rapid_ (lower panel). White regions show ionised gas and black regions show neutral gas. We show 1.5 cMpc slices in a \(300\times 300\) cMpc region of our (1.6 cGpc)\({}^{3}\) coeval cubes. We plot galaxies in this slice, color-coded by \(M_{\rm UV}\), in the leftmost column. Here we only show galaxies with \(-22\leq M_{\rm UV}\leq-16\) for demonstration purposes.
2. _Rapid_: Reionisation driven by rarer bright galaxies (\(M_{\rm min}=10^{10}M_{\odot}\), \(M_{\rm UV}\lesssim-19.5\)). As massive galaxies take more time to assemble, reionisation starts later and the morphology is characterised by rarer, larger ionised regions at fixed neutral fractions.
For each model, at each redshift, we vary \(\zeta\) so as to compare different reionisation morphologies at the same \(\overline{x}_{\rm HI}\). In the end, we create a total of 72 simulations: 4 (redshift) \(\times 2\) (reionisation model) \(\times 9\) (\(\overline{x}_{\rm HI}\)) ionisation boxes, and 4 (redshift) halo catalogues. In addition, in Sec. 3.4, to compare our simulations with observations, we expand the \(\overline{x}_{\rm HI}\) range at the high-\(\overline{x}_{\rm HI}\) end to \(\overline{x}_{\rm HI}=[0.85,0.90,0.95]\) at \(z=9\) for the two models.
Example slices of the ionisation field from the two sets of simulations are shown in Figure 1. They clearly show that the _Rapid_ model has larger, rarer bubbles compared to the _Gradual_ model at fixed \(\overline{x}_{\rm HI}\). Underdense regions are more likely to be ionised in the _Gradual_ model. This is because in the _Gradual_ model faint galaxies, which live in a wider range of densities, are able to ionise the IGM, while in the _Rapid_ model only bright, more massive galaxies, which predominantly live in overdensities, can ionise the IGM.
Figure 2 shows potential reionisation timelines of the two reionisation models, for demonstration purposes only. To produce example reionisation histories for our two models we follow the standard procedure (e.g., Robertson, 2010) and generate an ionising emissivity from the product of the halo mass density, integrated down to the two mass limits described above, and an ionising efficiency, \(\zeta\). We alter \(\zeta\) for both models to fix the redshift of the end of reionisation to \(z\sim 6\). The _Gradual_ model has an earlier onset of reionisation and slower redshift evolution of \(\overline{x}_{\rm HI}\) compared to the _Rapid_ model. We note that as we use coeval boxes we do not assume a model reionisation history in this work; rather we use the non-parametric reionisation timeline inferred by Mason et al. (2019) from independent constraints on the IGM neutral fraction, including the Ly\(\alpha\) equivalent width distribution (Mason et al., 2018, 2019; Hoag et al., 2019), Ly\(\alpha\) emitter clustering (Sobacchi & Mesinger, 2015), Ly\(\alpha\) forest dark pixel fraction (McGreer et al., 2015), and QSO damping wings (Davies et al., 2018; Greig et al., 2019), and the Planck Collaboration et al. (2020) electron scattering optical depth.
### Galaxy population model
To populate halos with realistic galaxy properties we use a conditional UV luminosity to halo mass relation, to assign UV luminosities, with intrinsic scatter, to our halo catalogue. We follow Ren et al. (2019) and assume UV magnitudes at a given halo mass are drawn from a Gaussian distribution with dispersion \(\sigma\) and median \(M_{\rm UV,c}\) (\(M_{\rm h}\), \(\sigma\), \(z\)):
\[p(M_{\rm UV}\mid M_{\rm h})=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(\frac{-[M_{ \rm UV}-M_{\rm UV,c}(M_{\rm h},\sigma,z)]^{2}}{2\sigma^{2}}\right). \tag{3}\]
The dispersion was originally introduced to explain scatter in the Tully-Fisher relation (Yang et al., 2005). It is a free parameter in our model, and following Whitler et al. (2020) we assume \(\sigma=0.5\) mag. Ren et al. (2019) found that this value is consistent with observed luminosity functions over \(z\sim 6-10\), and this value is also consistent with the expected variance due to halo assembly times (Ren et al., 2018; Mason et al., 2023). Whitler et al. (2020) found that this scatter has only a minor impact on the transmission of Ly\(\alpha\) from galaxies in the reionising IGM, so we do not expect it to significantly change the relationship between galaxy luminosity and the size of the ionised bubbles they reside in.
The median relation \(M_{\rm UV,c}(M_{\rm h},\sigma,z)\) is set by calibration to the UV luminosity function. Ren et al. (2019) showed that above \(M_{h}\gtrsim 10^{12}M_{\odot}\) a flattening is required in \(M_{\rm UV,c}(M_{\rm h})\) to maintain consistency with the observed UV LFs - which can be thought of as a critical mass or luminosity threshold for star formation. Given that our halo catalogue contains only a small number (0.001% of the total catalogue) of \(>10^{12}M_{\odot}\) halos at \(z\sim 7\), and far fewer at \(z>7\) due to the steepness of the halo mass function, we do not consider this flattening. We thus use the \(M_{\rm UV,c}(M_{\rm h},z)\) relations from the Mason et al. (2015) UV luminosity function model as the median UV magnitudes for Equation 3. Our resulting luminosity functions are consistent with \(z\sim 7-10\) observations over the range where observations are currently magnitude complete: \(-22\lesssim M_{\rm UV}\lesssim-17\) (e.g., Bouwens et al., 2021, see Appendix A).
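A minimal sketch of drawing UV magnitudes via Equation 3; the toy median relation below is an illustrative stand-in for the Mason et al. (2015) calibration, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_muv(m_halo: np.ndarray, muv_central, sigma: float = 0.5) -> np.ndarray:
    """Draw M_UV ~ Normal(M_UV,c(M_h), sigma) per Equation 3.
    `muv_central` stands in for the calibrated median relation."""
    return rng.normal(loc=muv_central(m_halo), scale=sigma)

# Toy stand-in relation for illustration only (roughly -2.5 mag per dex in mass):
toy_relation = lambda mh: -18.0 - 2.5 * (np.log10(mh) - 10.5)
print(sample_muv(np.array([1e10, 1e11, 1e12]), toy_relation))
```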
### Measuring bubble sizes
We measure the size of ionised regions, \(R_{\rm ion}\), using both the mean-free-path (MFP) method (Mesinger & Furlanetto, 2007) and the watershed algorithm (Vincent & Soille, 1991), an image segmentation algorithm which was first applied to reionisation simulations by Lin et al. (2016).
Lin et al. (2016) tested a range of approaches for estimating the sizes of ionised bubbles in simulations and determined these two methods were optimal compared to other techniques in the literature because they most accurately recover input ionised bubble size distributions, can account for overlapping bubbles, and produce sizes corresponding to a physically intuitive quantity. Other commonly used approaches for modelling the bubble size distribution, i.e. the excursion set formulation (Furlanetto et al., 2004b; Furlanetto & Oh, 2005) or approaches which grow cosmological Strömgren spheres around halos (e.g., Yajima et al., 2018), will underestimate the largest bubble sizes because these approaches do not include the effect of overlapping bubbles.
Figure 2: Example reionisation timelines for the _Gradual_ model (solid line) and the _Rapid_ model (dashed line), for demonstration purposes. Different symbols are neutral fractions constrained by Ly\(\alpha\) equivalent width (stars, Mason et al., 2018, 2019; Bolan et al., 2022), Ly\(\alpha\) emitter clustering (squares, Sobacchi & Mesinger, 2015), Ly\(\alpha\) forest dark pixel fraction (circles, McGreer et al., 2015), and QSO damping wing (diamonds, Davies et al., 2018; Greig et al., 2019) observations. The grey line with shaded region is the reionisation timeline and its 16-84 percentile range inferred using the aforementioned observations (Mason et al., 2019). In the following we will use this grey posterior for \(\overline{x}_{\rm HI}\) when comparing to observations as a function of redshift; the _Rapid_ and _Gradual_ models are shown just to illustrate how these models differ when the ionisation efficiency is fixed (see Section 2.1).
Here we describe these two methods, and their advantages and limitations. We will discuss how our resulting bubble size distributions compare to works using other methods in Section 4.1.
#### 2.3.1 Mean free path (MFP)
This method was first used to measure ionised bubble sizes by Mesinger & Furlanetto (2007). It is essentially a Monte-Carlo ray-tracing algorithm, which enables us to measure a probability distribution for ionised bubble sizes by estimating the distance photons travel before they encounter neutral gas. We randomly choose a starting position (or the position of a galaxy, as described later); if the cell is fully ionised, we measure the distance from that position, along a random direction, to where we encounter the first neutral or partially ionised cell. Given our simulation resolution, the smallest bubble size we can measure is \(\sim 1\,\mathrm{cMpc}\). If the position is neutral, we set \(R_{\mathrm{ion}}=0\,\mathrm{cMpc}\). We measure bubble sizes over the full simulation volume by sampling the distance to neutral gas from \(10^{5}\) random positions and sightlines to build bubble size distributions for our simulations.
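A minimal sketch of this Monte-Carlo procedure on a binary ionisation cube; the periodic boundaries and half-cell ray steps are our simplifying assumptions.

```python
import numpy as np

def mfp_bubble_sizes(xhi: np.ndarray, n_rays: int = 100_000,
                     cell_mpc: float = 1.6, rng=None) -> np.ndarray:
    """Monte-Carlo mean-free-path bubble sizes in a binary neutral-fraction
    cube `xhi` (1 = neutral, 0 = ionised), assumed cubic with periodic
    boundaries. Returns one distance (in cMpc) per ray; rays starting in
    neutral cells get R_ion = 0."""
    rng = rng or np.random.default_rng(0)
    n = xhi.shape[0]
    sizes = np.zeros(n_rays)
    for i in range(n_rays):
        pos = rng.uniform(0, n, size=3)
        if xhi[tuple(pos.astype(int) % n)] > 0:   # started in neutral gas
            continue
        direc = rng.normal(size=3)
        direc /= np.linalg.norm(direc)            # isotropic random direction
        d = 0.0
        while d < n:                              # cap rays at the box size
            d += 0.5                              # half-cell steps
            cell = tuple((pos + d * direc).astype(int) % n)
            if xhi[cell] > 0:                     # hit neutral gas
                break
        sizes[i] = d * cell_mpc
    return sizes
```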
In Section 3.1 we will show the bubble size distribution as a function of galaxy \(M_{\mathrm{UV}}\), to estimate the sizes of ionised bubbles around observable galaxies. For this, we use the mean free path method as defined above, but start our measurements at the position of each galaxy in the simulation box.
We will also measure the bubble size distribution as a function of galaxy overdensity to compare with current observations. Galaxy overdensity depends on the dark matter density of the underlying field (e.g. Cole & Kaiser, 1989; Mo et al., 1997; Sheth et al., 2001): \(n=\overline{n}(1+b\delta)\), where \(n\) is the number density of galaxies observed in a field, \(\overline{n}\) is the mean cosmic number density, \(b\) is the bias, and \(\delta\) is the dark matter density in the field. Since 21cmfast populates halos and calculates \(x_{\mathrm{HI}}\) based on galaxy number density via the excursion set formulation (Furlanetto et al., 2004b), we expect a strong relation between bubble size and galaxy overdensity (e.g., Mesinger & Furlanetto, 2007).
We define the observed overdensity, \(N/\langle N\rangle\), as the number, \(N\), of galaxies brighter than a given limit in a survey volume relative to the number expected in that volume based on the average in the whole simulation box, \(\langle N\rangle\). To measure overdensity using our galaxy catalogue, described in Section 2.2, for a mock survey, we discard galaxies with \(M_{\rm UV}>M_{\rm UV,lim}\), where \(M_{\rm UV,lim}\) is the UV magnitude limit in an observed overdensity. While the galaxy catalogue and \(\overline{x}_{\rm HI}\) boxes are generated from the same density field, as described in Section 2.1, galaxies are given sub-grid positions; thus to compare the overdensity and \(\overline{x}_{\rm HI}\) fields we convert the resulting galaxy catalogue into a galaxy number count grid of the same size as the \(\overline{x}_{\rm HI}\) grids. Then we convolve the galaxy number count grid with a 3-D kernel of the survey volume and divide the value of each cell by \(\langle N\rangle\), the mean number count per \(\mathrm{cMpc}^{3}\) in the halo box, to obtain the overdensity in each cell.
Cells in the resulting overdensity box correspond to positions with a given overdensity above the magnitude limit within the volume. We can then carry out a procedure analogous to that described above, starting rays at positions of a given overdensity, to find the bubble size distribution as a function of overdensity with the mean free path method.
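Because the count in a survey window divided by the expected count equals the window mean divided by the global mean, the overdensity cube can be built with a single smoothing pass; the sketch below assumes a cubic top-hat kernel in place of the true survey geometry.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def overdensity_grid(counts: np.ndarray, survey_cells: int) -> np.ndarray:
    """N/<N> per cell: local mean galaxy count in the survey window divided
    by the global mean count (periodic boundaries via mode='wrap')."""
    local_mean = uniform_filter(counts.astype(float), size=survey_cells,
                                mode="wrap")
    return local_mean / counts.mean()

counts = np.random.default_rng(1).poisson(0.84, size=(128, 128, 128))
print(overdensity_grid(counts, survey_cells=8).max())
```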
#### 2.3.2 Watershed algorithm
This method was first used to measure ionised bubble sizes in reionisation simulations by Lin et al. (2016). It is an image segmentation algorithm which treats constant values of a scalar field as contour lines corresponding to depth in a topographic map, which it then "floods" to break up the images into separate water basins (Vincent & Soille, 1991).
We use the implementation of the watershed algorithm in scikit-image (van der Walt et al., 2014). We apply the algorithm to 3D binary \(\overline{x}_{\rm HI}\) cubes. We first apply the 'distance transform' to calculate the Euclidean distance, \(d_{i}\), of every point to the nearest neutral region (if the point is neutral then the distance is zero). We invert the distances to 'depths': \(d_{i}\rightarrow-d_{i}\). Centres of bubbles are then local minima in the depth cube and the bubble boundaries are identified by flooding regions starting from the local minima, and marking where regions meet - these are contours of constant depth \(d_{i}\).
As with any image segmentation algorithm, the identification of local minima will lead to over-segmentation, as every local minimum will be marked as a unique bubble, even if it is overlapping with a larger one, thus a threshold must be used to avoid this. We follow the prescription of Lin et al. (2016) and use the 'H-minima transform' to essentially 'fill in' small basins. We identify basins with a relative depth of \(h\) from the local minimum to the bubble boundary and set \(d_{i}\to d_{i}+h\) for these regions, reducing the depth of the local minima. After the H-minima transform, we can again identify bubbles as above and see that large bubbles are correctly identified. This process may remove small isolated bubbles which had depth \(<h\). These can be added back in manually using the initial segmentation cube.
The H-minima threshold \(h\) is a free parameter; we use \(h=2.5\), fixed so that the resulting bubble size distribution is comparable to that obtained with the MFP method above and bubbles do not suffer too much from over-segmentation. We solve for the value of \(h\) by minimising the Kullback-Leibler (KL) divergence (Kullback, 1968) between the watershed bubble size distribution and the MFP bubble size distribution in a (500 \(\mathrm{cMpc}\))\({}^{3}\) sub-volume of our simulation. We obtain a cube with the cells corresponding to unique bubbles labelled. From this we can calculate the volume of each bubble and define its size as the radius of an equal-volume sphere: \(R=(3V/4\pi)^{1/3}\).
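A minimal sketch of this pipeline with scipy and scikit-image; constructing the watershed markers via h_minima is our reading of the H-minima step, and implementation details (e.g. re-inserting suppressed small bubbles) are omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def watershed_bubbles(ionised: np.ndarray, h: float = 2.5,
                      cell_mpc: float = 1.6) -> np.ndarray:
    """Segment a binary ionisation cube (True = ionised) into bubbles and
    return effective radii R = (3V / 4 pi)^(1/3) in cMpc."""
    depth = -ndimage.distance_transform_edt(ionised)           # inverted distances
    # Suppress shallow minima (relative depth < h) to limit over-segmentation,
    # analogous to the H-minima transform described in the text.
    markers, _ = ndimage.label(h_minima(depth, h))
    labels = watershed(depth, markers=markers, mask=ionised)
    volumes = np.bincount(labels.ravel())[1:] * cell_mpc ** 3  # skip background
    return (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)
```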
The watershed algorithm is a more computationally intensive method than the MFP method, and requires some tuning of the \(h\) threshold, so we predominantly use the MFP approach. However, the watershed algorithm has a significant advantage in that it can measure the absolute number of bubbles in a volume. It is also possible to use it to directly connect galaxies and their host bubbles. We will use it in Section 3.5 to make forecasts for the number of large bubbles expected in upcoming wide-area surveys.
## 3 Results
Previous works have focused on simulating the global bubble size distribution, in order to produce predictions for 21-cm experiments (e.g. Furlanetto & Oh, 2005; Mesinger & Furlanetto, 2007; Geil et al., 2016; Lin et al., 2016). Some 21cm-galaxy cross correlation studies (e.g. Lidz et al., 2009; Park et al., 2014) calculate the correlation scales for various halo masses but do not directly calculate the bubble size distribution. Here we focus on the expected bubble size distribution
_around observable galaxies_, which are more biased tracers of the density field and which we therefore expect to trace the largest bubbles.
In Section 3.1 we present the bubble size distribution as a function of galaxy UV luminosity, and in Section 3.2 we show the bubble size distribution as a function of galaxy overdensity. The impact of different reionising source models on the bubble size distribution is discussed in Section 3.3. We demonstrate in Appendix B that our results do not significantly depend on redshift. In Section 3.4 we use our simulations to interpret recent observations of Ly\(\alpha\) emission in overdensities at \(z\ga 7\), and we make predictions for upcoming wide-area observations in Section 3.5.
### Bubble size distribution as a function of UV luminosity
To first order, UV luminosity traces dark matter halo mass and thus density (e.g., Cooray & Milosavljevic 2005; Tempel et al. 2009; Mason et al. 2015). We thus expect the brightest galaxies to reside in the most massive halos in overdense regions, and therefore these galaxies are likely to sit in large bubbles which reionised early.
We quantify this in Figure 3, where we show the size distribution of ionised bubbles around galaxies of a given UV luminosity as a function of the volume-averaged IGM neutral fraction, \(\overline{x}_{\rm HI}\), in our simulations, compared to the bubble size distribution in the full volume. This is essentially the distribution at the mean density, \(\delta=0\). We measure the distribution of bubble sizes in 4 \(M_{\rm UV}\) bins: \(M_{\rm UV}=-16,-18,-20,-22\), with \(\Delta M_{\rm UV}=0.1\). We show our fiducial _Gradual_ simulation but will compare it to the _Rapid_ simulation in Section 3.3.
In contrast to previous literature we also include the fraction of galaxies (or randomly selected pixels for our full volume bubble size distribution) which are in neutral regions in our simulation. We mark these fractions with arrows in Figure 3. These sources may reside in ionised bubbles below our resolution limit (\(\sim 1\,\)cMpc for bubble radius). Including these occurrences in our bubble size distribution leads to important insights about the environments of galaxies as we discuss below. We note that the 'full volume' bubble size distribution excluding neutral cells and those below our resolution limit is equivalent to the bubble size distributions presented in previous literature (e.g., Furlanetto & Oh 2005; Mesinger & Furlanetto 2007).
Figure 3 shows that as \(\overline{x}_{\rm HI}\) decreases, the bubble size distributions shift to higher values, which is expected as ionised regions grow. Compared to the bubble size distribution in the full volume, we see three important features of the bubble size distributions, which we describe below.
First, while the bubble size distribution in the full volume has a high fraction of bubbles with \(R\lesssim 1\,\)cMpc, observable galaxies (\(M_{\rm UV}\lesssim-16\)) are \(>10-1000\times\) more likely to be in bubbles rather than neutral regions. This is because galaxies are biased tracers of the density field and therefore trace ionised regions more closely. At the end stages of reionisation, \(\overline{x}_{\rm HI}\lesssim 0.5\), we find only \(\lesssim 10\%\) of observable galaxies are in small ionised or neutral regions below our resolution limit. This is consistent with the idea of the 'post-overlap' phase of reionisation (e.g., Miralda-Escude et al. 2000), where the majority of galaxies lie within ionised regions and only voids remain to be ionised.
We see a strong trend with UV luminosity, where the brightest galaxies are always least likely to be in small ionised or neutral regions, while UV-faint galaxies have a more bimodal bubble size distribution. The proportion of UV-faint galaxies in small ionised or neutral regions is high early in reionisation, but declines rapidly from \(\sim 60\%\) at \(\overline{x}_{\rm HI}\sim 0.9\) to \(\sim 10\%\) at \(\overline{x}_{\rm HI}\sim 0.5\) for \(M_{\rm UV}\gtrsim-18\) galaxies. This is driven by the clustering properties of the UV-faint galaxies, as we discuss below. This may explain the low detection rate of Ly\(\alpha\) in UV-faint galaxies at \(z\gtrsim 8\) (Hoag et al. 2019; Mason et al. 2019a; Morishita et al. 2022) compared to the higher detection rate in UV-bright galaxies seen by Jung et al. (2022) at the same redshift.
Second, we see that the bubble size distribution around observable galaxies peaks at a similar size for all \(M_{\rm UV}\) bins, which indicates that, on average, these galaxies are in the same bubbles. This peak, corresponding to the mean size of ionised regions, has been described as a 'characteristic' scale, \(R_{\rm char}\) (e.g., Furlanetto & Oh 2005). In the following we refer to the mean size of ionised regions as the characteristic size. We see that the characteristic scale of ionised regions increases by over two orders of magnitude during reionisation\({}^{2}\). However, we do find an increasing characteristic scale as a function of UV luminosity: galaxies brighter than \(M_{\rm UV}\lesssim-20\) are expected to reside in bubbles \(\sim 1.5-2\times\) larger than the characteristic bubble scale in the full volume.
Footnote 2: Note that our characteristic scale is at least an order of magnitude higher than that presented by Furlanetto & Oh (2005) due to our use of the mean free path approximation, which captures the sizes of overlapping bubbles (Lin et al. 2016)
Finally, we see that the width of the bubble size distribution decreases as galaxy UV luminosity increases. This is due to the clustering of galaxies: UV-bright galaxies are more likely to be in overdense regions which will reionise early, whereas UV-faint galaxies can be either 'satellites' in overdense, large ionised regions, or 'field galaxies' in less dense regions which remain neutral for longer (see also Hutter et al. 2017, 2021; Qin et al. 2022). This figure demonstrates that UV-faint galaxies will have very significant sightline variance in their Ly\(\alpha\) optical depth, and highlights the importance of using realistic bubble size distributions for inference of the IGM neutral fraction (see also, e.g. Mesinger & Furlanetto 2008a; Mason et al. 2018b).
### Bubble size distribution as a function of galaxy overdensity
In this section, we investigate the distribution of bubble sizes as a function of galaxy overdensity, \(N/\langle N\rangle\). This distribution should directly reflect how structure formation affects reionisation.
As described in Section 2.3, an observed galaxy overdensity, \(N/\langle N\rangle\), depends on the survey depth and volume. For our investigation here we explore expected overdensities within a medium-deep JWST observation within 1 NIRISS pointing (or \(1/2\) of the NIRCam field-of-view), aiming to simulate observations similar to those obtained by the JWST/NIRISS pure-parallel PASSAGE survey (Malkan et al., 2021). We thus use a survey limiting depth of \(m_{\rm AB}=28\) and area 4.84 sq. arcmin with a redshift window of \(\Delta z=0.2\). This corresponds to [\(M_{\rm UV,\,lim}\), \(V_{\rm survey}\)] = [-19, \(2014\,\)cMpc\({}^{3}\)] at \(z=8\pm 0.1\). We follow the procedure described in Section 2.3 to create a cube of \(N/\langle N\rangle\) using these survey parameters, and then select 200,000 cells to measure the bubble size distribution as a function of overdensity.
Footnote 3: Due to the sampling variance and slightly different binning, the bubble size distributions for the full volume here and in Section 3.1 are slightly different.
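For reference, the quoted survey volume follows from the adopted cosmology; the sketch below uses astropy with the pencil-beam approximation \(V=\Omega\,[D_{C}(z_{2})^{3}-D_{C}(z_{1})^{3}]/3\), which is our simplification.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=68, Om0=0.31)  # cosmology adopted in this paper

def survey_volume(area_arcmin2: float, z_lo: float, z_hi: float) -> u.Quantity:
    """Comoving volume of a pencil-beam survey: solid angle times the
    difference of comoving-distance cubes over 3."""
    omega = (area_arcmin2 * u.arcmin**2).to(u.sr)
    dc_lo, dc_hi = cosmo.comoving_distance([z_lo, z_hi])
    return (omega.value / 3.0) * (dc_hi**3 - dc_lo**3)

print(survey_volume(4.84, 7.9, 8.1))  # ~2000 cMpc^3, close to the quoted value
```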
Figure 4 shows the bubble size distributions for \(N/\langle N\rangle\approx 5\), \(N/\langle N\rangle\approx 10\), and \(N/\langle N\rangle\ga 15\), along with the bubble size distribution in the full volume as a function of \(\overline{x}_{\rm{H}}\), for the _Gradual_ model. As in Section 3.1 we see the clear trend that the bubble size distributions increase to higher values as the universe reionises, but we can now identify where the reionisation process begins. We can see that the most overdense regions reionise first and inhabit the largest ionised
Figure 4 shows that the most overdense regions reionise first and inhabit the largest ionised bubbles. As in Section 3.1, we investigate three clear trends in the bubble size distribution as a function of galaxy overdensity.
First, overdense regions start and finish carving out ionised bubbles earlier compared to regions at the mean density. We see a much larger proportion of overdense regions already in \(R_{\rm ion}>1\,\)cMpc bubbles early in reionisation. We find at \(\overline{x}_{\rm HI}=0.9\), when only 10% of the total IGM volume is ionised, \(\gtrsim 30\%\) of the \(N/\langle N\rangle\geq 10\) regions are already in \(R_{\rm ion}>1\,\)cMpc bubbles. By \(\overline{x}_{\rm HI}=0.5\), all of the \(N/\langle N\rangle\geq 10\) regions are in \(R_{\rm ion}>1\,\)cMpc bubbles. This demonstrates that early in reionisation, we expect only the strongest overdensities to trace large ionised regions.
Second, ionised bubbles around overdense regions are larger than the characteristic bubble size in the full volume, particularly in the early stages of reionisation. At \(\overline{x}_{\rm HI}=0.8\), the characteristic bubble size around \(N/\langle N\rangle\geq 10\) regions is \(R_{\rm ion}\sim 10\,\)cMpc, which is \(\sim 2\times\) larger than the mean bubble size in the full volume at that time, and large enough for significant Ly\(\alpha\) transmission (e.g., Miralda-Escude, 1998; Mason and Gronke, 2020; Qin et al., 2022). Detection of Ly\(\alpha\) in a highly neutral universe is thus not unexpected if the LAEs are in highly overdense regions.
The mean bubble size of overdense regions grows more slowly than that of less overdense regions. In the early stages of reionisation, bubbles around the most overdense regions grow in isolation and do not merge with similarly sized bubbles because the most overdense regions are far away from each other. By contrast, bubbles created by less overdense regions are more likely to grow rapidly by merging with other bubbles.
Finally, again we see the bubble size distributions are broad, but that the strong overdensities have the narrowest distribution of bubble sizes because they are guaranteed to trace ionised environments, whereas less dense regions can be isolated, and therefore in smaller bubbles, or contained within large scale overdensities in large bubbles.
### Bubble size distribution as a function of reionising source model
In the previous sections we have shown the bubble size distribution using only our fiducial _Gradual_, faint-galaxies-driven reionisation model. Here we demonstrate how the bubble size distribution changes if instead reionisation is driven by rarer, brighter galaxies in our _Rapid_ model. We show the bubble size distributions for the two models as a function of the IGM neutral fraction in Figures 5 and 6 for galaxies of given \(M_{\rm UV}\) and galaxy overdensities.
Both models have qualitatively similar bubble size distributions, but the _Rapid_ model predicts much larger bubble sizes at fixed neutral fraction, particularly at the earliest stages of reionisation. A key prediction of the _Rapid_ model is the existence of large (\(\sim 30-100\,\)cMpc) bubbles at the earliest stages of reionisation, \(\overline{x}_{\rm HI}\sim 0.9\), needed to fill the same volume with ionised hydrogen around the more biased ionising sources.
First, galaxies in the _Gradual_ model are more likely to reside in neutral IGM at the beginning of reionisation, compared to galaxies in the _Rapid_ model. At \(\overline{x}_{\rm HI}=0.9\), \(\sim 80\%\) of the \(M_{\rm UV}=-16\) galaxies have bubble sizes no greater than 1 cMpc in the _Gradual_ model. By contrast, in the _Rapid_ model, only \(\sim 60\%\) of \(M_{\rm UV}=-16\) galaxies are in such neutral regions at the same \(\overline{x}_{\rm HI}\). At the mid-point of reionisation (\(\overline{x}_{\rm HI}=0.5\)), UV-faint galaxies (\(M_{\rm UV}=-16\)) in the _Gradual_ model (\(\sim 9\%\)) are half as likely to be in small ionised/neutral regions compared to UV-faint galaxies in the _Rapid_ model (\(\sim 20\%\)). This is because the early ionised regions in the _Rapid_ model are concentrated around the most overdense regions, compared to a more uniform coverage of bubbles seen in the _Gradual_ model (see Figure 1). In the _Rapid_ model isolated faint galaxies cannot create \(R_{\rm ion}>1\,\)cMpc bubbles around themselves, because reionisation is dominated by \(M_{\rm UV}\lesssim-19.5\) galaxies in this model. Therefore isolated faint galaxies remain in \(R_{\rm ion}<1\,\)cMpc bubbles even at the mid-point of reionisation.
Figure 3: Bubble size distributions as a function of UV luminosity for \(M_{\rm UV}=-16,-18,-20,-22\). We also show the bubble size distribution from the full volume as a thick grey line in each simulation. The fractions of galaxies in \(R<0.8\,\)cMpc bubbles (below our resolution limit) or neutral cells are marked with arrows. Each panel shows a different volume-averaged IGM neutral fraction, \(\overline{x}_{\rm HI}\). As the neutral fraction decreases, the bubble size distributions shift to higher values, as expected as bubbles grow as reionisation progresses. With increasing UV luminosity, the probability that a galaxy resides in big bubbles increases.
Second, galaxies in the _Rapid_ model blow out big ionised bubbles early in reionisation. However, the bubble sizes do not grow as rapidly as those in the _Gradual_ model. At the beginning of reionisation, the characteristic bubble size in the _Rapid_ model is \(R_{\rm char}\approx 10\,\)Mpc. In the _Gradual_ model, the characteristic bubble size is \(\sim 3\times\) smaller: no more than 3\(\,\)Mpc. By the late stages of reionisation (\(\overline{x}_{\rm HI}=0.1\)), the mean bubble sizes in the _Rapid_ model are \(\sim 300\,\)cMpc. However, in the _Gradual_ model, the mean bubble size has grown twice as rapidly, reaching \(\sim 200\,\)cMpc. The different evolutionary trends reflect the different bubble-merging histories of the two models. In the _Gradual_ scenario, many faint galaxies create small ionised bubbles, which soon merge together to form big bubbles. In the _Rapid_ model, big bubbles form early, but due to the rarity of bright ionising galaxies, bubbles are less likely to merge and immediately double in size compared to those in the _Gradual_ model.
We can see from this comparison that there can be a degeneracy between the _Gradual_ and _Rapid_ models. If we find evidence of a large (\(>10\,\)cMpc) bubble at high redshift, it could be explained by bright-galaxy-driven reionisation at a high neutral fraction, or by faint-galaxy-driven reionisation with a lower neutral fraction. However, independent information on the reionisation history and/or information from the dispersion of bubble sizes along multiple sightlines could break this degeneracy. We discuss this in Section 4.2.
### Interpretation of current observations
In this section, we use our simulations to interpret some recent observations of Ly\(\alpha\) emission at \(z\ga 7\) in candidate overdensities. Here we aim to establish if the enhanced Ly\(\alpha\) visibility in these regions can be explained by the sources tracing an ionised overdensity, and how likely that scenario is given our consensus timeline of reionisation and either of our two reionisation models.
We focus on observations of \(z\ga 7\) Ly\(\alpha\) emission from galaxies in 6 regions in the sky, in candidate overdensities: the COSMOS field at \(z\approx 6.8\)(Endsley & Stark, 2022), BDF field at \(z\approx 7.0\)(Vanzella et al., 2011; Castellano et al., 2016, 2018; Castellano et al., 2022), EGS field with two regions at \(z\approx 7.7\) and 8.7 (Oesch et al., 2015; Roberts-Borsani et al., 2016; Tilvi et al., 2020; Leonova et al., 2022; Larson et al., 2022; Jung et al., 2022; Tang et al., 2023), the field behind the galaxy cluster Abell 2744 at \(z\approx 7.9\)(Morishita et al., 2022) and the
Figure 4: Bubble size distributions as a function of galaxy overdensity at \(z=8\pm 0.1\) in a 4.84 arcmin\({}^{2}\) area (\(\sim\)(13 cMpc)\({}^{3}\)) with a survey limit of \(m_{AB}=28\), for \(N/\langle N\rangle\approx 5\), \(10\) and \(\geq 15\), where \(\langle N\rangle=0.84\). We also show the bubble size distribution from the full volume as a thick grey line in each simulation. The fractions of galaxies in \(R<0.8\,\)cMpc bubbles (below our resolution limit) or neutral cells are marked with arrows. Each panel shows a different \(\overline{x}_{\rm HI}\). More overdense regions host larger \(R_{\rm ion}\) early, at high \(\overline{x}_{\rm HI}\). As \(\overline{x}_{\rm HI}\) decreases, \(R_{\rm ion}\) of less overdense regions begins to catch up and ends up having a similar bubble size distribution to that of the most overdense regions. This agrees with the general reionisation picture, that overdense regions are reionised first.
Figure 5: Bubble size distributions as a function of UV luminosity for \(M_{\rm UV}=-16,-22\), for the _Gradual_ (solid lines) and _Rapid_ (dashed lines) reionisation models. We also show the bubble size distribution in the full volume as a thick grey line for each simulation. The fractions of galaxies in \(R<0.8\,\)cMpc bubbles (below our resolution limit) or neutral cells are marked with arrows. Each panel shows a different volume-averaged IGM neutral fraction. We see that the bubble size distributions are broader for the _Rapid_ model than for the _Gradual_ model at \(\overline{x}_{\rm HI}\geq 0.5\). The bubble size distributions of the _Rapid_ model peak at \(R_{\rm ion}\geq 10\,\)cMpc as early as \(\overline{x}_{\rm HI}=0.9\). By contrast the distributions of the _Gradual_ model start with \(R_{\rm ion}\leq 6\,\)cMpc at \(\overline{x}_{\rm HI}=0.9\), and gradually evolve to converge with the _Rapid_ model as the IGM becomes more ionised.
Figure 6: Bubble size distributions as a function of overdensity for \(N/\langle N\rangle=5,\,>15\), for the _Gradual_ (solid lines) and _Rapid_ (dashed lines) reionising source models. We also show the total bubble size distribution as a thick grey line in each simulation. The fractions of galaxies in \(R<0.8\,\)cMpc bubbles (below our resolution limit) or neutral cells are marked with arrows. Each panel shows a different volume-averaged IGM neutral fraction. In the _Rapid_ model we see that the bubble size distribution of \(N/\langle N\rangle>7\) regions already shows little bimodality at \(\overline{x}_{\rm HI}=0.9\). Galaxies in \(N/\langle N\rangle>15\) regions are mostly in bubbles of \(R_{\rm ion}>7\,\)cMpc. In contrast, in the _Gradual_ model even galaxies in \(N/\langle N\rangle>5\) regions are in bubbles of \(R_{\rm ion}<7\,\)cMpc.
area around the \(z=10.6\) galaxy GNz11 (Oesch et al., 2016; Bunker et al., 2023; Tacchella et al., 2023).
To compare with observations at known redshifts, we will switch from comoving to proper distance units. Due to the incompleteness of the observations, we aim here to create bubble size distributions for regions in our simulations that are approximately similar to those observed. Our simulations are coeval boxes at \(z=7,8,9,10\), so in the following we use the box closest in redshift to the observations, but use the observed redshift to fix the assumed IGM neutral fractions and to calculate physical distances. We demonstrate in Appendix B that our bubble size distributions do not depend significantly on redshift.
We create mock observations assuming the same area as the observed overdensities. The regions with only photometric overdensities have large redshift uncertainties; motivated by the \(\Delta z\lesssim 0.2\) redshift separation of the Ly\(\alpha\) emitters in all of these regions, we therefore assume a redshift window of \(\Delta z=0.2\) for our mock observations. This corresponds to \(\sim 5-8\) pMpc at \(z\sim 7-10\). In all cases we assume the observed overdensity of Lyman-break galaxies to be the same in that smaller volume as in the true observed volume. This means we are likely overestimating the true overdensity, in which case our estimated probabilities can be seen as upper limits.
Using these assumed volumes we then use the method described in Section 2.3.1 to convolve our galaxy field with the volume and depth kernel of the observations to create a cube of overdensity matched to each observation setup. We then select regions in the overdensity cube which match the observed overdensity estimates.
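To make this step concrete, the following is a minimal sketch of such an overdensity-matching procedure, assuming the galaxy field is stored as a 3-D count cube on a regular comoving grid; the cell size, kernel dimensions and matching tolerance below are illustrative placeholders, not the values used in our actual pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

CELL = 1.0  # cMpc per grid cell (placeholder)

def overdensity_cube(counts, kernel_cmpc):
    """Smooth a 3-D galaxy count cube with a top-hat kernel matching the
    survey footprint (area x redshift depth) and return N/<N> in every cell."""
    size = tuple(max(1, int(round(k / CELL))) for k in kernel_cmpc)
    n_smooth = uniform_filter(counts.astype(float), size=size, mode="wrap")
    return n_smooth / counts.mean()

def matching_cells(delta, target, tol=0.5):
    """Grid indices of regions whose smoothed overdensity matches the
    observed estimate (e.g. target=3 for an N/<N> ~ 3 field)."""
    return np.argwhere(np.abs(delta - target) < tol)
```

The bubble size distribution for a given observation is then built from the bubbles hosting exactly these matching cells.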
We assess the probability of the observed overdensities lying in ionised regions \(>1\) pMpc in radius, which would allow \(\gtrsim 30\%\) of Ly\(\alpha\) flux to be transmitted at the rest-frame Ly\(\alpha\) line centre (up to \(\sim 50\%\) transmission for emission \(500\) km s\({}^{-1}\) redward of line centre, e.g., Mason and Gronke, 2020; Qin et al., 2022). While the true Ly\(\alpha\) detection rate will depend on the flux limit of the survey and the Ly\(\alpha\) flux emitted by the galaxies before attenuation in the IGM, this threshold gives us a qualitative approach with which to interpret the observations. We defer full forward-modelling of Ly\(\alpha\) observations to a future work. At the redshift of each observation we assume the non-parametric estimate of the IGM neutral fraction, \(\overline{x}_{\rm HI}\), inferred by Mason et al. (2019), described in Section 2.1. We then calculate the final bubble size distribution by marginalising the bubble size distribution at each \(\overline{x}_{\rm HI}\) over the inferred \(\overline{x}_{\rm HI}\) distribution. We also measure the characteristic bubble sizes, \(R_{\rm char}\), for the observed overdensities, which indicate the mean size of ionised regions above our resolution limit around the overdensities.
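In code, this marginalisation amounts to a weighted average; a minimal sketch, assuming \(p(R>1\,\)pMpc\(\,|\,\overline{x}_{\rm HI})\) has been tabulated on a grid and posterior samples of \(\overline{x}_{\rm HI}\) are available (the function and variable names are illustrative):

```python
import numpy as np

def marginalised_probability(xhi_grid, p_given_xhi, xhi_samples):
    """p(R > 1 pMpc) = (1/M) * sum_i p(R > 1 pMpc | x_HI = s_i), averaging
    the tabulated conditional probability over M posterior samples s_i of
    the IGM neutral fraction (xhi_grid must be sorted in increasing order)."""
    return float(np.mean(np.interp(xhi_samples, xhi_grid, p_given_xhi)))
```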
We present a summary of our simulation setups for comparison to these observations, along with the resulting probability of each region residing in a large ionised region and its \(R_{\rm char}\), in Table 1; the probabilities are also shown in Figure 7. The full bubble size distributions are described in Appendix C.
#### 3.4.1 \(z\sim 7\) overdensities in COSMOS and BDF fields
In the COSMOS field, Endsley and Stark (2022) detected Ly\(\alpha\) in 9/10 \(M_{\rm{UV}}\lesssim-20.4,z\approx 6.8\) galaxies in a 140 pMpc\({}^{3}\) volume. Using these spectroscopic confirmations, they estimate the lower limit of the over
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Field & \(z\) & \(N_{\rm LAEs}\) & Minimum \(M_{\rm UV}\) & \(\overline{x}_{\rm HI}\)\({}^{\dagger}\) & Volume\({}^{\ddagger}\) [pMpc\({}^{3}\)] & \(N/\langle N\rangle\) & \(p(R>1\,\)pMpc\()\)\({}^{*}\) (full volume) & \(p(R>1\,\)pMpc\()\)\({}^{*}\) (overdensity) & \(R_{\rm char}\)\({}^{*}\) [pMpc] & Refs. \\ \hline COSMOS & 6.8 & 9 & \(-20.4\) & \(0.44^{+0.09}_{-0.07}\) & 140 & \(>3\) & 0.52 (0.72) & 0.93 (0.99) & 6.4 (11.2) & [1] \\ BDF & 7.0 & 3 & \(-19.5\) & \(0.56^{+0.09}_{-0.08}\) & 53 & \(>3\) & 0.27 (0.57) & 0.84 (0.98) & 3.1 (6.3) & [2-5] \\ EGS & 7.7 & 7 & \(-19.5\) & \(0.76^{+0.08}_{-0.08}\) & 2.6 & \(>3\) & 0.07 (0.26) & 0.39 (0.67) & 1.0 (2.2) & [6-10] \\ Abell 2744 & 7.9 & 0 & \(-17.7\) & \(0.80^{+0.09}_{-0.09}\) & 0.001 & \(>130\) & 0.10 (0.29) & 0.27 (0.53) & 0.5 (1.6) & [11,12] \\ EGS & 8.7 & 2 & \(-19.5\) & \(0.93^{+0.02}_{-0.15}\) & 12 & \(>3\) & 0.01 (0.07) & 0.11 (0.42) & 0.8 (1.6) & [8,9,13,14] \\ GOODS-N & 10.6 & 1 & \(-18.6\) & \(>0.92\) & 2.6 & \(>24\) & 0.004 (0.02) & 0.07 (0.48) & 0.5 (1.1) & [15-17] \\ \hline \end{tabular}
* \({}^{\dagger}\) Using the non-parametric reionisation history posteriors by Mason et al. (2019), including constraints from the CMB optical depth, quasar dark pixel fraction and measurements of the Ly\(\alpha\) damping wing in quasars and galaxies. We calculate \(p(R>1\,\)pMpc) by marginalising \(p(R>1\,\)pMpc) over the \(\overline{x}_{\rm HI}\) posterior at each redshift inferred by Mason et al. (2019). \({}^{\ddagger}\) We assume a redshift window of \(\Delta z=0.2\) except for the COSMOS and Abell 2744 regions, which are spectroscopically confirmed. For those regions we use the volumes estimated by Endsley and Stark (2022) and Morishita et al. (2022) respectively.
* \({}^{*}\) Calculated using _Gradual (Rapid)_. [1] Endsley and Stark (2022), [2] Vanzella et al. (2011), [3] Castellano et al. (2016), [4] Castellano et al. (2018), [5] Castellano et al. (2022), [6] Oesch et al. (2015), [7] Tilvi et al. (2020), [8] Leonova et al. (2022), [9] Tang et al. (2023), [10] Jung et al. (2022), [11] Morishita et al. (2022), [12] Ishigaki et al. (2016), [13] Zitrin et al. (2015b), [14] Larson et al. (2022), [15] Oesch et al. (2016), [16] Bunker et al. (2023), [17] Tacchella et al. (2023).
\end{table}
Table 1: Assumed properties of \(z\gtrsim 7\) associations of Ly\(\alpha\) emitters used in our simulations.
Figure 7: Probability in our models of finding a bubble size \(>1\) pMpc around regions similarly overdense to the observed \(z\gtrsim 7\) associations of Ly\(\alpha\) emitters in our _Gradual_ and _Rapid_ simulations (black lines). Grey lines show the range of probabilities in the full simulation volume. We use the IGM neutral fractions expected at these redshifts (Mason et al., 2019). The plot is discussed in Section 3.4 and a summary of our simulation setup is given in Table 1. The bubble size distributions for all these fields are shown in Figure 11. It is highly likely that the observed \(z\sim 7\) Ly\(\alpha\)-emitting galaxies reside in large ionised bubbles. At \(z\gtrsim 8\), large bubbles are unexpected even in overdensities.
density of this region is \(\gtrsim 3\). They estimate that individual galaxies in this field can create ionised bubbles of \(R_{\rm ion}\sim 0.69-1.13\) pMpc. Taking into account the \(N/\langle N\rangle\sim 3\) overdensity and the ionising contribution from \(M_{\rm UV}<-17\) galaxies, they estimate an ionised bubble radius of \(R_{\rm ion}\sim 3\) pMpc in this volume.
We predict that almost 100% of regions this overdense at \(\overline{x}_{\rm HI}\approx 0.5\) are in \(>1\) pMpc bubbles, and the characteristic bubble size is \(R_{\rm char}=6.4\) pMpc at \(\overline{x}_{\rm HI}\approx 0.4\). The high LAE fraction detected by Endsley & Stark (2022) is thus consistent with this being a typical ionised region in our _Gradual_ model. In the _Rapid_ model, we predict even larger bubble sizes around this overdensity: the characteristic bubble size is \(R_{\rm char}=11.2\) pMpc, thus high Ly\(\alpha\) transmission would also be expected. In both cases we would expect an excess of Ly\(\alpha\) detections in neighbouring UV-faint galaxies.
In the BDF field, Vanzella et al. (2011) and Castellano et al. (2018) detected three \(z\sim 7.0\) LAEs, with \(M_{\rm UV}=[-21.1,-20.4,-20.4]\). Two of the galaxies have a projected separation of only 91.3 pkpc, and the third is 1.9 pMpc away (Castellano et al., 2018). The photometric overdensity of \(z\sim 7\) Lyman-break galaxies within \(\sim 3.86\) arcmin\({}^{2}\) around these galaxies is \(3-4\times\) higher than expected (Castellano et al., 2016). Based on the star formation rate and age of the galaxies, and assuming a uniform IGM \(\overline{x}_{\rm HI}=0.5\) surrounding the sources, Castellano et al. (2018) estimated the individual bubble sizes of the two galaxies at \(\sim 2\) pMpc separation to be \(R_{\rm ion}<0.8\) pMpc. We use a \(56\) arcmin\({}^{2}\) survey area (corresponding to the BDF field, where the sources have an angular separation of 6 arcmin, Vanzella et al., 2011) at \(z=7\pm 0.1\), which is \(>3\times\) overdense (Castellano et al., 2016). We see in Figure 7 that we expect nearly all regions (\(\sim 84\)% in our fiducial _Gradual_ model) with this galaxy overdensity to be inside \(>1\) pMpc ionised bubbles, enabling significant Ly\(\alpha\) escape.
The non-detection of Ly\(\alpha\) with equivalent width \(>25\) A in twelve surrounding UV-faint galaxies in this region may thus be surprising, but could be explained by a number of reasons, as discussed by Castellano et al. (2018). For example, even given the predicted most likely bubble size of 4 pMpc the fraction of transmitted Ly\(\alpha\) flux may be only \(\sim 60\)% for galaxies at the centre of the bubble (Mason & Gronke, 2020), thus with deeper spectroscopy Ly\(\alpha\) may be detected. It could also be possible that infalling neutral gas in this region resonantly scatters Ly\(\alpha\) photons emitted redward of systemic (which look blue in the rest-frame of the infalling gas, e.g., Santos, 2004; Weinberger et al., 2018; Park et al., 2021). As UV-faint galaxies are likely to be low mass, and thus have a lower HI column density in the ISM compared to UV-bright galaxies, they may emit more of their Ly\(\alpha\) close to systemic redshift, making it more easily susceptible to scattering by infalling gas. Measurements of systemic redshifts for the galaxies in this region may help explain the complex Ly\(\alpha\) visibility. Furthermore, given the large photometric redshift uncertainties of the Castellano et al. (2016) sample, it could also be possible that the actual galaxy overdensity associated with the three LAEs is smaller, thus the expected bubble size is smaller and the faint galaxies may not lie in the same bubble as the detected LAEs.
#### 3.4.2 \(z\sim 8\) overdensities in EGS and Abell 2744 fields
The EGS field contains the majority of the \(z>7\) LAEs that have been detected to date (Zitrin et al., 2015; Oesch et al., 2015; Roberts-Borsani et al., 2016; Tilvi et al., 2020; Larson et al., 2022; Jung et al., 2022; Tang et al., 2023). Among these LAEs, Tilvi et al. (2020), Jung et al. (2022) and Tang et al. (2023) have reported a total of \(8\) \(M_{\rm UV}<-20\), \(z\approx 7.7\) LAEs, including the \(M_{\rm UV}=-22\) LAE detected by Oesch et al. (2015), within a circle of radius \(\approx 1\) pMpc. Jung et al. (2022) estimate \(R_{\rm ion}<1.1\) pMpc for the individual galaxies based on the model of Yajima et al. (2018), which relates Ly\(\alpha\) luminosity and bubble size. The photometric overdensity around these LAEs has been estimated to be \(N/\langle N\rangle\sim 3-5\) (Leonova et al., 2022).
We calculate the bubble size distributions using a setup similar to the results of Leonova et al. (2022): an area of 4.5 arcmin\({}^{2}\) at \(z=7.7\pm 0.1\), with a limiting UV magnitude \(M_{\rm UV}>-19.5\). The result is shown in Figure 7, assuming the neutral fraction \(\overline{x}_{\rm HI}(z=7.7)=0.76^{+0.05}_{-0.09}\)(Mason et al., 2019). We find \(\sim 40\)% of regions this overdense are in large ionised bubbles in the _Gradual_ model and 70% of regions in the _Rapid_ model. We conclude this region is likely consistent with our consensus picture of reionisation.
In the Abell 2744 field, Morishita et al. (2022) found no Ly\(\alpha\) detections from \(7\) \(z\approx 7.89\), \(M_{\rm UV}>-20\) galaxies. These galaxies are within a circle of radius \(\sim\)60 pkpc. This area is \(N/\langle N\rangle\sim 130\) overdense for galaxies with \(M_{\rm UV}>-17.5\) (Ishigaki et al., 2016). Morishita et al. (2022) estimated bubble sizes of \(R_{\rm ion}\sim 0.07-0.76\) pMpc for the individual galaxies, based on their ionising properties derived from rest-frame optical spectroscopy with NIRSpec. We generate the bubble size distributions for a region of \(>130\times\) overdensity of \(M_{\rm UV}\lesssim-17.5\) galaxies within a volume of (0.9 cMpc)\({}^{3}\). A bubble size of \(R_{\rm ion}\sim 1\) pMpc or larger is unexpected for regions as overdense as this in our _Gradual_ model at \(\overline{x}_{\rm HI}\sim 0.8\): we find \(p(R>1\) pMpc) = 0.27.
The redshifts of the sources in the EGS and Abell 2744 fields are very similar. However, Ly\(\alpha\) has only been detected in the EGS field. We can see in Figure 7 and Table 1 that our predicted bubble size distributions for EGS are shifted towards larger bubble sizes than those for Abell 2744. Although the Abell 2744 region is overdense in UV-faint galaxies, the volume of this region is very small, thus there may not be sufficient ionising emissivity to produce a large-scale ionised region. The non-detection of Ly\(\alpha\) in this overdensity is thus not surprising.
#### 3.4.3 \(z\sim 9-11\) overdensities in EGS and GOODS-N fields
The highest-redshift association of LAEs in the EGS field is a pair at \(z\approx 8.7\) (Zitrin et al., 2015; Larson et al., 2022), whose members lie \(\sim 4\) pMpc apart. The photometric overdensity around these LAEs has been estimated to be \(N/\langle N\rangle\sim 3-5\) (Leonova et al., 2022). We calculate the bubble size distributions using a setup similar to that of Leonova et al. (2022): an area of \(27\) arcmin\({}^{2}\) (corresponding to \(\sim 6\) HST/WFC3 pointings between the two sources) with \(\Delta z=0.2\), and a limiting UV magnitude \(M_{\rm UV}>-19.5\).
At \(z=8.7\) the inferred IGM neutral fraction is \(\overline{x}_{\rm HI}=0.93^{+0.02}_{-0.15}\). We predict the probability of finding LAEs at \(\overline{x}_{\rm HI}\approx 0.93\) should be extremely low: in the full simulation volume in our fiducial _Gradual_ model, we obtain \(p(R>1\) pMpc) = 0.01 and there is \(<0.2\)% probability of finding a bubble with \(R_{\rm ion}>4\) pMpc. Around regions as overdense as that observed we find \(p(R>1\) pMpc) = 0.11. Thus in our fiducial model, we find it is extremely unlikely that the \(z\approx 8.7\) LAE pair in the EGS field are in one large ionised region.
The visibility of Ly\(\alpha\) therefore implies some missing aspect in our understanding of this system. If \(\overline{x}_{\rm HI}\) is lower, there will be a higher chance of finding LAEs: we obtain \(p(R>1\) pMpc) = 0.17 in regions this overdense if \(\overline{x}_{\rm HI}=0.8\), so \(\overline{x}_{\rm HI}\) would need to be substantially lower to find a high probability of large ionised regions. Alternatively, in the _Rapid_ model, \(p(R>1\) pMpc) = 0.42 for such an overdensity, and the bubble size distributions at \(\overline{x}_{\rm HI}=0.8-0.6\) peak at \(R_{\rm ion}\gtrsim 3\) pMpc, so the two LAEs could be in one large ionised bubble. Finally, the Ly\(\alpha\) visibility of these galaxies could be boosted by high intrinsic Ly\(\alpha\) production, as suggested by their other strong emission lines and the potential contribution of AGN (Stark et al., 2017; Tang et al., 2023;
Larson et al., 2023), and facilitated transmission in the IGM if the Ly\(\alpha\) flux is emitted redward of systemic (e.g., Dijkstra et al., 2011; Mason et al., 2018).
Finally, Bunker et al. (2023) have detected Ly\(\alpha\) in GN-z11 at \(z=10.6\), in the GOODS-N field (Oesch et al., 2016). 9 fainter galaxy candidates (\(m_{\rm AB}\approx 29\)) at similar redshift are found within a (10 cMpc)\({}^{2}\) square centred on GN-z11 (Tacchella et al., 2023). We estimate the overdensity of \(m_{\rm AB}<29\) (\(M_{\rm UV}<-18.6\)), \(z=10\pm 0.1\) galaxies in this field using our \(z=10\) UV LF, finding that this region is \(\sim 23\times\) overdense. We obtain \(p(R>1\,\rm pMpc)=0.07\) and \(0.48\) in the _Gradual_ and the _Rapid_ model, respectively. It is thus extremely unlikely, in our fiducial _Gradual_ model, that all of the \(z\sim 11\) galaxies are in one \(R>1\,\)pMpc ionised region that allows significant Ly\(\alpha\) transmission.
We find \(R_{\rm char}=0.5\) and 1.1 pMpc in the _Gradual_ and the _Rapid_ model, respectively. The characteristic bubble size is slightly smaller than the largest distance of galaxies from GN-z11 in this field (\(\sim 0.6\) pMpc) estimated by Tacchella et al. (2023) from photometric redshifts, implying that most of these galaxies could reside in the same (small) ionised region.
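For reference, the overdensity estimate above reduces to comparing the observed galaxy count with the expectation from the UV luminosity function; a minimal sketch is given below, where the Schechter parameters are placeholders rather than our actual \(z=10\) fit:

```python
import numpy as np
from scipy.integrate import quad

def schechter(m_uv, phi_star=1e-4, m_star=-20.0, alpha=-2.0):
    """Schechter UV luminosity function in mag^-1 cMpc^-3 (placeholder fit)."""
    x = 10.0 ** (-0.4 * (m_uv - m_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def overdensity(n_obs, volume_cmpc3, m_lim, m_bright=-24.0):
    """N_obs / <N>, with <N> = V * integral of phi(M_UV) down to the survey
    limit m_lim (e.g. M_UV = -18.6 for m_AB = 29 at z ~ 10)."""
    n_per_vol, _ = quad(schechter, m_bright, m_lim)
    return n_obs / (n_per_vol * volume_cmpc3)
```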
In summary, our simulations demonstrate that the regions discussed above at \(z\sim 7\) are extremely likely (\(>90\)%) to be in large ionised bubbles, given their large estimated overdensities. We also find it likely (\(\gtrsim\,\)40%) that the EGS region at \(z\approx 7.7\) is in a large ionised bubble. However, at higher redshifts we find it very unlikely that the \(z\approx 8.7\) Ly\(\alpha\)-emitters in EGS and the \(z\approx 10.6\) galaxies in GOODS-N, including GNz11, are in large ionised regions (\(\sim 11\)% and \(\sim 7\)% respectively) in our fiducial _Gradual_ model.
If the actual overdensities of these regions are smaller than the photometrically estimated values, it becomes even more unlikely for these galaxies to reside in large ionised bubbles, strengthening our result. These results clearly demonstrate the importance of measuring the IGM neutral fraction at \(z\gtrsim 8\) and distinguishing between reionising source models (e.g., Bruton et al., 2023), and of understanding intrinsic Ly\(\alpha\) production and escape in the ISM of these galaxies (Roberts-Borsani et al., 2022; Tang et al., 2023).
### Forecasts for future observations
Anticipating upcoming large-area surveys at \(z\gtrsim 7\), we make forecasts for the expected number of large bubbles in the JWST COSMOS-Web survey (Casey et al., 2022), the Euclid Deep survey (Euclid Collaboration et al., 2022; van Mierlo et al., 2022), and the Roman High-Latitude Survey (Wang et al., 2022). These surveys will detect tens of thousands of UV-bright \(z>7\) galaxy candidates, which could be used to constrain the underlying density field and pinpoint early nodes of reionisation.
The identification of ionised regions in these large surveys has the potential to distinguish between reionisation models. We sample simulated volumes equivalent to the survey areas (0.6 sq. degrees for COSMOS-Web, 53 sq. degrees for Euclid Deep) at \(z=8.0\) and use the watershed algorithm (Section 2.3.2) to identify individual bubbles in these volumes. As we describe below, the bubble size distribution in a Euclid Deep-like survey volume will suffer minimal cosmic variance, thus our Euclid forecast can be rescaled to a forecast for the Roman High-Latitude Survey (2000 sq. deg). We show in Appendix B that the expected bubble sizes, in comoving units, do not depend strongly on redshift, so our results can easily be shifted to other redshifts without expecting significant differences. As discussed above, to reduce over-segmentation we use the H-minima threshold when calculating the bubble sizes with the watershed algorithm; this sets an effective resolution of 3 cMpc.
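A minimal sketch of this bubble identification step is given below, assuming the ionisation field is available as a boolean cube; seeding the watershed with H-maxima of the distance transform is one standard way to apply such an H-threshold, and the numerical values are illustrative rather than our exact pipeline settings.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def bubble_radii(ionised, cell=1.0, h=3.0):
    """Segment a boolean ionisation cube into individual bubbles and return
    the effective radius of each, assuming sphericity."""
    # Distance of each ionised cell to the nearest neutral cell, in cMpc.
    dist = ndimage.distance_transform_edt(ionised) * cell
    # Suppress maxima shallower than h (in cMpc) to reduce over-segmentation,
    # then use the surviving peaks as watershed seeds.
    seeds, _ = ndimage.label(h_maxima(dist, h))
    labels = watershed(-dist, markers=seeds, mask=ionised)
    volumes = np.bincount(labels.ravel())[1:] * cell**3
    return (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)
```

The "bubble size function" discussed below is then simply the histogram of these radii divided by the survey volume.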
In Figure 8 we plot our predicted 'bubble size function' down to this resolution limit: the number density of ionised bubbles as a function of bubble size, for our _Gradual_ and _Rapid_ models at \(\overline{x}_{\rm HI}=[0.5,0.7,0.9]\). The number of bubbles with \(R\gtrsim 10\,\)cMpc can be considered a proxy for the number of clusters of Ly\(\alpha\)-emitting galaxies, as \(\sim 30-50\)% of Ly\(\alpha\) flux should be transmitted through regions this large (Mason and Gronke, 2020). As the neutral fraction decreases, as above, we expect to find an increasing number of large ionised regions, while the number of small ionised regions decreases as bubbles overlap. Figure 8 again shows the clear difference in the predicted number and size of ionised regions for the different reionisation models, as discussed in Section 3.3. We mark the survey volumes of COSMOS-Web and Euclid Deep, (120 cMpc)\({}^{3}\) and (530 cMpc)\({}^{3}\) at \(z=8\), respectively, as horizontal lines. The survey volume of the Roman High-Latitude Survey (not shown) is (1816 cMpc)\({}^{3}\).
We note that when \(\overline{x}_{\rm HI}\gtrsim 0.7\) we expect a significant fraction of bubbles with \(R\gtrsim 50\,\)cMpc (see Figures 3 and 4). COSMOS-Web is thus unlikely to capture the full extent of large ionised bubbles. Kaur et al. (2020) demonstrated that a simulated volume of \(>\)(250 cMpc)\({}^{3}\) is required for convergence of the 21-cm power spectrum during reionisation, so it is likely that a similar volume must be observed to robustly measure the bubble size distribution; we thus expect that the Euclid Deep and Roman High-Latitude surveys can robustly sample the full bubble size distribution.
However, detecting \(R>10\) cMpc ionised bubbles requires not only a large survey volume, but also sufficient survey depth to detect the high-redshift UV-bright galaxies which signpost large ionised regions. Only the Roman Space Telescope (RST; Akeson et al., 2019) is likely to be able to carry out such bubble counting. Zackrisson et al. (2020) studied the number of galaxies within a \(\rm V_{ion}=1000\) cMpc\({}^{3}\) bubble that can be detected with upcoming photometric surveys with instruments such as Euclid, JWST, and RST. They found that the Euclid Deep survey can barely detect one \(M_{\rm UV}\approx-21\) galaxy in that volume at \(z>7\) given its detection limit, meaning that identifying large overdensities will be challenging. By contrast, a \(\approx 20\) deg\({}^{2}\) deep-field observation by RST could detect \(\sim 10\) \(M_{\rm UV}\lesssim-18.5\) galaxies at \(z=7-10\) in a \(\rm V_{ion}=1000\) cMpc\({}^{3}\) volume. The wide survey volume (\(\approx(400\) cMpc)\({}^{3}\) at \(z\sim 8\pm 0.2\)) and survey depth of an RST deep-field observation will allow us to identify the UV-bright galaxies which trace large ionised regions. Deeper imaging or slitless spectroscopy around the UV-bright sources, for example with JWST, to confirm overdensities, followed by Ly\(\alpha\) spectroscopy of these regions, would enable estimates of the number density of ionised bubbles.
These results demonstrate that counting the number of overdensities of LAEs in a volume can provide a useful estimate of the bubble size distribution, as they will probe ionised regions \(\gtrsim 1\) pMpc, and thus of \(\overline{x}_{\rm HI}\) (especially at \(\overline{x}_{\rm HI}>0.5\)). For example, in our fiducial _Gradual_ model we expect no \(R>10\) cMpc bubbles in the COSMOS-Web volume when \(\overline{x}_{\rm HI}=0.9\). This implies that the detection of clusters of LAEs in this volume at a given redshift would indicate \(\overline{x}_{\rm HI}<0.9\) (or a reionisation morphology similar to our _Rapid_ model). We expect tens of large bubbles in this volume when \(\overline{x}_{\rm HI}<0.7\). However, multiple-sightline observations, for example a counts-in-cells approach, can be a more efficient tool to recover the distribution than a single-area survey (e.g., Mesinger and Furlanetto, 2008).
However, bubble size functions in a COSMOS-Web-like survey volume have high cosmic variance, making it challenging to measure \(\overline{x}_{\rm HI}\) precisely. In Figure 9 we plot the median number of bubbles that can be observed by a COSMOS-Web-like survey using 50 realisations, along with the 16-84 percentile number counts, for our _Gradual_ model. The variance is large enough to make the bubble size functions at \(\overline{x}_{\rm HI}=0.5-0.7\) indistinguishable. We do not plot the variance for the _Rapid_ model for clarity, but when taking that into
account, we cannot discriminate between the bubble size functions of _Gradual_ and _Rapid_ with a COSMOS-Web-like survey.
## 4 Discussion
In the following section we compare our results to those obtained from other simulations (Section 4.1) and discuss the implications of our results for the reionisation history and identifying the primary sources of reionisation (Section 4.2).
### Comparison to other simulations
In this work, we characterise the bubble size distributions around typically observed reionisation-era galaxies for the first time over the full timeline of reionisation. Previously, only the total bubble size distribution had been modelled as a function of the neutral fraction (e.g. Furlanetto & Oh, 2005; Mesinger & Furlanetto, 2007; McQuinn et al., 2007; Seiler et al., 2019).
In principle, the full bubble size distribution measured in this work should agree with previous works using similar reionisation setups. However, as seen in Figure 10, our mean bubble size is significantly larger than the characteristic bubble size modelled by Furlanetto & Oh (2005). Our larger sizes come from our use of the mean-free-path (MFP) method, which is capable of taking into account the sizes of overlapping bubbles. The Furlanetto & Oh (2005) model underestimates the typical bubble size because it calculates bubbles via the excursion set formalism: as found by Lin et al. (2016), this method can underestimate bubble sizes by an order of magnitude. Our mean bubble size is comparable to those in works which use the mean-free-path approximation (e.g., Mesinger & Furlanetto, 2007; Seiler et al., 2019), modulo minor differences due to different assumptions for the ionising source population, as expected from the difference between our _Gradual_ and _Rapid_ models.
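For concreteness, a minimal sketch of the MFP estimator is shown below, following the ray-casting idea of Mesinger & Furlanetto (2007), assuming a periodic boolean ionisation cube and unit-cell ray steps; it is an illustration under those assumptions, not our production code.

```python
import numpy as np

def mfp_bubble_sizes(ionised, n_rays=100_000, cell=1.0, seed=0):
    """Cast rays from random ionised cells along random directions and
    record the distance to the first neutral cell crossed."""
    rng = np.random.default_rng(seed)
    shape = np.array(ionised.shape)
    ion_cells = np.argwhere(ionised)
    starts = ion_cells[rng.integers(len(ion_cells), size=n_rays)].astype(float)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    sizes = np.zeros(n_rays)
    alive = np.ones(n_rays, dtype=bool)
    for step in range(1, int(shape.max())):
        pos = (starts[alive] + step * dirs[alive]) % shape  # periodic box
        hit = ~ionised[tuple(pos.astype(int).T)]
        idx = np.flatnonzero(alive)
        sizes[idx[hit]] = step * cell
        alive[idx[hit]] = False
        if not alive.any():
            break
    return sizes[sizes > 0]
```

Because rays pass freely through merged bubbles, this estimator naturally assigns overlapping ionised regions their full extent, unlike the excursion set prediction.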
Other works have explored the correlation between ionised bub
Figure 8: Number density of ionised bubbles for a range of \(\overline{x}_{\rm HI}\) and for our _Gradual_ (solid) and _Rapid_ (dashed) models, calculated using the watershed algorithm. We show the inverse of the survey volumes of COSMOS-Web ((120 cMpc)\({}^{3}\)) and Euclid Deep ((530 cMpc)\({}^{3}\)) as horizontal lines, marking the number density where one bubble is expected in that volume.
Figure 10: The evolution of ‘characteristic’ bubble sizes as a function of \(\overline{x}_{\rm HI}\) for our simulations compared to previous work. We show the mean size of ionised regions in this work (black) for the _Gradual_ and _Rapid_ models (solid and dashed lines respectively), and the characteristic size of ionised region in Furlanetto & Oh (2005) (blue). As discussed in Section 3.3, characteristic sizes of ionised regions in the _Rapid_ model are much larger than those in the _Gradual_ model at fixed \(\overline{x}_{\rm HI}\). The excursion set formalism used by Furlanetto & Oh (2005) can underestimate the sizes of ionised regions by over an order of magnitude as it does not account for overlapping regions.
Figure 9: Number of ionised bubbles we predict from multiple realisations of a COSMOS-Web-like survey for our _Gradual_ and _Rapid_ models at a range of neutral fractions. The lines show the median number counts and the shaded regions are the 16-84 percentile of the number counts, demonstrating the large cosmic variance in this volume.
ble size and galaxy luminosity. For example, Geil et al. (2017) and Qin et al. (2022) presented results from the DRAGONS simulation (Poole et al., 2016), finding that more luminous galaxies are more likely to reside in large ionised bubbles, and that UV-faint galaxies have a large scatter in their host bubble size, consistent with our results in Section 3.1. However, these works only investigated a single redshift, IGM neutral fraction and reionising source model. Furthermore, the DRAGONS simulation is only (100 cMpc)\({}^{3}\), meaning that it does not contain large numbers of rare overdensities and UV-bright galaxies (it contains only 2 galaxies as bright as GN-z11, Mutch et al., 2016) and thus its predicted bubble sizes around UV-bright galaxies were subject to substantial Poisson noise. Yajima et al. (2018) presented a model for the sizes of ionised bubbles around galaxies by modelling cosmological Strömgren spheres around each galaxy (e.g., Shapiro & Giroux, 1987; Cen & Haiman, 2000), finding that more massive and more highly star-forming (and therefore more luminous) galaxies lie in larger ionised bubbles than low-mass galaxies. However, this model does not take into account the overlapping of ionised regions, which can happen very early during reionisation (e.g., Lin et al., 2016), and thus their bubble sizes will be underestimated.
Our results in Section 3.1 highlight the importance of considering the expected ionised bubble size as a function of \(M_{\rm UV}\) calculated using the MFP method. Previous works which used the characteristic bubble size predicted by Furlanetto & Oh (2005) will thus be underestimating the size of ionised bubbles around observed galaxies at fixed neutral fraction. Jung et al. (2020) estimated the ionised bubble size required to explain the drop in Ly\(\alpha\) transmission in the GOODS-N field at \(z\sim 7.6\), and compared this bubble size to the Furlanetto & Oh (2005) characteristic bubble size as a function of neutral fraction to estimate \(\overline{x}_{\rm HI}\sim 0.49\pm 0.19\). Our results in Figure 10 imply that this approach will lead to an underestimate in \(\overline{x}_{\rm HI}\). This likely explains the discrepancy between the neutral fraction estimated by Jung et al. (2020) and that inferred by Bolan et al. (2022) (\(\overline{x}_{\rm HI}=0.83^{+0.08}_{-0.11}\)) at a similar redshift, which was obtained by sampling sightlines in inhomogeneous IGM simulations.
### Implications for the reionisation history and identification of primary ionising sources
Our results demonstrate that the visibility of Ly\(\alpha\) emission at \(z>8\) is unexpected given our consensus timeline for reionisation, and therefore implies some missing aspect in our understanding of reionisation.
As discussed in Section 3.4 there are three possibilities: (1) \(\overline{x}_{\rm HI}\) is lower than previously inferred; (2) reionisation is dominated by rarer sources providing larger, rarer bubbles; (3) these galaxies have high intrinsic Ly\(\alpha\) production (Stark et al., 2017; Tang et al., 2023) and facilitated transmission in the IGM if the Ly\(\alpha\) flux is emitted redward of systemic (e.g., Dijkstra et al., 2011; Mason et al., 2018). These scenarios should be testable with spectroscopic observations in the fields of high-redshift Ly\(\alpha\) emitters. The most important first step is confirming whether the large regions really are ionised. As the Ly\(\alpha\) damping wing due to nearby neutral gas strongly attenuates Ly\(\alpha\) close to systemic velocity, detecting Ly\(\alpha\) with high escape fraction (estimated from Balmer lines) and very low velocity offset would be a key test to infer if the sources lie in large ionised regions. The \(z>8\) LAEs that have been detected so far have Ly\(\alpha\) velocity offsets \(>300\) km s\({}^{-1}\) (Tang et al., 2023; Bunker et al., 2023), thus the large ionised regions cannot be confirmed, but spectroscopy of the fainter galaxies (which are more likely to emit Ly\(\alpha\) closer to systemic velocity, Prieto-Lyon et al., 2023) in these overdensities could be used to confirm large bubbles. These observations are now possible with JWST/NIRSpec, which, importantly, can also spectroscopically confirm overdensities. Excitingly, recent observations have discovered strong Ly\(\alpha\) at low velocity offsets at \(z>7\), implying large ionised regions (Tang et al., 2023; Saxena et al., 2023), and we will discuss quantitative constraints on the sizes of ionised regions in a future work.
We have also shown that the bubble size distribution around observable galaxies depends on both the average IGM neutral fraction \(\overline{x}_{\rm HI}\) and the reionising source model. As the characteristic bubble size evolves strongly with \(\overline{x}_{\rm HI}\) (Figure 10), we may be able to constrain the reionisation history by simply counting overdensities of LAEs as a function of redshift. Trapp et al. (2022) recently used observed overdensities of LAEs to place joint constraints on the IGM neutral fraction and the underlying matter density of those regions. That work is complementary to our approach in that it demonstrates a strong link between the overdensity of a region and the expected size of the ionised region around it.
In Sections 3.3 and 3.5 we show that the reionising source models have a strong impact on the predicted number of galaxies in large ionised bubbles early in reionisation. Finding evidence for a high number density of large ionised regions (\(\gtrsim 10\) cMpc) at high redshift would thus provide evidence for reionisation driven by rare, bright sources. However, it is clear from our work that characteristic bubble sizes in different reionisation models at different \(\overline{x}_{\rm HI}\) can be degenerate, so focusing purely on observing overdensities likely to reside in ionised regions will not be able to break this degeneracy.
As seen clearly in Figure 1, the _Rapid_ model is characterised by biased, isolated large bubbles, thus it is much more likely that galaxies outside of overdensities will still be in mostly neutral regions early in reionisation in this scenario (see Figure 6). Thus to measure \(\overline{x}_{\rm HI}\) and fully break the degeneracy between reionisation morphologies requires observing a range of environments over time during reionisation. For example, observing the Ly\(\alpha\) transmission from multiple sightlines to galaxies at different redshifts (e.g., Mesinger & Furlanetto, 2008; Mason et al., 2018; Whitler et al., 2020; Bolan et al., 2022) and the 21-cm power spectrum as a function of redshift (e.g., Furlanetto et al., 2004; Geil et al., 2016).
## 5 Conclusions
We have produced large-scale (1.6 Gpc)\({}^{3}\) simulations of the reionising IGM using 21cmfast and explored the size distribution of ionised bubbles around observable galaxies. Our conclusions are as follows:
1. Observable galaxies (\(M_{\rm UV}<-16\)) and galaxy overdensities are much less likely to reside in neutral regions compared to regions at the mean density. This is because galaxies are the source of reionisation.
2. The bubble size distribution around UV-bright (\(M_{\rm UV}<-20\)) galaxies and strong galaxy overdensities is biased to larger characteristic sizes compared to those in the full volume.
3. At all stages of reionisation we find a trend of increasing characteristic host bubble size and decreasing bubble size scatter with increasing UV luminosity and increasing overdensity.
4. As shown by prior works, we find the bubble size distribution strongly depends on both the IGM neutral fraction and the reionising source model. The difference between these models is most apparent in the early stages of reionisation, \(\overline{x}_{\rm HI}>0.5\): if numerous faint galaxies drive reionisation, we expect a gradual reionisation with numerous small bubbles, whereas if bright galaxies drive reionisation we expect a more rapid process characterised by larger bubbles biased around only the most overdense regions, with sizes \(>30\) cMpc even in a 90% neutral IGM.
5. We use our simulations to interpret recent observations of galaxy overdensities detected with and without Ly\(\alpha\) emission at \(z\gtrsim 7\). We find the probability of finding a large ionised region with \(R_{\rm ion}>1\,\)pMpc, capable of transmitting significant Ly\(\alpha\) flux, at \(z\approx 7-8\) is high (\(\gtrsim 40-93\%\)) for large-scale galaxy overdensities, implying that Ly\(\alpha\)-emitting galaxies detected at these redshifts are very likely to be in large ionised regions.
6. We find a very low probability of the \(z\approx 8.7\) association of Ly\(\alpha\) emitters in the EGS field and the \(z=10.6\) galaxy GN-z11, also detected with Ly\(\alpha\) emission, being in a large ionised bubble (\(\sim 11\%\) and \(\sim 7\%\), respectively). The Ly\(\alpha\) detections at such high redshifts could be explained by: a lower neutral fraction (\(\overline{x}_{\rm HI}\lesssim 0.8\)) than previously inferred; reionisation driven by UV-bright galaxies, which would produce larger bubbles; or unusually high intrinsic Ly\(\alpha\) production in these galaxies.
7. We make forecasts for the number density of ionised bubbles as a function of bubble size expected in the JWST COSMOS-Web survey and the Euclid Deep survey. Our fiducial model predicts no ionised regions \(>10\,\)cMpc in the COSMOS-Web volume unless \(\overline{x}_{\rm HI}<0.9\), with tens of large bubbles expected by \(\overline{x}_{\rm HI}<0.7\), though with large cosmic variance. We find that the Euclid and Roman wide-area surveys will have sufficient volume to cover the size distribution of ionised regions with minimal cosmic variance, and should be able to detect the UV-bright galaxies which signpost overdensities. Deeper photometric and spectroscopic follow-up around UV-bright galaxies in these surveys, to confirm overdensities and Ly\(\alpha\) emission, could be used to infer \(\overline{x}_{\rm HI}\) and discriminate between reionisation models.
Our simulations show that in interpreting observations of \(z>6\) galaxies it is important to consider the galaxy environment. We showed the bubble size distribution around observable galaxies and galaxy overdensities can be significantly shifted from the bubble size distribution over the whole cosmic volume. This motivates using realistic inhomogeneous reionisation simulations, or at least tailored bubble size distributions to interpret observations.
Our results imply that the early stages of reionisation are still very uncertain. Identifying and confirming large ionised regions at very high redshift is a first step to understanding these early stages, and thus the onset of star formation. This is now possible with deep JWST/NIRSpec observations which could map the regions around \(z>8\) Ly\(\alpha\) emitters. The detection of Ly\(\alpha\) with high escape fraction and low velocity offset from other galaxies in the observed \(z>8\) overdensities could confirm whether the Ly\(\alpha\) emitters at \(z>8\) are tracing unexpectedly large ionised regions (e.g., Tang et al., 2023; Saxena et al., 2023).
## Acknowledgments
TYL, CAM and AH acknowledge support by the VILLUM FONDEN under grant 37459. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant DNRF140. This work has been performed using the Danish National Life Science Supercomputing Center, Computerome. Part of this research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project #CE170100013.
## Data Availability
Tables of bubble sizes as a function of \(\overline{x}_{\rm HI}\), \(M_{\rm UV}\), and galaxy overdensity (using the same volume as in Section 3.2) are publicly available here: [https://github.com/ting-yi-lu/bubble_size_overdensities_paper](https://github.com/ting-yi-lu/bubble_size_overdensities_paper).
Bubble size distributions around other overdensities can be distributed upon reasonable request to the authors.
|
2305.17812 | Tab-CoT: Zero-shot Tabular Chain of Thought | The chain-of-thought (CoT) prompting methods were successful in various
natural language processing (NLP) tasks thanks to their ability to unveil the
underlying complex reasoning processes. Such reasoning processes typically
exhibit implicitly structured steps. Recent efforts also started investigating
methods to encourage more explicitly structured reasoning procedures to be
captured. In this work, we propose Tab-CoT, a novel tabular-format CoT
prompting method, which allows the complex reasoning process to be explicitly
modelled in a highly structured manner. Despite its simplicity, we show that
our approach is capable of performing reasoning across multiple dimensions
(i.e., both rows and columns). We demonstrate our approach's strong zero-shot
and few-shot capabilities through extensive experiments on a range of reasoning
tasks. | Ziqi Jin, Wei Lu | 2023-05-28T20:49:52Z | http://arxiv.org/abs/2305.17812v1 | # Tab-CoT: Zero-shot Tabular Chain of Thought
###### Abstract
The chain-of-thought (CoT) prompting methods were successful in various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes. Such reasoning processes typically exhibit implicitly structured steps. Recent efforts also started investigating methods to encourage more explicitly structured reasoning procedures to be captured (Zhou et al., 2022). In this work, we propose Tab-CoT, a novel tabular-format CoT prompting method, which allows the complex reasoning process to be explicitly modelled in a highly structured manner. Despite its simplicity, we show that our approach is capable of performing reasoning across multiple dimensions (i.e., both rows and columns). We demonstrate our approach's strong zero-shot and few-shot capabilities through extensive experiments on a range of reasoning tasks.1
Footnote 1: Our code is available at [https://github.com/Xalp/Tab-CoT](https://github.com/Xalp/Tab-CoT)
## 1 Introduction
The chain-of-thought (CoT) prompting method (Wei et al., 2022) encourages large language models (LLMs) to engage in a thought process before providing the answer to the given question. Such an approach shows impressive performance improvements on reasoning tasks. Notably, in the zero-shot setting, it was shown that a simple prompt such as "let's think step by step" could facilitate the step-by-step thinking process before answering the original question (Kojima et al., 2022). Such a task-agnostic method unveiled that LLMs can be decent zero-shot reasoners.
The reasoning process is inherently structured. This gives rise to some new developments along this line of work recently. Specifically, Zhou et al. (2022) suggests an alternative prompting approach that enables a two-stage structured reasoning process. Gao et al. (2022) proposes an approach that involves code in the prompt design, allowing structured information in the form of formal language to participate in the reasoning process. While effective, such methods require specific prompt engineering for different domains or defining multiple variables, which can be difficult to maintain or keep track of.
Inspired by the fact that state-of-the-art large language models, such as GPT-3 (Brown et al., 2020) and CodeX (Chen et al., 2021), have the capability of reasoning over tabular structured data 2, we propose a novel framework called _Tabular Chain of Thought_ (Tab-CoT) that models the structured reasoning process using a table-filling procedure.
Footnote 2: This is because such models are trained on massive data collected from the Internet, which contains a large amount of tabular formed data.
We show that, without any further fine-tuning, the model can perform step-by-step reasoning by generating a table, prompted only by a table
Figure 1: A comparison between Tab-CoT with standard prompting and zero-shot-CoT on the same question. Chain-of-thought prompts are highlighted in orange.
header with column names in the form of "|step|question|response|". While conventional natural language texts are generated in a 1-dimensional sequential order, a table has a 2-dimensional structure, allowing inference along both columns and rows to be performed simultaneously. Unlike previous works, which focused on extracting information from existing tabular structured data (Gong et al., 2020), our approach generates the table while performing the reasoning process (and extracts the answer from the generated table at the end).
Figure 1 shows the results with standard prompting, conventional zero-shot CoT, and our zero-shot Tab-CoT. Our method generates a table as the output, which is more organized and concise than the output from the conventional CoT method. In this example, while zero-shot CoT generates 140 words, our method only generates 28. Moreover, we find that our method can reason horizontally and vertically at the same time.3 This demonstrates that our Tab-CoT method benefits from the 2-dimensional structure of the table, where information can flow in two dimensions.
Footnote 3: “74 loaves” is the sum of “68 loaves” from the same row and “6 loaves” from the same column.
We summarize our main contributions in this work as follows:
* We propose a new approach called Tabular Chain-of-Thought (Tab-CoT) that utilizes a tabular structured reasoning scheme in combination with state-of-the-art large language models to generate answers. To the best of our knowledge, this is the first method that uses tables in a "chain of thought" process.
* The 2-dimensional tabular structure of Tab-CoT allows for improved unlocking of the step-by-step reasoning capabilities of LLMs, transforming the linear "chain of thought" process into a more structured one.
* Extensive experiments have revealed that our Tab-CoT outperforms traditional CoT techniques in zero and few-shot settings. This indicates that Tab-CoT has strong potential as a superior alternative to current chain-of-thought prompting methods.
## 2 Related Work
Chain-of-thought prompting Wei et al. (2022), a variation of few-shot prompting that adds step-by-step reasoning in those few-shot examples instead of just providing answers, has achieved significant improvements across multiple datasets. The LLMs can generate solutions following the solution format of prompts. Compared to traditional prompting, chain-of-thought prompting decomposes the task into smaller steps, which makes difficult tasks easier to solve.
The chain-of-thought prompting method is not necessarily purely natural language based. Program Aided Language Models (PAL) Gao et al. (2022) provides few-shot samples that contain executable Python code. Such an approach enables the LLMs to interact with the Python shell, allowing the model to focus on learning how to do mathematical reasoning rather than numerical calculations.
These chain-of-thought methods provide the solution structure and pattern via few-shot samples, but can such structure be provided without few-shot samples, in the zero-shot setting? Zero-shot CoT (Kojima et al., 2022) is a zero-shot chain-of-thought prompting method: the prompt phrase "Let's think step by step", added after the question, triggers an explicit reasoning process. However, compared to few-shot CoT (Wei et al., 2022), zero-shot CoT allows more flexibility in the structure of the reasoning process.
Recently, Zhou et al. (2022) proposed Least-to-Most prompting, a prompting strategy that reduces a complex problem to a list of sub-questions, and sequentially solves the sub-questions. Each sub-question is solved with the help of the answers to previously solved sub-questions. Compared to zero-shot CoT, this method places more restrictions on the structure of the reasoning, through decomposing and sequential answering. Moreover, importing external tools (such as a calculator or a Python shell) can further aid the mathematical computation within the arithmetic domain (Gao et al., 2022).
These works reveal the importance of promoting structures in the chain-of-thought process. However, the nature of zero-shot prompting makes the injection of structures into the generation process challenging. This motivates us to devise a better mechanism to prompt the language models under the zero-shot setting: a new prompting scheme that allows highly structured outputs in the form of tables to be generated.
## 3 Tab-CoT
Similar to zero-shot CoT (Kojima et al., 2022), our method involves two prompts that can be used in large language models (LLMs), one for table generation and the other for answer extraction. The details are shown in Figure 2. While our method is primarily applied in zero-shot settings, it can also work in few-shot settings.
Tables in LLMsWe found that in the official "parse unstructured data" demo provided by OpenAI4, a table header is provided as part of the prompt, which is as follows: "|Fruit|Color|Flavor|". With such a prompt, the underlying LLM can automatically generate a table. This suggests a possible formatting for tables in such state-of-the-art LLMs, with "|" being the delimiter the OpenAI models recognise for tables.
Footnote 4: [https://beta.openai.com/playground/p/default-parse-data](https://beta.openai.com/playground/p/default-parse-data)
To validate this observation, we queried the LLM "code-davinci-002" (Chen et al., 2021) with the following question: "a=2, b=3, what is 2*a+3*b?", and provided another table header: "|step|solution|"5. We found that it completes a structured table as follows:
Footnote 5: The temperature is set to 0 for reproducibility.
a=2, b=3, what is 2*a+3*b?
|step|solution|
|:---|:---|
|1|2*a+3*b|
|2|2*2+3*3|
|3|4+9|
|4|13|
This experiment essentially unveils how the tables are represented in such LLMs. The results also illustrate how the table can potentially be used for generating a reasoning process. Next, to validate this, we designed several simple experiments to understand how reasoning over such tabular-structured data is performed on such LLMs, as shown in Figure 3. Our first experiment (A) shows that such LLMs are able to perform potential vertical reasoning. However, if we replace '|' with ',' (B), the LLM fails to capture the patterns in the data. This tells us that the correct formatting is crucial when reasoning with tables in such LLMs.
Next, we intentionally insert a mistake into the partial table and ask the model to continue the generation process (circled in C). Surprisingly, the LLM is able to generate the correct entries even though the mistake occurred in the same row. This further confirms the LLM's strong potential in per
Figure 3: Understanding how state-of-the-art LLM (“code-davinci-002”) reason with tabular-structured data.
Figure 2: Overview of our zero-shot Tab-CoT method, which contains two steps: (1) table generation and (2) answer extraction. Added prompts are highlighted in orange. Texts generated by the LLM are highlighted in green.
forming vertical reasoning with tabular-structured data.
Moreover, to show that both vertical and horizontal reasoning exist, we increase the difficulty by directly appending the first two elements from step 9 after step 6 (D). If only vertical reasoning existed, the value under "v4" would have been "11". Instead, the value generated is "13", confirming that the LLMs have the potential to perform a combination of horizontal and vertical reasoning simultaneously.
Table Generation PromptTo make use of the 2-dimensional structure of the table, we replace the natural language prompt with a table-generation prompt (e.g., "\(|\)step\(|\)question\(|\)response\(|\)"), which serves as the header of the table. This regulates the context of the table, forcing the LLMs to conduct step-by-step reasoning by completing the table. Meanwhile, the choice of columns can be made very specific. If each row of the table is regarded as a step, the row-by-row table generation process becomes a step-by-step reasoning process. Within each step (row), we have multiple columns, each of which contributes a certain detail to the current reasoning step.
For any text question \(x\), we have a table generation prompt (all column names) \(c\). Concretely, we add the table generation prompt in the next row of the text question:
\[\mathsf{LLM}(x,c)=\begin{bmatrix}c_{1}&c_{2}&\cdots&c_{n-1}&c_{n}\\ t_{1,1}&t_{1,2}&\cdots&t_{1,n-1}&t_{1,n}\\ \vdots&&\ddots&&\vdots\\ t_{m,1}&t_{m,2}&\cdots&t_{m,n-1}&t_{m,n}\end{bmatrix} \tag{1}\]
where \(t_{1,1}\cdots t_{m,n}\) are the entries within the generated table, which contains \(m\) rows and \(n\) columns.
**Answer Extraction Prompt** After the table content, denoted as \(T\), is generated in the previous step, we perform answer extraction. This step extracts the answer from the table, as the final result may not always be in the last cell of the generated table. Following zero-shot CoT (Kojima et al., 2022), we add another answer extraction prompt \(a\): "the answer is" after the generated table, to extract the final answer from the table:
\[Answer=\mathsf{LLM}(x,c,T,a) \tag{2}\]
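For concreteness, the two-step procedure of Equations (1) and (2) can be sketched with the legacy OpenAI completions client, as below. The helper names are ours, "code-davinci-002" has since been deprecated, and decoding follows the greedy (temperature 0) setting used in our probing experiment; this is an illustrative sketch rather than released code.

```python
import openai  # legacy (<1.0) client interface, shown for illustration only

TABLE_PROMPT = "|step|subquestion|process|result|"
ANSWER_PROMPT = "the answer is"

def complete(prompt: str) -> str:
    # Greedy decoding (temperature=0) for reproducibility.
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt=prompt,
        temperature=0,
        max_tokens=256,
    )
    return response["choices"][0]["text"]

def tab_cot(question: str) -> str:
    # Step 1 (Eq. 1): the table header prompt triggers row-by-row reasoning.
    table = complete(f"{question}\n{TABLE_PROMPT}")
    # Step 2 (Eq. 2): append the answer-extraction prompt after the table.
    return complete(f"{question}\n{TABLE_PROMPT}{table}\n{ANSWER_PROMPT}").strip()
```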
**Structure-Promoting Table Scheme** Different table generation prompts (headers) may result in different generated tables (with different content). We propose a "structure-promoting scheme" that maximally unlocks the reasoning abilities of LLMs.
We define each row as a reasoning step. A table containing multiple rows will depict the step-by-step reasoning procedure leading to the final answer. Thus, our first column is "step", containing a number that indicates which reasoning step the current row represents.
Least-to-most prompting (Zhou et al., 2022) contains two stages: problem reduction and sequential solving. In problem reduction, they decompose a question into multiple subquestions. Similarly, we add "subquestion" as our second column. At the beginning of each step, the LLMs will generate a subquestion under this column, which demonstrates the objective of the current reasoning step.
The conventional zero-shot CoT (Kojima et al., 2022) shows that allowing the model to generate some reasoning process before answering can achieve a better result. Inspired by this observation, we add a third column, "process", into our table. Given a subquestion in the previous column, we expect to generate the reasoning process in the current column before answering.
The last column is named "result". As the reasoning process under the "process" column may not necessarily provide an answer, we use the "result" column to explicitly request an (intermediate) answer at the end of each reasoning step.
With the above considerations, our primary scheme for the table header is designed as follows, which serves as our main table generation prompt:
\[|\mathsf{step}|\mathsf{subquestion}|\mathsf{process}|\mathsf{result}|\]
## 4 Experimental Setup
**Large Language Models** We use two LLMs from the GPT-3 family (Brown et al., 2020) in our experiments, namely "code-davinci-002" and "text-davinci-002", whose APIs are made available by OpenAI6. For brevity, we use "code" to refer to the model "code-davinci-002" and "text" to refer to "text-davinci-002" in our experiments.
Footnote 6: [https://openai.com/api/](https://openai.com/api/)
**Tasks and Datasets** We primarily focus on mathematical reasoning in this work. Thus, we evaluate our method on 6 arithmetic reasoning datasets: SingleEq (Koncel-Kedziorski et al., 2015), AddSub (Hosseini et al., 2014), MultiArith (Roy and Roth, 2015), GSM8K (Cobbe et al., 2021), AQUA-RAT (Ling et al., 2017), and SVAMP (Patel et al., 2021), which are standard datasets widely used in the community.
We also conducted additional experiments on other types of reasoning tasks. Specifically, we evaluate our method on two symbolic reasoning tasks, Last Letter and Coin Flip7: the former asks for the concatenation of the last letters of 4 words, and the latter asks for the state of a coin after it has been flipped a few times. We investigate how the specificity of column names affects performance and report the results in our ablation study. We also evaluate our method on two commonsense reasoning tasks: CommonsenseQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021).
Footnote 7: We use the file generated by Kojima et al. (2022).
Following zero-shot CoT Kojima et al. (2022), we set the first generated number as the numeral answer, the first capitalized letter as the answer for multiple-choice questions, and the first "yes" or "no" as the answer for "Yes or No" questions.
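These extraction heuristics amount to simple pattern matching; a minimal sketch follows, where the function names and exact regular expressions are our own assumptions.

```python
import re

def parse_numeric(text: str):
    # Take the first number in the generated text as the numeral answer.
    m = re.search(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(m.group()) if m else None

def parse_choice(text: str):
    # Take the first capitalized letter as the multiple-choice answer.
    m = re.search(r"\b([A-E])\b", text)
    return m.group(1) if m else None

def parse_yes_no(text: str):
    # Take the first "yes" or "no" as the answer to a Yes/No question.
    m = re.search(r"\b(yes|no)\b", text, flags=re.IGNORECASE)
    return m.group(1).lower() if m else None
```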
## 5 Results
### Main Results
Our main experiments are conducted on arithmetic reasoning tasks under the zero-shot setting. We tested the performance of both text-based and code-based LLMs on all methods. The results are shown in Table 2. Under the scheme "\(|\)step\(|\)subquestion\(|\)process\(|\)result\(|\)", our zero-shot Tab-CoT approach significantly outperformed the standard prompting in all tasks. Furthermore, our best-performing Tab-CoT model (using code-based LLM) outperforms the best conventional CoT model in 5 out of 6 tasks (with an average improvement of 2.2%).
When the standard prompting method is considered, using the text-based LLM leads to significantly better results than the code-based counterpart (15.7% on average). Similarly, when zero-shot CoT is considered, using the former also outperforms the latter by 10.9% on average. However, for our Tab-CoT approach, "code" outperforms "text" by 4.0%, leading to the best overall performance among all configurations.
From these results, we can see that the conventional CoT method and our Tab-CoT method respond differently to the type of underlying LLM. The conventional CoT method (and the standard prompting method) strongly favors a text-based LLM under the zero-shot setting. In contrast, our approach works well with both types of LLMs, and the code-based version gives it an additional boost in performance. Compared with "text", the "code" model is further fine-tuned on code (Chen et al., 2021). We conjecture that table generation resembles code generation: both involve structured procedures that are highly organized and follow a step-by-step process. Comparing our Tab-CoT approach with conventional CoT, we conclude that our proposed table-generation prompt significantly better unlocks the strong reasoning abilities within the code-based LLM.
Based on the above main experiments, we choose to use "code" as the default LLM for all subsequent experiments unless otherwise specified.
### Importance of Scheme Design
To understand the significance of our proposed table scheme design, we evaluate the performance of
| Method | LLM | SingleEq | AddSub | MultiArith | GSM8K | AQUA | SVAMP | Average |
|---|---|---|---|---|---|---|---|---|
| Standard Prompting | text | 74.6 | 72.2 | 17.7 | 10.4 | 22.4 | 58.8 | 42.7 |
| Standard Prompting | code | 46.3 | 51.4 | 7.2 | 4.1 | 23.6 | 29.5 | 27.0 |
| CoT | text | 78.0 | 69.6 | 78.7 | 40.7 | 33.5 | **62.1** | 60.4 |
| CoT | code | 65.6 | 65.6 | 64.8 | 31.8 | 29.5 | 39.9 | 49.5 |
| Tab-CoT | text | 74.6 | **71.9** | 72.2 | 39.3 | 36.6 | 57.0 | 58.6 |
| Tab-CoT | code | **81.9** | 70.9 | **81.2** | **44.4** | **37.0** | 60.5 | **62.6** |

Table 2: Zero-shot results on the arithmetic datasets. All methods use the same answer extraction prompt in these datasets for a fair comparison. All methods are evaluated under the zero-shot setting.
"\(|\texttt{step}|\texttt{subquestion}|\texttt{process}|\texttt{result}|\)", along with four variations, each of which is obtained by removing one of the four columns as ablation. The results in Table 4 show that each column of "\(|\texttt{step}|\texttt{subquestion}|\texttt{process}|\texttt{result}|\)" is crucial. From the result, we notice that removing the column "step" from our scheme results in the most significant performance drop. This implies although the step only contains a number indicating "which step this is", it organized the table in sequential order over rows. The column "subquestion" is also important. Removing "subquestion" from the scheme also shows an average performance drop of 5.4%. The "subquestion" column forms step-by-step instructions vertically, indicating the subquestion under consideration for each step. The "step" and "subquestion" columns play important roles in maintaining the structure of the table, building vertical connections across rows.
### Effectiveness of Self-Consistency
The self-consistency (Wang et al., 2022) decoding strategy was shown to obtain better results by generating and exploring multiple, diverse reasoning paths. We also adopt a similar approach here. In the original self-consistency paper, up to 40 reasoning paths were considered. We show the feasibility of using only 3 paths in our work.8 This is conveniently achieved by using 3 different prompts - we select another two table schemes besides the standard scheme. One is a highly similar prompt, which we expect to perform similarly well, and the other is less similar, which we expect to yield a worse performance (based on Sec 5.2). They are shown in Table 3. We then perform majority voting based on the outputs from these 3 prompts. Interestingly, although a prompt with worse performance is used in the voting process, the overall performance improves. This shows the benefits of integrating different table schemes for such tasks, which helps improve the overall robustness of the approach.
Footnote 8: The self-consistency decoding method did not show significant improvement when the number of reasoning paths is below 5 in their paper.
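The voting step itself is straightforward. A minimal sketch, assuming some Tab-CoT runner `solve(question, scheme)` (a hypothetical helper analogous to `tab_cot` above) that returns a parsed answer:

```python
from collections import Counter

SCHEMES = [
    "|step|subquestion|process|result|",
    "|step|subquestion|procedure|result|",
    "|step|subquestion|response|",
]

def self_consistent_answer(question: str, solve) -> str:
    # One reasoning path per table scheme, then majority vote on the answers.
    answers = [solve(question, scheme) for scheme in SCHEMES]
    return Counter(answers).most_common(1)[0][0]
```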
### Few-shot Tab-CoT
Tab-CoT shows impressive reasoning ability under the zero-shot setting. It can generate a structured output in the form of a table that enables the chain-of-thought reasoning process without few-shot samples. Tables are capable chain-of-thought carriers, but can they also serve as good chain-of-thought teachers? To answer this question, we evaluated Tab-CoT under the few-shot setting.9
Footnote 9: We did not compare with least-to-most prompting (Zhou et al., 2022) as it requires task-specific supervision; it was only evaluated on GSM8K, and its paper provides a task-specific prompt for GSM8K.
For a fair comparison, we use the same few-shot sample questions described in Wei et al. (2022) (listed in Appendix D). We use "\(|\texttt{step}|\texttt{subquestion}|\texttt{process}|\texttt{result}|\)" as the table scheme when representing the few-shot samples. The results are reported in Table 5: our method outperformed few-shot CoT by 1% on average. While the performance difference between Tab-CoT and CoT on the other datasets is below 2%, the difference on SVAMP is 6.5%. The large improvement on SVAMP is likely related to the selection of few-shot samples, because Wei et al. (2022) select 8 sample questions from SVAMP for all arithmetic reasoning tasks except AQUA10.
| Dataset | Standard | CoT | Tab-CoT |
|---|---|---|---|
| SingleEq | 86.3 | 89.3 | **92.1** |
| AddSub | **99.9** | 89.1 | 89.1 |
| MultiArith | 44.0 | 96.2 | **96.3** |
| GSM8K | 19.7 | **63.1** | 61.6 |
| AQUA | 29.5 | 45.3 | **46.9** |
| SVAMP | 69.9 | 76.4 | **82.9** |
| Average | 68.2 | 77.2 | **78.2** |

Table 5: Few-shot results on the arithmetic datasets.
| Scheme | SingleEq | AddSub | MultiArith | GSM8K | AQUA | SVAMP | Average |
|---|---|---|---|---|---|---|---|
| Standard Prompting | 46.3 | 51.4 | 7.2 | 4.1 | 23.6 | 29.5 | 27.0 |
| \|step\|subquestion\|process\|result\| | 81.9 | 70.9 | 81.2 | 44.4 | 37.0 | 60.5 | 62.6 |
| \|step\|subquestion\|procedure\|result\| | 83.7 | 69.1 | 77.8 | 43.4 | 38.2 | 60.4 | 62.1 |
| \|step\|subquestion\|response\| | 77.6 | 73.9 | 79.0 | 38.1 | 34.3 | 63.9 | 61.1 |
| Self-consistency (using above) | **86.4** | **78.2** | **85.2** | **48.2** | **44.1** | **66.9** | **68.2** |

Table 3: Zero-shot performance comparison between the three schemes (and with self-consistency).
| Scheme | Average |
|---|---|
| \|subquestion\|process\|result\| | 54.3 |
| \|step\|process\|result\| | 57.2 |
| \|step\|subquestion\|result\| | 61.3 |
| \|step\|subquestion\|process\| | 60.9 |
| \|step\|subquestion\|process\|result\| | **62.6** |

Table 4: Performance if a column is removed from the scheme (detailed results are in Appendix A).
### Case Studies
The main experimental results show that "code" under-performs "text" with conventional CoT but yields better results in our Tab-CoT. To understand this better, we conduct case studies to compare their generated tables in Table 6.
While "code" only generated short text snippets or formulas under "process", the words generated by "text" under the same column tend to form complete sentences whenever possible. As we mentioned earlier, "code" is an LLM that is further fine-tuned on code (Chen et al., 2021). This explains why it appears more amenable to the tabular-structured format of the output. In question 1, the model with "text" overwrites the generated "subquestion" by asking another question. Thus, the "result" fails to answer the "subquestion" in the same row. In question 2, "text" generated 5 steps while "code" only took 3. The "subquestion" generated by "text" is also ambiguous (e.g., "what is the known information?"). In question 3, "text" presents a wrong reasoning order. Overall, "code" shows better reasoning ability by demonstrating a more concise and straightforward reasoning process.
### Additional Experiments
We further evaluate our method on symbolic reasoning and commonsense reasoning tasks. We also conducted new experiments based on the GPT-3.5 model to understand our approach's effectiveness on such newer models11. With these additional experiments, we hope to draw further insights into our approach.
Footnote 11: GPT-3.5 was released in March 2023.
**Symbolic Reasoning** We evaluate Tab-CoT on two symbolic reasoning datasets: Coin Flip (CF)12 and Last Letter (LL)13. Unlike the arithmetic reasoning tasks, each of these tasks focuses on a specific type of problem. This also opens up the opportunity to examine whether the specificity of the table
Table 6: Case studies comparing the reasoning tables generated by the "text" and "code" models on three sample arithmetic questions.
scheme may have an impact on the reasoning process in such tasks.
To this end, we split table schemes into three categories: (1) _general_: the table scheme that can be generally applied to most text questions. (2) _domain-specific_: the table scheme that can be adapted to a specific domain. (3) _task-specific_: the scheme that can only be adopted by a single task.
Our experiments in Table 7 illustrate that the specificity of the table scheme strongly affects performance on the symbolic reasoning tasks. One may expect performance to increase as the table scheme becomes more task-specific, and our task-specific scheme indeed outperformed zero-shot CoT on both tasks. However, increased specificity does not always lead to higher accuracy. On the Coin Flip task, we noticed that another task-specific scheme, "\(|\)step\(|\)initial coin state\(|\)flip or not\(|\)next coin state\(|\)", only achieves an accuracy of 68.0%. To understand this, we investigate their reasoning flows in Figure 4. Although the left scheme is more task-specific, it largely disables vertical reasoning in the table. The right scheme, while general, effectively enables reasoning along both vertical and horizontal directions, leading to significantly better results. 14
Footnote 14: We further evaluate the general scheme under the one-shot setting, and the results are in Appendix A
CoT process. 17
Footnote 17: Based on our observations, those tables generated in conventional Zero-shot CoT under GPT 3.5 can be different from those generated with our method. They appear to be mostly used to organize information related to the question but do not appear to be used for presenting reasoning steps.
### Ablation Studies
**Model Sizes** Kojima et al. (2022) evaluated the family of GPT-3 models at four different sizes: 2.7B, 6.7B, 13B, and 175B parameters. The results show that only the largest model ("text-davinci-002") exhibits chain-of-thought reasoning ability.
We compare the performance of the smaller model "code-cushman-001" (13B) with "code-davinci-002" (175B). Similar to zero-shot CoT, smaller models do not show the ability to conduct chain-of-thought reasoning. The performance of "code-cushman-001" does not reach 10% on any dataset except AQUA (a multiple-choice dataset with 5 choices per question, where random guessing already yields about 20%). The experimental results are reported in Table 10.
**Structure-Promoting Scheme** As mentioned with Table 4, we compare the performance when each column is removed from "\(|\)step\(|\)subquestion\(|\)process\(|\)result\(|\)". The detailed experimental results are reported in Table 11. The results suggest that each column of our proposed scheme is important, because removing any column leads to a drop in performance.
## 6 Discussion
Our experimental results confirmed the effectiveness of our proposed tabular chain-of-thought method under both zero-shot and few-shot settings. We summarize several advantages of our method compared to conventional chain-of-thought methods and list them below.
Tab-CoT generates a table illustrating the reasoning process, which is more _organized_. As can be seen from Table 6, this makes the reasoning process much easier to follow.
Additionally, from Figure 4, we conclude that Tab-CoT encourages a more _structured_ reasoning process to be explicitly modelled. As a 2-dimensional data structure, tables enable both horizontal reasoning along rows and vertical reasoning along columns.
Practically, table schemes are also _easy to craft_. Designing a specific table generation prompt typically involves deciding concise header names without concerning grammar. It is thus less cumbersome than choosing a natural language prompt from a diverse set of candidates.
Overall, we argue that under current state-of-the-art LLMs, table schemes are _natural prompts_ that are well suited for zero-shot learning.
## 7 Conclusion
In this paper, we propose Tab-CoT, a novel prompting framework that performs effective zero-shot reasoning by generating a table.
Tab-CoT shows competitive results on arithmetic reasoning tasks under both zero-shot and few-shot settings. We further conducted comprehensive experiments across different reasoning tasks and settings, which revealed specific benefits of our method and identified the optimal ways to use it. We hope that, through our work, we can spark new ideas and provide inspiration to our community.
In the future, we would like to explore methods to automate the scheme selection process, generating schemes that meet task-specific requirements. Future work also includes integrating external calculators (Gao et al., 2022) or task-specific supervision (Zhou et al., 2022) into the learning process, under both zero-shot and few-shot settings.
Our Tab-CoT also provides a straightforward decomposition of the intermediate thought process. The highly structured chain of thought produced by our approach may help people observe and interpret how large language models decompose complex problems. We believe our proposed method can help reveal the underlying mechanisms behind the emergence of certain complex behaviours in large language models.
| Scheme | SingleEq | AddSub | MultiArith | GSM8K | AQUA | SVAMP | Average |
|---|---|---|---|---|---|---|---|
| Standard Prompting | 46.3 | 51.4 | 7.2 | 4.1 | 23.6 | 29.5 | 27.0 |
| \|subquestion\|process\|result\| | 69.9 | 51.9 | 84.0 | 40.1 | 35.0 | 44.7 | 54.3 |
| \|step\|process\|result\| | 77.0 | 55.7 | 84.2 | 41.5 | 37.8 | 46.9 | 57.2 |
| \|step\|subquestion\|result\| | 76.0 | 77.9 | 76.8 | 40.1 | 36.2 | 60.6 | 61.3 |
| \|step\|subquestion\|process\| | 78.0 | 75.9 | 76.3 | 39.7 | 34.3 | 60.9 | 60.9 |
| \|step\|subquestion\|process\|result\| | 81.9 | 70.9 | 81.2 | 44.4 | 37.0 | 60.5 | **62.6** |

Table 11: Performance if a column is removed from the scheme.
## Limitations
We identify a few limitations of this work. First, our approach relies on language models having been pretrained on tables, which may not hold for all language models (especially small ones). Second, our approach's limited improvement on commonsense reasoning tasks suggests that its effectiveness may depend on the specific task and the level of structured reasoning required.
## Acknowledgement
We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their constructive comments and support on our work. This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020-016), and the Ministry of Education, Singapore, under its Tier 3 Programme (The Award No.: MOET32020-0004).
|
2302.02506 | Generating Dispatching Rules for the Interrupting Swap-Allowed Blocking
Job Shop Problem Using Graph Neural Network and Reinforcement Learning | The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a
complex scheduling problem that is able to model many manufacturing planning
and logistics applications realistically by addressing both the lack of storage
capacity and unforeseen production interruptions. Subjected to random
disruptions due to machine malfunction or maintenance, industry production
settings often choose to adopt dispatching rules to enable adaptive, real-time
re-scheduling, rather than traditional methods that require costly
re-computation on the new configuration every time the problem condition
changes dynamically. To generate dispatching rules for the ISBJSSP problem, we
introduce a dynamic disjunctive graph formulation characterized by nodes and
edges subjected to continuous deletions and additions. This formulation enables
the training of an adaptive scheduler utilizing graph neural networks and
reinforcement learning. Furthermore, a simulator is developed to simulate
interruption, swapping, and blocking in the ISBJSSP setting. Employing a set of
reported benchmark instances, we conduct a detailed experimental study on
ISBJSSP instances with a range of machine shutdown probabilities to show that
the scheduling policies generated can outperform or are at least as competitive
as existing dispatching rules with predetermined priority. This study shows
that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled
efficiently with the proposed method when production interruptions occur with
random machine shutdowns. | Vivian W. H. Wong, Sang Hun Kim, Junyoung Park, Jinkyoo Park, Kincho H. Law | 2023-02-05T23:35:21Z | http://arxiv.org/abs/2302.02506v2 | Generating Dispatching Rules for the Interrupting Swap-Allowed Blocking Job Shop Problem Using Graph Neural Network and Reinforcement Learning
###### Abstract
The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a complex scheduling problem that is able to model many manufacturing planning and logistics applications realistically by addressing both the lack of storage capacity and unforeseen production interruptions. Subjected to random disruptions due to machine malfunction or maintenance, industry production settings often choose to adopt dispatching rules to enable adaptive, real-time re-scheduling, rather than traditional methods that require costly re-computation on the new configuration every time the problem condition changes dynamically. To generate dispatching rules for the ISBJSSP problem, a method that uses graph neural networks and reinforcement learning is proposed. ISBJSSP is formulated as a Markov decision process. Using proximal policy optimization, an optimal scheduling policy is learnt from randomly generated instances. Employing a set of reported benchmark instances, we conduct a detailed experimental study on ISBJSSP instances with a range of machine shutdown probabilities to show that the scheduling policies generated can outperform or are at least as competitive as existing dispatching rules with predetermined priority. This study shows that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled efficiently with the proposed machine learning method when production interruptions occur with random machine shutdowns.
Smart Manufacturing, Job Shop Problems, Priority Dispatching Rule, Machine Learning, Reinforcement Learning, Graph Neural Networks
## 1 Introduction
Effective scheduling strategies for various production scheduling problems have been widely studied in academia and industry with the goal of streamlining manufacturing systems and hence improving production efficiency. For example, the classical job shop scheduling problem (JSSP), which aims to find an optimal assignment of jobs composed of operations given some prescribed machine-sharing and precedence constraints, is an NP-hard combinatorial optimization problem with many practical applications. Many manufacturing systems in real settings, however, have more constraints to consider than the capacity of machines. For example, many components of vehicles and machines are expensive items that are large in size. It is therefore not desirable to have to invest in the storage of intermediate components and products [1]. The lack of storage capacity is therefore a constraint in this case. Furthermore, unforeseen interruptions to production, such as machine shutdowns, can occur and change the list of available machines. To model modern manufacturing systems more realistically by considering both the _lack of storage capacities_ and _production interruptions_, this work studies a new class of job scheduling problem, the interrupting swap-allowed blocking job shop scheduling problem (ISBJSSP). Many methods, such as mathematical optimization [2], branch-and-bound search [3] and meta-heuristic algorithms [4, 5], have been developed to generate optimum or near-optimum solutions to JSSP problems. However, these solutions are not adaptive, requiring a completely new execution when encountering a new scenario or a new configuration. These non-adaptive solutions are therefore not suitable for the ISBJSSP setting, where the problem condition constantly changes, for example, due to machine
interruptions. To cope with potential dynamic changes, priority dispatching rules (PDRs), which are simple rule-based heuristics, are the most common approach used in modern manufacturing systems, as they can be applied instantaneously to an unseen instance. PDRs, first-in-first-out (FIFO) for example, simply load jobs based on some predetermined priority [6]. Although PDRs are widely used in real-world situations due to their simplicity and speed, their performance varies widely depending on the problem condition. For example, shortest processing time (SPT) is a common benchmarking PDR that performs well in heavily loaded or congested job shop instances, but fails at low load levels [7]. These simple rules, although they can deal with dynamic changes, have poor generalizability and need to be manually selected or combined based on the job shop condition. Furthermore, with the random interruptions in the ISBJSSP formulation, where problem conditions change often, it is not clear a priori whether any of the PDRs can be effective on an ISBJSSP problem.
To improve the generalizability of dispatching rules, researchers have started to leverage artificial intelligence (AI) methods to solve job shop scheduling problems. To consider both the ability to adapt and to generalize, methods that are based on reinforcement learning (RL) are receiving increasing attention in the research community on planning problems. Much like PDRs, these methods output sequential decisions according to a dispatching policy. The difference is that rather than using predetermined priority rules, RL's dispatching policy is learned by observing and accumulating experience from previous simulations.
RL has been used to learn policies in various planning and scheduling problems. Traditional RL algorithms, such as Q-learning and its variants, are often used to learn dispatching rules for small-scale job scheduling problems with discrete state space [8, 9]. In contrast, large-scale job shop scheduling problems are relatively unexplored. For large-scale, continuous state space problems, it is necessary to consider deep RL methods that approximate the value function.
Sparked by increased availability in computational power in recent years, deep RL methods combining deep neural networks with RL have received much attention due to its powerful ability to generate solutions for continuous state space. Examples of deep RL applications to planning and scheduling problems include task scheduling in computing [10], robotic scheduling for manufacturing [11], and semiconductor manufacturing scheduling [12]. However, the problem formulation of deep RL for job shop problems varies widely, for even the classical JSSP. Liu et al. [13] used process time matrices to represent the state space and trained a deep RL model to select from a list of existing PDRs. Park et al. [14] and Zhang et al. [15] use disjunctive graph and graph representation learning to obtain a vectorized state space, and directly learned new dispatching rules. For ISBJSSP, there lacks a formal Markov Decision Process formulation that enables the study of deep RL approach for this new class of job scheduling problem.
This paper introduces a GNN-RL scheduler, combining graph neural network (GNN) and deep RL learning for ISBJSSP that considers dynamic interruptions to machine availability in job shop scheduling. The formulation of the ISBJSSP as a dynamic disjunctive graph and a Markov Decision Process (MDP) is formally presented. To generate training data sets and experimental test scenarios, we implement an ISBJSSP simulator, building upon a python-based JSSP simulator (pyjssp) previously developed by Park et al. [14]. The simulator is designed to simulate ISBJSSP instances to include blocking constraints, swapping conditions, and machine shutdown interruptions to mimic realistic concerns in practical applications. Using the simulator, GNN-RL models are trained with randomly generated ISBJSSP instances. In this study, the performance of the trained models are evaluated on two sets of benchmark scenarios. The first test set is a benchmark with \(10\times 10\) instances that are commonly used in job shop scheduling studies [16]. To demonstrate the scalability and generalization of the GNN-RL models, the second test set includes job shop instances with varying sizes [17]. The experimental results show that GNN-RL schedulers can be used to schedule unknown ISBJSSP instances robustly and efficiently and can potentially be applied in a real manufacturing environment without shutting down the entire job shop when interruption occurs.
The paper is organized as follows: Section 2 introduces the ISBJSSP formulation and how the problem can be modeled as a disjunctive graph and a Markov Decision Process. Section 3 describes the methodology employed to learn ISBJSSP scheduling models. Section 4 describes the experimental results obtained with benchmark ISBJSSP instances of different sizes. Section 5 concludes this paper with a brief summary and discussion.
## 2 Problem Formulation
This section briefly introduces the background for the job shop problem and the various constraints that exist. The modeling of ISBJSSP as a disjunctive graph and a Markov Decision Process (MDP) is then described.
### Job Shop Scheduling
For the classical JSSP of size \(m\times n\), there exists a set of \(n\) jobs, \(O\colon\{O_{1},O_{2},...,O_{n}\}\), to be optimally allocated and executed on a set of \(m\) machines, \(M\colon\{M_{1},M_{2},...,M_{m}\}\). Each job has a series of tasks or operations that must be processed according to the problem's precedence and machine-sharing constraints. In this study, without loss of generality, we assume each job, \(O_{j}\colon\{o_{1j},o_{2j},...,o_{pj}\}\), has the same number of \(p\) operations. Each operation \(o_{ij}\) has a pre-defined processing time for completion. The precedence constraint implies that for all consecutive operations \(o_{ij}\) and \(o_{i+1,j}\) of job \(O_{j}\), \(o_{ij}\) must be completed before starting \(o_{i+1,j}\). Furthermore, the machine-sharing constraint indicates that each operation \(o_{ij}\) must be processed uninterrupted on a dedicated machine. Additionally, each operation of a job is assigned on a different machine, and each machine that is in process of an operation is only freed when that operation finishes. Therefore, given the above constraints,
the number of machines \(m\) equals \(p\), the number of operations per job. The objective of the classical JSSP is to find a schedule that minimizes the makespan, i.e., the total time to finish all jobs [18].
The classical job shop problem assumes that there is sufficient storage or buffer space available to store each job in between consecutive operations. However, buffers are undesirable in practical applications. Therefore, many real manufacturing applications are better modeled as the Blocking JSSP (BJSSP) [19]. BJSSP introduces the blocking constraint in that no buffers are available for storing a job as the job moves between machines; the job must wait on the machine until it can be processed on the next machine. That is, for any job \(O_{j}\), its operation \(o_{lj}\) is a blocking operation until \(o_{lj}\)'s succeeding operation, \(o_{l+1,j}\), starts.
In practical job shops without buffers, parts that are blocking idling machines can be swapped to avoid a deadlock situation. A deadlock situation occurs when no unprocessed operations could be processed, because each unprocessed operation is waiting for a blocked machine to become "unblocked" and available. Thus, BJSSP often allows swapping to avoid deadlocks, referred to as the swap-allowed blocking job shop problem (SBJSSP) [16]. A swap can be done if there exists a set of blocking operations, each one waiting for a machine blocked by another operation in the set. A swap resolves the deadlock and ensures that the manufacturing process can proceed and that there exists at least one solution to a randomly generated SBJSSP instance.
### _ISBJSSP_
Although the SBJSSP models can be applied to many manufacturing production lines, there is an additional factor that exists in a real production line but is often overlooked: the possibility of production interruption, for example, caused by machine failures. While solution methods such as mathematical programming, branch-and-bound search and meta-heuristic methods (such as tabu search, genetic algorithms, simulated annealing, etc.) can generate optimal solutions to static (uninterrupted) job shop scenarios, dynamic scenarios with real-time machine interruptions would require these methods to recompute a new solution for each scenario change. The possibility of such interruption also results in priority dispatching rules [20] being generally favored in practice, as dispatching rules can easily adapt to dynamic changes in the availability of machines in real time. Our work includes this additional constraint, where machine availability can be interrupted, in the formulation: at any given time step, an idling machine in \(M\) has a probability \(P_{interrupt}\) of becoming unavailable to process any job for a period of \(T_{interrupt}\) time steps. If a job's next operation is waiting on an unavailable machine, the job will block the machine used by the precedent operation due to the lack of buffer. When the shutdown machine becomes available again after \(T_{interrupt}\) time steps, the machine will then process one of the waiting jobs, determined by the job shop scheduler. We refer to the job shop problem with the interruption constraint as the interrupting swap-allowed blocking job shop problem (ISBJSSP).
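As a concrete illustration, one simulator tick of this shutdown model might look as follows. The `machines` list and its `idle`/`down_for` attributes are our own naming assumptions, not the actual simulator API.

```python
import random

def step_interruptions(machines, p_interrupt: float, t_interrupt: int):
    """Advance the machine-shutdown model by one time step (illustrative)."""
    for m in machines:
        if m.down_for > 0:
            m.down_for -= 1            # a shutdown machine counts down to recovery
        elif m.idle and random.random() < p_interrupt:
            m.down_for = t_interrupt   # an idle machine shuts down for T_interrupt steps
```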
### _Dynamic Disjunctive Graph Formulation_
The ISBJSSP can be represented by a dynamic disjunctive graph \(G=(V,C\cup D)\)[21]. Here, \(V\) are the nodes, each corresponding to an operation \(o_{lj}\) of job \(O_{j}\). \(C\) is the set of conjunctive edges, where each edge connects two consecutive operations \(o_{lj}\) and \(o_{i+1,j}\) of job \(O_{j}\). The conjunctive edges represent the set of processing order constraints. \(D\) denotes the set of disjunctive edges, which connect any two vertices if the two corresponding operations need to be processed on the same machine. The disjunctive edges represent the machine-sharing constraints. Nodes and edges of the disjunctive graph, however, are subjected to deletions and additions due to the machine availability constraint, making the disjunctive graph dynamic. More specifically, when a machine is shutdown, the nodes (i.e., the operations) that need to be processed on the machine and their connected edges are temporarily removed from the graph, indicating that the machine is no longer observed at that time instance. When the machine becomes available after \(T_{interrupt}\) time steps, the previously removed nodes and edges are added back to the graph.
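A minimal sketch of building such a graph with networkx is shown below, assuming a simplified job schema in which each job is an ordered list of (operation_id, machine_id) pairs; this is our illustration, not the simulator's internal representation.

```python
import networkx as nx

def build_disjunctive_graph(jobs):
    g = nx.DiGraph()
    by_machine = {}
    for ops in jobs:
        for op, machine in ops:
            g.add_node(op, machine=machine)
            by_machine.setdefault(machine, []).append(op)
        # Conjunctive edges C: precedence between consecutive operations of a job.
        for (a, _), (b, _) in zip(ops, ops[1:]):
            g.add_edge(a, b, kind="conjunctive")
    # Disjunctive edges D: machine sharing, bidirectional (two directed edges).
    for ops in by_machine.values():
        for i, a in enumerate(ops):
            for b in ops[i + 1:]:
                g.add_edge(a, b, kind="disjunctive")
                g.add_edge(b, a, kind="disjunctive")
    return g
```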
Figure 1 shows a disjunctive graph of a small example instance. The instance contains three machines, on which three jobs, each with three operations, are to be processed. As an example, the first job contains the operations labeled with node numbers 0, 1, 2, which must be processed in this specified order due to the existence of precedence constraints. The precedence constraint is shown as directed edges in the disjunctive graph. Similarly, the second job must be processed in the order of 3, 4, 5, and the third job in the order of 6, 7, 8. The bi-directional disjunctive edges specify machine constraints. In our example, operations 1, 5, 6 need to be processed on a dedicated machine. Similarly, operations 0, 3, 8 share a machine, and operations 2, 4, 7 share a machine. At the time where this disjunctive graph was plotted as shown in Figure 1, operations 0, 1, 3, 6 are completed, and operation 7 is being processed. Furthermore, there is swap-allowed blocking to consider. For example, even when operation 7 is completed, it will block its machine,
Figure 1: Example of a disjunctive graph for an instance containing three jobs, each with three operations. Directed conjunctive edges represent precedence constraints. Bidirectional disjunctive edges represent machine constraints, where the nodes in a cycle are operations that require to be processed on the same machine. Nodes with dashed perimeters indicate completed operations. Nodes with solid perimeters indicate operations that have not been started. The double-outlined node indicates the operation currently being processed.
preventing operations 2 and 4 from being processed, until the part moves to the next machine to commence operation 8. At the current time step, all three machines are either blocked by an unstarted operation or busy processing an operation, and are therefore not idle. As defined earlier, the machine shutdown interruption with probability \(P_{interrupt}\) only occurs for idling machines.
To incorporate the time-dependent job shop information into the disjunctive graph, we assign a node feature vector \(x_{v}\) to each node \(v=o_{ij}\in V\). The node features are stacked vectors with the following components:
* Node status: a one-hot index vector of size 3, indicating whether the operation \(v\) is not yet started, being processed, or is completed.
* Processing time: the total time required to finish operation \(v\).
* Degree of completion: the ratio of the accumulated processing time of \(v\)'s job to the total processing time of \(v\)'s job (i.e., the job that contains the operation).
* Number of succeeding operations: the number of operations including both \(v\) and the operations after \(v\) in \(v\)'s job.
* Waiting time: the time for which \(v\) must wait for processing after it is ready to be processed.
* Remaining time: the remaining processing time needed to complete the operation \(v\) once it has started.
Since node features \(x_{v}\) are time-dependent, the resulting graph is now dynamic and will be denoted as \(G_{t}\) hereon to represent the disjunctive graph of an ISBJSSP instance at time \(t\).
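Stacking these components into the feature vector \(x_{v}\) could look as follows; the attribute names on `op` are illustrative assumptions rather than the simulator's actual interface.

```python
import numpy as np

def node_features(op) -> np.ndarray:
    status = np.zeros(3)
    status[op.status] = 1.0  # 0: not started, 1: being processed, 2: completed
    return np.concatenate([
        status,
        [op.processing_time],                    # total time to finish op
        [op.job_done_time / op.job_total_time],  # degree of completion of op's job
        [op.num_succeeding_ops],                 # succeeding operations incl. op
        [op.waiting_time],
        [op.remaining_time],
    ])
```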
### Markov Decision Process Formulation
The scheduling process of an ISBJSSP instance can be viewed as a sequential decision-making process. Specifically, ISBJSSP can be formulated as an MDP, denoted as a \((S,A,P,R,\gamma)\) tuple, whose elements represent the set of Markov states \((S)\), set of actions \((A)\), transition model \((P)\), reward function \((R)\) and discount factor \((\gamma)\), respectively.
* State: A disjunctive graph \(G_{t}\) representing a snapshot of state \(s_{t}\in S\) of the ISBJSSP instance at time \(t\).
* Action: A scheduling action \(a_{t}\in A\) of loading an operation to an available machine at time \(t\).
* Transition model: The transition between states, which, in this study, is handled and generated by the job shop simulator.
* Reward function: A function defined to stipulate the behavior of an action. The reward function used in this study mimics the utilization of a machine and is defined as \[r_{t}=-n_{w_{t}}\] (1) where \(n_{w_{t}}\) is the number of jobs waiting at time \(t\).
* Discount factor: The factor \(\gamma\) that determines how much future rewards are valued relative to immediate rewards when evaluating an action.
## 3 Methodology
This section describes a machine learning approach for deriving a policy to solve the ISBJSSP. The method consists of two parts, graph neural network (GNN) and reinforcement learning (RL). Figure 2 depicts the overarching framework of the proposed GNN-RL approach.
Figure 2: Proposed GNN-RL framework
A disjunctive graph \(\ G_{t}\) (Figure 2(e)) is observed from the ISBJSSP simulator environment (Figure 2(d)) and is used as the input to a GNN model (Figure 2(f)) for representation learning.
### Representation Learning with GNN
The process of obtaining an embedded graph using GNN can be thought of as learning an embedding vector for each node \(\nu\) that represents the necessary neighborhood structure and node feature information around the node. The embedded graph in Figure 2(a) can be learned from a GNN model. A GNN is a neural network that consists of layers of differentiable functions with learnable parameters and computes an embedding vector for each node in the graph. A GNN layer needs to be designed such that for each target node, the embedding of the target node is updated not only using the previous layer's target node embedding, but also node embedding aggregated from the neighboring nodes in order to represent the structure information of the disjunctive graph, \(\ G_{t}\), in the learned embeddings.
As shown in Figure 2(f), the computation process of a GNN layer implemented in this study can be separated into three steps. Firstly, a different multi-layer perceptron (MLP) network [22] with ReLU activation [23] is applied to each of the following three sets of nodes neighboring the target node \(\nu\): the set of all precedent nodes \(N_{p}(\nu)\) connected through the conjunctive (precedence constraint) edges, succeeding nodes \(N_{s}(\nu)\) connected also through the conjunctive edges and disjunctive nodes \(N_{d}(\nu)\) connected through the (bidirectional) disjunctive (machine-sharing constraint) edges. Secondly, the vector outputs of the three MLP networks, an aggregated representation of the overall graph, the node embedding updated in the previous layer, and the initial node feature of the target node are stacked in a vector, which, as the last step, is passed through another MLP network without activation. Mathematically, the operations of the \(k^{th}\) layer of a GNN can be written as:
\[h_{\nu}^{(k)}=f_{n}^{(k)}\Big(ReLU\big(f_{p}^{(k)}\big(\sum_{i\in N_{p}(\nu)}h_{i}^{(k-1)}\big)\big)\,\big\|\,ReLU\big(f_{s}^{(k)}\big(\sum_{i\in N_{s}(\nu)}h_{i}^{(k-1)}\big)\big)\,\big\|\,ReLU\big(f_{d}^{(k)}\big(\sum_{i\in N_{d}(\nu)}h_{i}^{(k-1)}\big)\big)\,\big\|\,ReLU\big(\sum_{i\in V}h_{i}^{(k-1)}\big)\,\big\|\,h_{\nu}^{(k-1)}\,\big\|\,h_{\nu}^{(0)}\Big) \tag{2}\]
where \(ReLU(\cdot)=\max(0,\cdot)\) is a non-linear activation function. The previously mentioned MLP networks are denoted by \(f_{p}\), \(f_{s}\), and \(f_{d}\); each computes a vector from the node embeddings in the corresponding neighborhood of \(\nu\) (i.e., \(N_{p}(\nu)\), \(N_{s}(\nu)\), and \(N_{d}(\nu)\), respectively), and \(f_{n}\) computes the updated embedding from the concatenated vector. \(V\) is the set of all nodes of the graph \(\ G_{t}=(V,C\cup D)\). \(h_{\nu}^{(0)}\) is the feature vector \(x_{\nu}\) of node \(\nu\). \(\|\) is the vector concatenation operator.
After \(K\) GNN layers, we have computed an embedded graph \(\ G_{t}^{(K)}\), as shown in Figure 2(a), whose node features are now the updated embedding vectors \(h_{\nu}^{(K)}\ \forall\ \nu\in V\), from the input disjunctive graph \(\ G_{t}\) in Figure 2(e) with initial node features \(h_{\nu}^{(0)}\).
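A PyTorch sketch of one such layer is given below. The dimensions follow the configuration reported in Section 4 (8-dimensional embeddings, two 256-unit hidden layers per MLP), dense 0/1 adjacency matrices are assumed for clarity, and the class itself is our illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def make_mlp(d_in: int, d_out: int, hidden: int = 256) -> nn.Sequential:
    # Two 256-unit hidden layers; no activation on the output layer.
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

class DisjunctiveGNNLayer(nn.Module):
    def __init__(self, dim: int = 8):
        super().__init__()
        self.f_p = make_mlp(dim, dim)      # precedent-neighbor branch
        self.f_s = make_mlp(dim, dim)      # succeeding-neighbor branch
        self.f_d = make_mlp(dim, dim)      # disjunctive-neighbor branch
        self.f_n = make_mlp(6 * dim, dim)  # final node update (48 -> 8)

    def forward(self, h, h0, A_p, A_s, A_d):
        # A_* are |V|x|V| 0/1 adjacency matrices, so A @ h sums neighbor embeddings.
        graph_sum = h.sum(dim=0, keepdim=True).expand_as(h)
        z = torch.cat([
            torch.relu(self.f_p(A_p @ h)),  # N_p(v) term of Eq. (2)
            torch.relu(self.f_s(A_s @ h)),  # N_s(v) term
            torch.relu(self.f_d(A_d @ h)),  # N_d(v) term
            torch.relu(graph_sum),          # whole-graph term
            h,                              # h_v^{(k-1)}
            h0,                             # initial node features h_v^{(0)}
        ], dim=-1)
        return self.f_n(z)
```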
### Dispatching Policy Learning with RL
The graph embedding \(G_{t}^{(K)}\) output by the GNN described in the previous section is used as the input for the RL algorithm. More specifically, an actor-critic method is used [24]. As the name suggests, there are two neural networks in the RL process: an actor and a critic. As shown in Figure 2(b), the actor \(\pi\big{(}a_{t}^{\nu}\big{|}G_{t}^{(K)}\big{)}\) maps the embedded graph \(\ G_{t}^{(K)}\) to a probability distribution over the set of all available actions, i.e., the set of processible nodes. The actor model, used to compute the parameterized policy, is structured as a softmax function computing the probability of performing action \(a_{t}^{\nu}\) for the current state \(\ G_{t}^{(K)}\) as follows:
\[\pi\big{(}a_{t}^{\nu}\big{|}G_{t}^{(K)}\big{)}=\frac{\exp\big{(}f_{\pi}\big{(}h_{\nu}^{(K)}\big{)}\big{)}}{\sum_{u\in A_{G_{t}}}\exp\big{(}f_{\pi}\big{(}h_{u}^{(K)}\big{)}\big{)}} \tag{3}\]
where \(a_{t}^{\nu}\) denotes the action of selecting operation (node) \(\nu\) to process, and \(\nu\) is a processible node in the disjunctive graph's node set \(V\). Following the same notation as in the previous section, \(h_{\nu}^{(K)}\) is the embedded vector of node \(\nu\), and \(f_{\pi}\) denotes an MLP network. \(A_{G_{t}}\) represents the set of available actions for the disjunctive graph \(\ G_{t}\).
The critic model, as depicted in Figure 2(c), is another network that learns the value function to reliably optimize the policy. The current study approximates the critic function as
\[V^{\pi}\big{(}G_{t}^{(K)}\big{)}\approx f_{v}\big{(}\sum_{i\in V}h_{i}^{(K)}\big{)} \tag{4}\]
where \(f_{v}\) is an MLP network, and \(\sum_{i\in V}h_{i}^{(K)}\) is the summation of all node embeddings.
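In code, the masked softmax of Equation (3) and the pooled critic of Equation (4) could be sketched as follows, with simple linear layers standing in for the actor and critic MLPs (the paper's versions have two 256-unit hidden layers).

```python
import torch
import torch.nn as nn

f_pi = nn.Linear(8, 1)  # stand-in for the actor MLP (embedding -> score)
f_v = nn.Linear(8, 1)   # stand-in for the critic MLP (pooled embedding -> value)

def actor_policy(h_final: torch.Tensor, doable: torch.Tensor) -> torch.Tensor:
    # Eq. (3): softmax over processible nodes only; others get -inf scores.
    scores = f_pi(h_final).squeeze(-1)
    scores = scores.masked_fill(~doable, float("-inf"))
    return torch.softmax(scores, dim=0)

def critic_value(h_final: torch.Tensor) -> torch.Tensor:
    # Eq. (4): an MLP applied to the sum of all final node embeddings.
    return f_v(h_final.sum(dim=0))
```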
It can be observed that the policy \(\pi\big{(}a_{t}^{\nu}\big{|}G_{t}^{(K)}\big{)}\) learned by the actor is parameterized by a set of parameters \(\Theta=\{\theta_{p},\theta_{s},\theta_{d},\theta_{n},\theta_{\pi},\theta_{v}\}\), corresponding to the MLP networks \(f_{p},f_{s},f_{d},f_{n},f_{\pi}\) and \(f_{v}\). The parameters can be iteratively updated via gradient ascent. In each training iteration, we use the policy \(\pi_{\Theta_{\mathit{old}}}\) with the current "old" parameters \(\Theta_{\mathit{old}}\) to interact with the job shop simulator (Figure 2(d)) and collect transition samples. The parameters \(\Theta\) are updated to optimize the policy:
\[\Theta=\Theta_{\mathit{old}}+\eta\nabla_{\Theta}L(\Theta) \tag{5}\]
where \(\eta\) is the learning rate and \(L(\Theta)\) denotes an objective function to be optimized for an optimal policy.
In this work, proximal policy optimization (PPO) is employed to optimize the policy. To prevent unstable training due to substantial policy changes and encourage exploration
during training, Schulman et al. [24] propose an objective function \(L(\Theta)\) to be optimized at each time step \(t\) as follows:
\[L_{t}(\Theta)=\mathbb{E}[L_{t}^{CLIP}(\Theta)-\alpha L_{t}^{VF}(\Theta)+\beta E_{t}(\pi_{\Theta})] \tag{6}\]
where \(\alpha\) and \(\beta\) are parameters for the objective function. \(L_{t}^{CLIP}(\Theta)\), \(L_{t}^{VF}(\Theta)\), and \(E_{t}(\pi_{\Theta})\) are, respectively, a clipped-surrogate function, a squared-error value function loss, and an entropy bonus, given as follows [24].
1. The clipped-surrogate function is defined as
\[L_{t}^{CLIP}(\Theta)=\mathbb{E}\left[\min\big{(}\rho_{t}\sigma_{t},\ clip(\rho_{t},1-\epsilon,1+\epsilon)\,\sigma_{t}\big{)}\right] \tag{7}\]
where \(\rho_{t}\) denotes a probability ratio of the current and old policies as
\[\rho_{t}=\frac{\pi_{\Theta}\big{(}a_{t}\big{|}G_{t}^{(K)}\big{)}}{\pi_{\Theta _{old}}\big{(}a_{t}\big{|}G_{t}^{(K)}\big{)}} \tag{8}\]
and the estimator of the advantage function \(\sigma_{t}\) at time step \(t\) is computed as
\[\sigma_{t}=\delta_{t}+(\gamma\lambda)\delta_{t+1}+\cdots+(\gamma \lambda)^{T-t+1}\delta_{T-1} \tag{9}\] \[\text{and }\delta_{t}=r_{t}+\gamma V_{\Theta}\big{(}G_{t+1}^{(K)} \big{)}-V_{\Theta}\big{(}G_{t}^{(K)}\big{)} \tag{10}\]
The coefficients \(\gamma\) and \(\lambda\) are, respectively, the discount factor and the parameter for the advantage function estimator. The clip operation ensures that \(\rho_{t}\) does not move outside the interval \([1-\epsilon,1+\epsilon]\), thereby preventing substantial changes in policy.
2. The square-error value function loss is given as: \[L_{t}^{VF}(\Theta)=\big{(}V_{\Theta}\big{(}G_{t}^{(K)}\big{)}-V_{t}^{target} \big{)}^{2}\] (11) where \(V_{t}^{target}=\sum_{i=t}^{T}r_{i}\) denotes the sum of rewards.
3. The entropy bonus term for the current policy \(\pi_{\Theta}\) is introduced to ensure sufficient exploration and is defined as \[E_{t}(\pi_{\Theta})=-\sum_{a}\log(\pi_{\Theta}(a))\,\pi_{\Theta}(a)\] (12) where \(a\) is an action available in the current embedded graph \(G_{t}^{(K)}\).
The PPO procedure maximizes the objective function \(L(\Theta)\) by updating the parameters \(\Theta\) following the gradient direction \(\nabla_{\Theta}L(\Theta)\). Further discussion of the PPO algorithms can be found in References [24] and [14].
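Assembling Equations (6)-(12) into a single scalar objective is mechanical; a sketch over per-timestep rollout tensors follows (the function signature and variable names are ours).

```python
import torch

def ppo_objective(logp_new, logp_old, adv, values, v_target, entropy,
                  eps=0.2, alpha=0.5, beta=0.01):
    ratio = torch.exp(logp_new - logp_old)                  # Eq. (8)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    l_clip = torch.min(ratio * adv, clipped * adv).mean()   # Eq. (7)
    l_vf = ((values - v_target) ** 2).mean()                # Eq. (11)
    # Eq. (6): maximize this quantity via gradient ascent.
    return l_clip - alpha * l_vf + beta * entropy.mean()
```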
## 4 Experimental Results
This section describes the results of the experiments on a number of benchmark instances to evaluate the schedulers trained using GNN-RL. We will first describe the benchmark instances and details of the schedulers used in the experiments. We then report the experimental results, including: (1) a performance comparison of the GNN-RL method with other dispatching rules for the standard SBJSSP (a special case of the ISBJSSP with \(P_{interrupt}\) = 0); (2) a performance comparison demonstrating the practicability of the above methods for the ISBJSSP, which is subjected to random machine interruptions with probability \(P_{interrupt}\) > 0; and (3) a demonstration of the GNN-RL method's ability to generalize a model trained with instances of a specific size to handle ISBJSSP instances of different sizes.
### The Baseline Benchmark Problem Instances
As a baseline to evaluate and compare the GNN-RL methodology with the PDR methods, a set of 18 job shop scheduling problem instances, each of size \(10\times 10\) (consisting of 10 machines and 10 jobs), is employed. Each job involves 10 operations and, depending on the benchmark instance, the operations may have different machine processing times. The 18 instances are commonly used as benchmarks for job shop scheduling [16]. Even though this study focuses on the ISBJSSP, where machine interruptions may occur, the benchmark instances serve as a fair metric for evaluating scheduling efficacy between the GNN-RL method and the PDR methods.
### Scheduler Models and Configurations
**Priority Dispatching Rules (PDRs).** As mentioned before, PDRs [20] are the most common approaches employed in practice for generating immediate solutions for scheduling job shops with unseen instances. We therefore compare the makespans obtained by the GNN-RL schedulers with those obtained using the following PDRs for prioritizing the preference for job execution:
* Most Total Work Remaining (MTWR): the job that has the greatest number of remaining operations
* Least Total Work Remaining (LTWR): the job that has the fewest number of remaining operations
* Shortest Processing Time (SPT): the job whose next operation has the shortest processing time
* Longest Processing Time (LPT): the job whose next operation has the longest processing time
* First In First Out (FIFO): the first job that arrives
* Last In First Out (LIFO): the last job that arrives
* Shortest Queue Next Operation (SQNO): the job whose next operation requires a machine that has the fewest number of jobs waiting
* Longest Queue Next Operation (LQNO): the job whose next operation requires a machine that has the largest number of jobs waiting
* Shortest Total Processing Time (STPT): the job with the shortest total processing time
* Longest Total Processing Time (LTPT): the job with the longest total processing time
* Random: the job that is randomly selected from the set of all eligible jobs
The PDRs can be applied irrespective of whether machine interruptions occur during the job operations.
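To illustrate how lightweight these rules are, the following is a minimal Python sketch of the SPT rule; the job and machine data structures are hypothetical stand-ins, not the representation used in our simulator.

```
def spt_dispatch(waiting_jobs, idle_machines):
    """Shortest Processing Time rule: among jobs whose next operation can run
    on an idle machine, assign those with the shortest processing time first.

    `waiting_jobs` maps a job id to its next operation, given here as a
    (machine_id, processing_time) pair; `idle_machines` is a set of machine
    ids (mutated as assignments are made).
    """
    candidates = sorted(
        (proc_time, job, machine)
        for job, (machine, proc_time) in waiting_jobs.items()
        if machine in idle_machines
    )
    assignments = []
    for proc_time, job, machine in candidates:
        if machine in idle_machines:   # machine may have been taken above
            assignments.append((job, machine))
            idle_machines.remove(machine)
    return assignments
```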
**GNN-RL Schedulers.** Initially targeted for the baseline benchmark instances of \(10\times 10\) in size, a random ISBJSSP instance of size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\) and operation processing times from \(\mathcal{U}(1,99)\), where \(\mathcal{U}\) denotes a uniform distribution, is generated using the ISBJSSP simulator for training the GNN-RL models. The order of machines that each job visits is randomly permuted. After 20 episodes of training on the instance, a new ISBJSSP instance, once again with size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\), processing times from \(\mathcal{U}(1,99)\) and randomly permuted machine order, is generated every 100 iterations. Note that, as discussed in a later section, even though training is conducted on small-size instances to limit computational time and demand, the scheduling strategy that the
model learns can be transferred to solve instances of other sizes effectively. Algorithm 1 outlines the procedure to train a GNN-RL scheduler.
```
Generate a random ISBJSSP instance as starting state \(G_{0}\);
Initialize parameters \(\Theta\) and the parameterized policy \(\pi_{\Theta}\);
Initialize iteration = 0;
repeat
    iteration \(\leftarrow\) iteration + 1;
    for \(t=1,2,\ldots,T\) do
        Execute action \(a_{t}\leftarrow\pi_{\Theta}\big{(}\cdot\big{|}G_{t}^{(K)}\big{)}\) to assign operations to available machines;
        Observe and collect transition sample \((G_{t},a_{t},r_{t},G_{t+1})\);
    end for
    Update parameters \(\Theta\) with gradient ascent to maximize \(L_{t}(\Theta)\), calculated from the collected transition samples;
    if iteration = 100 then
        Generate a new random ISBJSSP instance as starting state \(G_{0}\);
        Reset iteration = 0;
    end if
until validation performance has converged.
```
**Algorithm 1** Training procedure for GNN-RL scheduler
An Adam optimizer [25] with a learning rate (\(\eta\)) of \(2.5\times 10^{-4}\) is used. Two different discount factors (\(\gamma\)) are used in training, namely 0.9 and 1.0 (no discount). We use a GNN with \(K=3\) layers to obtain the graph embeddings. The MLP networks, namely \(f_{p}\), \(f_{s}\), \(f_{d}\), \(f_{n}\), \(f_{t}\) and \(f_{v}\), each consist of two hidden layers with 256 ReLU activation units. \(f_{p}\), \(f_{s}\) and \(f_{d}\) have 8-dimensional inputs and outputs. \(f_{n}\) has a 48-dimensional input and an 8-dimensional output. \(f_{t}\) and \(f_{v}\) have 8-dimensional inputs and scalar outputs. For the PPO hyperparameters, we set \(\lambda=0.95,\ \epsilon=0.2,\ \alpha=0.5,\ \text{and}\ \beta=0.01\), which are the same as proposed in [14]. For comparison purposes, models are trained both without and with the possibility of machine interruptions. When trained on the SBJSSP (without interruptions), we set the probability of interruption \(P_{interrupt}=0\). The trained GNN-RL models are used for a baseline comparison with the PDR methods. For models that are trained on ISBJSSP instances, we train a different model for each \(P_{interrupt}\) value, ranging from 1% to 20%. The trained GNN-RL models for the ISBJSSP are then used to compare with the SPT (shortest processing time) priority rule (which achieves the shortest makespan among the PDRs for the non-interrupting SBJSSP) for the cases with machine interruptions. All experiments are conducted on a machine equipped with an Intel Core i7-7820X processor.
### Results on Non-interrupting SBJSSP
The goal of the job shop problem is to minimize the makespan, which is employed here as the evaluation criterion for comparing the performance of the different SBJSSP schedulers. Two GNN-RL schedulers, namely GNN-RL (1) with \(\gamma=0.9\) and GNN-RL (2) with \(\gamma=1.0\), are trained. Figure 3 shows the results of the two GNN-RL schedulers and the makespans obtained using the PDR schedulers for the 18 benchmark instances. Among the PDRs, the best scheduler appears to be problem-dependent. As shown in Figure 3, on most of the benchmark instances, at least one of the GNN-RL schedulers is able to outperform, or be as competitive as, the PDR schedulers, assuming no machine interruptions occur.
Table 1 reports the sum of the makespans over all problem instances. As shown, averaged over all the benchmark instances, the GNN-RL schedulers produce shorter makespans than the PDR schedulers. The GNN-RL (2) scheduler with \(\gamma=1.0\) has the best average performance. Among the PDR schedulers, the SPT strategy, which prioritizes jobs according to the next operation having the shortest processing time, appears to perform best on average.
### Real-time Adaptive Scheduling of the ISBJSSP for the Baseline Benchmark
In practice, unforeseen interruptions can occur during production. For example, machines in a production line can fail unexpectedly and require a temporary shutdown. To
\begin{table}
\begin{tabular}{l|c c} \hline \hline Scheduler name & Discount factor & Total makespan \\ \hline GNN-RL (1) & 0.9 & 27337 \\ GNN-RL (2) & 1.0 & **26856** \\ \hline MTWR & - & 28687 \\ LTWR & - & 27996 \\ SPT & - & 27827 \\ LPT & - & 29009 \\ FIFO & - & 28169 \\ LIFO & - & 28497 \\ SQNO & - & 28563 \\ LQNO & - & 28818 \\ STPT & - & 28439 \\ LTPT & - & 28845 \\ RANDOM & - & 28988 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Total makespan computed by the GNN-RL and PDR schedulers for the 18 benchmark instances, without machine interruptions
Figure 3: Makespans obtained using the two trained GNN-RL schedulers and the PDRs on the 18 benchmark instances without machine interruptions.
assess whether the GNN-RL method can cope with real-time changes, we simulate scenarios where, at any given time step, each idling machine (excluding those in the middle of processing a job) has a certain probability, denoted \(P_{interrupt}\), of failing or shutting down for a duration of 50 time steps. During a simulated machine failure, no further job can be assigned to the machine. More specifically, under our problem formulation, a machine being shut down is equivalent to removing the nodes utilizing that machine from the disjunctive graph representation for 50 time steps. The schedulers have no prior knowledge of the interruption probability or the downtime of the machines.
Three GNN-RL models are trained. They include the same two models, namely GNN-RL (1) and GNN-RL (2) trained without machine interruptions, as described in Section 4.3. The third model, GNN-RL (3), has the same hyperparameters as GNN-RL (2), but is trained with the same probability of interruption, \(P_{interrupt}\), as assigned to the simulation scenario.
We perform 50 simulations for each of the benchmark instances for a number of \(P_{interrupt}\) values, ranging from 1% to 20%. Among the PDR schedulers, SPT shows the best average performance for almost all the cases and is therefore employed here to compare with the GNN-RL schedulers. Figure 4 shows a comparison of the average results between the GNN-RL and the SPT schedulers. It can be seen that for most instances, the GNN-RL schedulers outperform or are as competitive as the SPT scheduler for \(P_{interrupt}<10\%\). Furthermore, the GNN-RL (2) model and the GNN-RL (3) model trained with interruptions perform consistently better than the GNN-RL (1) model. Moreover, Figure 5 plots the performance, averaged over the 18 benchmark instances, of each scheduler with respect to \(P_{interrupt}\). Also shown in Figure 5 are the means and standard deviations (Std) of the scheduling results for 50 randomly generated instances. Based on the results shown in Figures 4 and 5, the following can be observed:
1. As expected, when the probability of interruption for the machines increases, the makespans produced by the schedulers for completing all the jobs increase.
2. All GNN-RL models produce more efficient makespans than the SPT scheduler when the probability of machine interruption is lower than 5%. It can be seen from Figure 5 that, with interpolation, the GNN-RL models can potentially remain effective up to an 8-10% probability of machine interruption. Beyond \(P_{interrupt}=10\%\), the SPT scheduler produces more efficient makespans in this case study.
3. It is interesting to observe that, for the set of benchmark instances tested, the GNN-RL (3) model trained with the same probability of interruption assigned to the simulator performs quite competitively for almost all cases.
4. As can be seen in Figure 5, when the probability of interruption becomes high (\(P_{interrupt}>10\%\)), the standard
Figure 4: Mean of makespans obtained using the GNN-RL and the SPT schedulers on the 18 baseline benchmark instances.
Figure 5: Total makespans of the GNN-RL and the SPT schedulers for different probabilities of interruption.
deviations for the GNN-RL schedulers are higher than that of the SPT scheduler. The higher standard deviation is probably due to the increased uncertainty from machine interruptions, which affects the predictability of the trained GNN-RL models.
In summary, based on the experimentation on the 18 benchmark instances, the GNN-RL schedulers are shown to be robust for the scenarios where the probability of interruptions for each machine is less than 10%, even when the GNN-RL model is trained based on the scenarios with no machine interruptions.
### Scheduling ISBJSSP Instances of Different Sizes
To assess the scalability and generalization of GNN-RL models to instances of different sizes, we apply the same GNN-RL models trained previously on job shop instances of size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\) to the 40 LA benchmark instances, whose sizes range from 10\(\times\)5 to 30\(\times\)10 and 15\(\times\)15 [17]. Makespans are obtained from 50 simulations on each of the benchmark instances. Figure 6 shows the makespans computed with the SPT scheduler, the GNN-RL (1) and (2) models trained without interruptions, and the GNN-RL (3) models trained with interruptions. In general, and especially for cases with \(P_{interrupt}<10\%\), the GNN-RL schedulers on average perform better than, or at a level comparable to, the SPT scheduler.
In summary, this experimental study with benchmark instances of different sizes shows that the GNN-RL method remains robust for production scenarios with different job shop sizes, even though the models were originally trained on baseline instances of a different size.
## 5 Summary and Discussion
The ability to assign jobs to machines under possible changes of operational conditions is important in practice. This study shows that GNN and RL can be a viable approach for solving the ISBJSSP, a complex and computationally demanding problem subject to unforeseen changes in operating conditions. We implemented a simulator to generate ISBJSSP instances for training and to validate the GNN-RL models for real-time scheduling of the ISBJSSP. For the simulations with no machine interruptions, the dispatching rule generated by the best trained GNN-RL scheduler achieves the best overall makespan, outperforming all of the PDRs. (It should be noted that under perfect job shop conditions, mathematical optimization can produce superior schedules [16].) As the key objective of this study, we simulate scenarios where machines in the job shop can be interrupted and shut down temporarily. The results show that the trained GNN-RL schedulers are robust under interruptions and outperform the PDR approaches when the probability of machine interruption is low (less than 10% in the examples). In practice, it is very unlikely that the job shop would remain operational when the machines are deemed to have a high probability of being shut down. Furthermore, with an emphasis on robustness and practicality, our experimental study shows that the GNN-RL method is able to generalize to different job shop sizes subjected to a range of interruption probabilities. Given the speed with which it outputs actions and its solid performance, the GNN-RL method represents a viable approach applicable to real manufacturing problems that can be closely modeled as an ISBJSSP. While our research utilizes random simulations, domain-specific knowledge should be strategically incorporated in a real production environment to, for example, build specialized reward functions and fine-tune hyperparameters.
Future work could focus on developing methodologies for fine-tuning the parameters of the GNN and RL models in order to further improve the scheduling results. Empirical studies of other adaptive learning algorithms could also be explored. Additional experiments could be conducted to evaluate performance under machine interruptions in real-world job shop environments. Finally, further investigation of the GNN-RL method may
Figure 6: Mean of the makespans obtained on the LA benchmarks.
include other domain-specific constraints, such as limited buffer capacity, queue time constraints and multi-line scheduling that are commonly encountered in semiconductor manufacturing.
## Acknowledgements
The research was partially supported by Samsung Electronics Co. Ltd., Agreement Number SPO-168006, and the US National Institute of Standards and Technology (NIST), Grant Number 70NANB22H098, awarded to Stanford University. The research has also been partially supported by the Stanford Center at the Incheon Global Campus (SCIGC), which is sponsored in part by the Ministry of Trade, Industry, and Energy of the Republic of Korea and managed by the Incheon Free Economic Zone Authority. Certain commercial systems are identified in this article. Such identification does not imply recommendation or endorsement by Samsung, NIST or SCIGC; nor does it imply that the products identified are necessarily the best available for the purpose. Further, any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Samsung, NIST, SCIGC or any other supporting U.S. Government or corporate organizations.
|
2302.12715 | Hiding Data Helps: On the Benefits of Masking for Sparse Coding | Sparse coding, which refers to modeling a signal as sparse linear
combinations of the elements of a learned dictionary, has proven to be a
successful (and interpretable) approach in applications such as signal
processing, computer vision, and medical imaging. While this success has
spurred much work on provable guarantees for dictionary recovery when the
learned dictionary is the same size as the ground-truth dictionary, work on the
setting where the learned dictionary is larger (or over-realized) with respect
to the ground truth is comparatively nascent. Existing theoretical results in
this setting have been constrained to the case of noise-less data. We show in
this work that, in the presence of noise, minimizing the standard dictionary
learning objective can fail to recover the elements of the ground-truth
dictionary in the over-realized regime, regardless of the magnitude of the
signal in the data-generating process. Furthermore, drawing from the growing
body of work on self-supervised learning, we propose a novel masking objective
for which recovering the ground-truth dictionary is in fact optimal as the
signal increases for a large class of data-generating processes. We corroborate
our theoretical results with experiments across several parameter regimes
showing that our proposed objective also enjoys better empirical performance
than the standard reconstruction objective. | Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge | 2023-02-24T16:16:19Z | http://arxiv.org/abs/2302.12715v2 | # Hiding Data Helps:
###### Abstract
Sparse coding refers to modeling a signal as sparse linear combinations of the elements of a learned dictionary. Sparse coding has proven to be a successful and interpretable approach in many applications, such as signal processing, computer vision, and medical imaging. While this success has spurred much work on sparse coding with provable guarantees, work on the setting where the learned dictionary is larger (or _over-realized_) with respect to the ground truth is comparatively nascent. Existing theoretical results in the over-realized regime are limited to the case of noise-less data.
In this paper, we show that for over-realized sparse coding in the presence of noise, minimizing the standard dictionary learning objective can fail to recover the ground-truth dictionary, regardless of the magnitude of the signal in the data-generating process. Furthermore, drawing from the growing body of work on self-supervised learning, we propose a novel masking objective and we prove that minimizing this new objective can recover the ground-truth dictionary. We corroborate our theoretical results with experiments across several parameter regimes, showing that our proposed objective enjoys better empirical performance than the standard reconstruction objective.
## 1 Introduction
Modeling signals as sparse combinations of latent variables has been a fruitful approach in a variety of domains, and has been especially useful in areas such as medical imaging (Zhang et al., 2017), neuroscience (Olshausen and Field, 2004), and genomics (Tibshirani and Wang, 2008), where learning parsimonious representations of data is of high importance. The particular case of modeling high-dimensional data in \(\mathbb{R}^{d}\) as sparse _linear_ combinations of a set of \(p\) vectors in \(\mathbb{R}^{d}\) (referred to as a _dictionary_) has received significant attention over the past two decades, leading to the development of many successful algorithms and theoretical frameworks.
In this case, the typical assumption is that we are given data \(y_{i}\) generated as \(y_{i}\sim Az_{i}+\epsilon_{i}\), where \(A\in\mathbb{R}^{d\times p}\) is a ground-truth dictionary, \(z_{i}\) is a sparse vector, and \(\epsilon_{i}\) is a noise vector. When the dictionary \(A\) is known a priori, the goal of modeling is to recover the sparse representations \(z_{i}\), and the problem is referred to as _compressed sensing_. However, in many applications we do not have access to the ground truth \(A\), and instead hope to simultaneously learn a dictionary \(B\) that approximates \(A\) along with learning sparse representations of the data. This problem is referred to as _sparse coding_ or _sparse dictionary learning_, which is what we focus on in this work.
One of the primary goals of analyses of sparse coding is to provide provable guarantees for recovering the ground-truth dictionary \(A\), both with respect to specific algorithms and information-theoretically. Most prior work with such guarantees has focused exclusively on the setting where the learned dictionary \(B\) has the same size as the ground truth (in \(\mathbb{R}^{d\times p}\)), which is in line with the fact that recovery error is often formulated as some norm of \((B-A)\).
Unfortunately, in practice one does not necessarily have access to the structure of \(A\). It is thus natural to consider what will happen (and how to formulate recovery error) if we learn a dictionary \(B\in\mathbb{R}^{d\times p^{\prime}}\) with \(p^{\prime}>p\), where it is possible to recover \(A\) as a sub-dictionary of \(B\). The study of this _over-realized_ setting was recently taken up in the work of Sulam et al. (2020), in which the authors showed that a modest level of over-realization can be empirically and theoretically beneficial. However, the results of Sulam et al. (2020) are restricted to the noise-less setting where data is generated simply as \(y_{i}\sim Az_{i}\), which motivates the following questions:
_Does over-realized sparse coding run into pitfalls when there is noise in the data-generating process?_
_And if so, is it possible to prevent this by designing new sparse coding algorithms?_
### Main Contributions and Outline
In this work, we answer both of these questions in the affirmative. After providing the necessary background on sparse coding in Section 2, we show in Theorem 3.2 of Section 3 that using standard sparse coding algorithms to learn over-realized dictionaries in the presence of noise leads to overfitting. In fact, our result shows that even if we allow the algorithms to access infinitely many samples and solve NP-hard problems, the learned dictionary \(B\) can still fail to recover \(A\).
The key idea behind this result is that existing approaches for sparse coding largely rely on a two-step procedure (outlined in Algorithm 1): solving the compressed sensing problem \(B\hat{z}=y_{i}\) for a learned dictionary \(B\), and then updating \(B\) based on a reconstruction objective \(\left\|y_{i}-B\hat{z}\right\|^{2}\). However, because we force \(\hat{z}\) to be sparse, by choosing \(B\) to have columns that are linear combinations of the columns of \(A\), one can effectively get around the sparsity constraint on \(\hat{z}\). Consequently, it can be optimal to use such a \(B\) for reconstructing the data \(y_{i}\), in which case \(A\) cannot be recovered as a sub-dictionary of \(B\).
On the other hand, we show in Theorem 3.6 that for a large class of data-generating processes, it is possible to prevent this kind of \(B\) by _masking_ data (as outlined in Algorithm 3): performing the compressed sensing step on a subset \(M\) of the coordinates of \(y_{i}\), and then computing the reconstruction loss on the complement of \(M\). This idea of masking has seen great success in self-supervised learning (Devlin et al., 2019). Our result shows that it can lead to provable benefits in the context of sparse coding.
Finally, in Section 4 we conduct experiments comparing our masking approach with the standard sparse coding approach across several parameter regimes. In all of our experiments, we find that our masking approach leads to better ground truth recovery, which becomes more pronounced as the amount of over-realization increases.
### Related Work
**Compressed Sensing.** The seminal works of Candes et al. (2006), Candes & Tao (2006), and Donoho (2006) established conditions on the dictionary \(A\in\mathbb{R}^{d\times p}\), even in the case where \(p\gg d\) (the _overcomplete_ case), under which it is possible to recover (approximately and exactly) the sparse representations \(z_{i}\) from \(Az_{i}+\epsilon_{i}\). In accordance with these results, several efficient algorithms based on convex programming (Tropp, 2006; Yin et al., 2008), greedy approaches (Tropp & Gilbert, 2007; Donoho et al., 2006; Efron et al., 2004), iterative thresholding (Daubechies et al.; Maleki & Donoho, 2010), and approximate message passing (Donoho et al., 2009; Musa et al., 2018) have been developed for solving the compressed sensing problem. For comprehensive reviews on the theory and applications of compressed sensing, we refer the reader to the works of Candes & Wakin (2008) and Duarte & Eldar (2011).
**Sparse Coding.** Different framings of the sparse coding problem exist in the literature (Krause & Cevher, 2010; Bach et al., 2008; Zhou et al., 2009), but the canonical formulation involves solving a non-convex optimization problem. Despite this hurdle, a number of algorithms (Engan et al., 1999; Aharon et al., 2006; Mairal et al., 2010; Arora et al., 2013, 2014, 2015) have been established to (approximately) solve the sparse coding problem under varying conditions, dating back at least to the groundbreaking work of Olshausen & Field (1997) in computational neuroscience. A summary of convergence results and the conditions required on the data-generating process for several of these algorithms may be found in Table 1 of Gribonval et al. (2014).
In addition to algorithm-specific analyses, there also exists a complementary line of work on characterizing the optimization landscape of dictionary learning. This type of analysis is carried out by Gribonval et al. (2014) in the general setting of an overcomplete dictionary and noisy measurements with possible outliers, extending the previous line of work of Aharon et al. (2006b), Gribonval & Schnass (2010), and Geng et al. (2011). However, as mentioned earlier, these theoretical results rely on learning dictionaries that are the same size as the ground truth. To the best of our knowledge, the over-realized case has only been studied by Sulam et al. (2020), and our work is the first to analyze over-realized sparse coding in the presence of noise.
**Self-Supervised Learning.** Training models to predict masked out portions of the input data is an approach to self-supervised learning that has led to strong empirical results in the deep learning literature (Devlin et al., 2019; Yang et al., 2019; Brown et al., 2020; He et al., 2022). This success has spurred several theoretical studies analyzing how and why different self-supervised tasks can be used to improve model training (Tsai et al., 2020; Lee et al., 2021; Tosh et al., 2021). The most closely related works to our own in this regard have studied the use of masking objectives in autoencoders (Cao et al., 2022; Pan et al., 2022) and hidden Markov models (Wei et al., 2021).
## 2 Preliminaries and Setup
We first introduce some notation that we will use throughout the paper.
**Notation.** Given \(n\in\mathbb{N}\), we use \([n]\) to denote the set \(\{1,2,...,n\}\). For a vector \(x\), we write \(\left\|x\right\|\) for the \(\mathcal{L}_{2}\)-norm of \(x\) and \(\left\|x\right\|_{0}\) for the number of non-zero elements in \(x\). We say a vector \(x\) is \(k\)-sparse if \(\left\|x\right\|_{0}\leq k\) and we use \(\text{supp}(x)\) to denote the support of \(x\). For a vector \(x\in\mathbb{R}^{d}\) and a set \(S\subseteq[d]\), we use \([x]_{S}\in\mathbb{R}^{|S|}\) to denote the restriction of \(x\) to those coordinates in \(S\).
For a matrix \(A\), we use \(A_{i}\) to denote the \(i\)-th column of \(A\). We write \(\left\|A\right\|_{F}\) for the Frobenius norm of \(A\) and \(\left\|A\right\|_{op}\) for its operator norm, and we write \(\sigma_{\min}(A)\) and \(\sigma_{\max}(A)\) for the minimum and maximum singular values of \(A\). For a matrix \(A\in\mathbb{R}^{d\times q}\) and \(S\subseteq[q]\), we use \(A_{S}\in\mathbb{R}^{d\times|S|}\) to refer to \(A\) restricted to the columns whose indices are in \(S\). We use \(\mathbf{I}_{d}\) to denote the \(d\times d\) identity matrix. Finally, for \(M\subseteq[d]\), we use \(P_{M}\in\mathbb{R}^{|M|\times d}\) to refer to the matrix whose action on \(x\) is \(P_{M}x=[x]_{M}\). Note that for a \(d\times q\) matrix \(A\), \(P_{M}A\) gives a subset of the rows of \(A\), which is different from the earlier notation \(A_{S}\), which gives a subset of the columns.
### Background on Sparse Coding
We consider the sparse coding problem in which we are given measurements \(y\in\mathbb{R}^{d}\) generated as \(Az+\epsilon\), where \(A\in\mathbb{R}^{d\times p}\) is a ground-truth dictionary, \(z\in\mathbb{R}^{p}\) is a \(k\)-sparse vector distributed according to a probability measure \(\mathbb{P}_{z}\), and \(\epsilon\in\mathbb{R}^{d}\) is a noise term with i.i.d. entries. The goal is to use the measurements \(y\) to reconstruct a dictionary \(B\) that is as close as possible to the ground-truth dictionary \(A\).
In the case where \(B\) has the same dimensions as \(A\), one may want to formulate this notion of "closeness" (or recovery error) as \(\left\|A-B\right\|_{F}^{2}\). However, directly using the Frobenius norm of \((A-B)\) is too limited, as it is sufficient to recover the columns of \(A\) up to permutations and sign flips. Therefore, a common choice of
recovery error (Gribonval et al., 2014; Arora et al., 2015) is the following:
\[\min_{P\in\Pi}\left\|A-BP\right\|_{F}^{2} \tag{2.1}\]
where \(\Pi\) is the set of orthogonal matrices whose entries are \(0\) or \(\pm 1\) (i.e., signed permutation matrices).
In the over-realized setting, when \(B\in\mathbb{R}^{d\times p^{\prime}}\) with \(p^{\prime}>p\), Equation (2.1) no longer makes sense as \(A\) and \(B\) do not have the same size. In this case, one can generalize Equation (2.1) to measure the distance between each column of \(A\) and the column closest to it in \(B\) (up to change of sign). This notion of recovery was studied by Sulam et al. (2020), and we use the same formulation in this work:
\[d_{R}(A,B)\triangleq\frac{1}{p}\sum_{i=1}^{p}\min_{j\in[p^{\prime}],c\in\{-1, 1\}}\left\|A_{i}-cB_{j}\right\|^{2} \tag{2.2}\]
Note that Equation (2.2) introduces the coefficient \(1/p\) in the recovery error and thus corresponds to the _average_ distance between \(A_{i}\) and its best approximation in \(B\). Also, Equation (2.2) only allows sign changes, even though for reconstructing \(Az\) it is sufficient to recover the columns of \(A\) up to arbitrary scaling. In our experiments we enforce \(A\) and \(B\) to have unit column norms, so a sign change suffices; in theory, one can always rescale the columns of \(B\) to have the correct norms, so this does not change our results.
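For reference, the recovery error in Equation (2.2) can be computed directly; below is a minimal NumPy sketch (in the experiments of Section 4 we report a cosine-distance variant, but the structure is the same, and the function name is ours).

```
import numpy as np

def recovery_error(A, B):
    """d_R(A, B) from Eq. (2.2): for each column of A, the squared distance
    to the closest column of B up to a sign flip, averaged over columns."""
    p = A.shape[1]
    total = 0.0
    for i in range(p):
        dist_plus = np.linalg.norm(B - A[:, [i]], axis=0) ** 2
        dist_minus = np.linalg.norm(B + A[:, [i]], axis=0) ** 2
        total += min(dist_plus.min(), dist_minus.min())
    return total / p
```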
Given access to only measurements \(y\), the algorithm cannot directly minimize the recovery error \(d_{R}(A,\cdot)\). Instead, sparse coding algorithms often seek to minimize the following surrogate loss:
\[\ell(B)=\mathbb{E}_{y}\left[\min_{\hat{z}\in\mathbb{R}^{p^{\prime}}}\left\|y- B\hat{z}\right\|^{2}+h(\hat{z})\right] \tag{2.3}\]
where \(h\) is a sparsity-promoting penalty function. Typical choices of \(h\) include hard sparsity (\(h(\hat{z})=0\) if \(\hat{z}\) is \(k\)-sparse and \(h(\hat{z})=\infty\) otherwise) as well as the \(\mathcal{L}_{1}\) penalty \(h(\hat{z})=\left\|\hat{z}\right\|_{1}\). While hard sparsity is closer to the assumption on the data-generating process, it is well-known that optimizing under exact sparsity constraints is NP-hard in the general case (Natarajan, 1995). When \(h(\hat{z})=\left\|\hat{z}\right\|_{1}\) is used, the learning problem is also known as basis pursuit denoising (Chen and Donoho, 1994) or Lasso (Tibshirani, 1996).
Equation (2.3) is the population loss one wishes to minimize when learning a dictionary \(B\). In practice, sparse coding algorithms must work with a finite number of measurements \(y_{1},y_{2},\ldots,y_{n}\) obtained from the data-generating process and instead minimize the empirical loss \(\tilde{\ell}(B)\):
\[\tilde{\ell}(B)=\frac{1}{n}\sum_{i=1}^{n}\min_{\hat{z}\in\mathbb{R}^{p^{ \prime}}}\left\|y_{i}-B\hat{z}\right\|^{2}+h(\hat{z}) \tag{2.4}\]
### Sparse Coding via Orthogonal Matching Pursuit
Most existing approaches for optimizing Equation (2.4) can be categorized under the general alternating minimization approach described in Algorithm 1.
```
Input: Data \(y\), Dictionary \(B^{(t)}\in\mathbb{R}^{d\times p^{\prime}}\)
Decoding Step: Solve \(B^{(t)}\hat{z}=y\) for \(k\)-sparse \(\hat{z}\)
Update Step: Update \(B^{(t)}\) to \(B^{(t+1)}\) by performing a gradient step on the loss computed using \(B^{(t)}\hat{z}\) and \(y\)
```
**Algorithm 1** Alternating Minimization Framework
At iteration \(t\), Algorithm 1 performs a decoding/compressed sensing step using the current learned dictionary \(B^{(t)}\) and the input data \(y\). As mentioned in Section 1.2, there are several well-studied algorithms for this decoding step. Because we are interested in enforcing a hard-sparsity constraint, we restrict our attention to algorithms that are guaranteed to produce a \(k\)-sparse representation in the decoding step.
We thus focus on Orthogonal Matching Pursuit (OMP) (Mallat & Zhang, 1993; Rubinstein et al., 2008), which is a simple greedy algorithm for the decoding step. The basic procedure of OMP is to iteratively expand a subset \(T\subset[p^{\prime}]\) of atoms (until \(|T|=k\)) by considering the correlation between the unselected atoms in the current dictionary \(B^{(t)}\) and the residual \(\left(y-B_{T}^{(t)}\operatorname*{argmin}_{\hat{z}\in\mathbb{R}^{|T|}}\left\|y -B_{T}^{(t)}\hat{z}\right\|^{2}\right)\) (i.e., the least squares solution using atoms in \(T\)). A more precise description of the algorithm can be found in Rubinstein et al. (2008). Moving forward, we will use \(g_{\text{OMP}}(y,B,k)\) to denote the \(k\)-sparse vector \(\hat{z}\in\mathbb{R}^{p^{\prime}}\) obtained by running the OMP algorithm on an input dictionary \(B\) and a measurement \(y\).
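For concreteness, the following is a minimal NumPy sketch of \(g_{\text{OMP}}(y,B,k)\) along the lines of the description above; it is a simplified reference implementation, not the code used in our experiments.

```
import numpy as np

def omp(y, B, k):
    """Greedy OMP decode: returns a k-sparse z with B @ z approximating y."""
    _, p_prime = B.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the unselected atom most correlated with the current residual.
        correlations = np.abs(B.T @ residual)
        correlations[support] = -np.inf
        support.append(int(np.argmax(correlations)))
        # Re-fit all selected atoms by least squares and update the residual.
        coeffs, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
        residual = y - B[:, support] @ coeffs
    z = np.zeros(p_prime)
    z[support] = coeffs
    return z
```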
### Conditions on the Data-Generating Process
For the data-generating process \(y\sim Az+\epsilon\), it is in general impossible to successfully perform the decoding step in Algorithm 1 even with access to the ground-truth dictionary \(A\). As a result, several conditions have been identified in the literature under which it is possible to provide guarantees on the success of decoding the sparse representation \(z\). We recall two of the most common ones (Candes & Tao, 2005).
**Definition 2.1**.: [_Restricted Isometry Property (RIP)_] We say that a matrix \(A\in\mathbb{R}^{d\times p}\) satisfies \((s,\delta_{s})\)-RIP if the following holds for all \(s\)-sparse \(x\in\mathbb{R}^{p}\):
\[(1-\delta_{s})\|x\|^{2}\leq\|Ax\|^{2}\leq(1+\delta_{s})\|x\|^{2} \tag{2.5}\]
**Definition 2.2**.: [\(\mu\)_-Incoherence_] A matrix \(A\in R^{d\times p}\) with unit norm columns is \(\mu\)-incoherent if:
\[|\langle A_{i},A_{j}\rangle|\leq\mu\quad\text{for all }i\neq j \tag{2.6}\]
These two properties are closely related. For example, as a consequence of the Gershgorin circle theorem, \((\delta_{s}/s)\)-incoherent matrices must satisfy \((s,\delta_{s})\)-RIP.
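One way to see this implication: for an \(s\)-sparse \(x\) with support \(S\), the Gram matrix \(G=A_{S}^{\top}A_{S}\) of the selected columns satisfies

\[\left\|Ax\right\|^{2}=x_{S}^{\top}Gx_{S},\qquad G_{ii}=1,\qquad|G_{ij}|\leq\mu\ \text{ for }i\neq j\]

so by the Gershgorin circle theorem every eigenvalue \(\lambda\) of \(G\) obeys \(|\lambda-1|\leq(s-1)\mu\). Taking \(\mu\leq\delta_{s}/s\) gives \(|\lambda-1|\leq\delta_{s}(s-1)/s<\delta_{s}\), which is exactly the bound required by Equation (2.5).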
Given the prominence of RIP and incoherence conditions in the compressed sensing and sparse coding literature, there has been a large body of work investigating families of matrices that satisfy these conditions. We refer the reader to Baraniuk et al. (2008) for an elegant proof that a wide class of random matrices in \(\mathbb{R}^{d\times p}\) (i.e. subgaussian) satisfy \((k,\delta)\)-RIP with high probability depending on \(\delta\), \(k\), \(p\), and \(d\). For an overview of deterministic constructions of such matrices, we refer the reader to Bandeira et al. (2012) and the references therein.
## 3 Main Results
Having established the necessary background, we now present our main results. Our first result shows that minimizing the population reconstruction loss with a hard-sparsity constraint can lead to learning a dictionary \(B\) that is far from the ground truth. We specifically work with the loss defined as:
\[L(B,k)=\mathbb{E}_{y}\left[\min_{\|\hat{z}\|_{0}\leq k}\|y-B\hat{z}\|^{2}\right] \tag{3.1}\]
Note that in the definition of \(L(B,k)\), we are considering an NP-hard optimization problem (exhaustively searching over all \(k\)-sparse supports). We could instead replace this exhaustive optimization with an alternative least-squares-based approach (so long as it is better than random), and our proof techniques for Theorem 3.2 would still work. We consider this version only to simplify the presentation.
We now show that, under appropriate settings, there exists a dictionary \(B\) whose population loss \(L(B,k)\) is smaller than that of \(A\), while \(d_{R}(A,B)\) is bounded away from \(0\) by a term related to the noise in the data-generating process.
**Assumption 3.1**.: Let \(A\in\mathbb{R}^{d\times p}\) be an arbitrary matrix with unit-norm columns satisfying \((2k,\delta)\)-RIP for \(k=o\left(\frac{d}{\log p}\right)\) and \(\delta=o(1)\), and suppose \(\sigma^{2}_{\min}(A)=\Omega(p/d)\). We assume each measurement \(y\) is generated as \(y\sim Az+\epsilon\), where \(z\) is a random vector drawn from an arbitrary probability measure \(\mathbb{P}_{z}\) on \(k\)-sparse vectors in \(\mathbb{R}^{p}\), and \(\epsilon\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})\) for some \(\sigma>0\).
**Theorem 3.2**.: _[Overfitting to Reconstruction Loss] Consider the data-generating model in Assumption 3.1 and define \(\Lambda(z)\) to be:_
\[\Lambda(z)=\inf\{t\ |\ \mathbb{P}_{z}(\left\|z\right\|\geq t)\leq 1/d\}. \tag{3.2}\]
_Then for \(q=\Omega(p^{2}\max(d\sigma^{2},\Lambda(z)^{2})/\sigma^{2})\), there exists a \(B\in\mathbb{R}^{d\times q}\) such that \(L(B,k)\leq L(A,k)-\Omega(k\sigma^{2})\) and \(d_{R}(A,B)=\Omega(\sigma^{2})\)._
**Proof Sketch.** The key idea is to first determine how much the loss can be decreased by expanding from \(k\)-sparse combinations of the columns of \(A\) to \(2k\)-sparse combinations, i.e., lower bound the gap between \(L(A,k)\) and \(L(A,2k)\). After this, we can construct a dictionary \(B\) whose columns form an \(\epsilon\)-net (with \(\epsilon=\sigma^{2}\)) for all 2-sparse combinations of columns of \(A\). Any \(2k\)-sparse combination of columns in \(A\) can then be approximated as a \(k\)-sparse combination of columns in \(B\), which is sufficient for proving the theorem.
_Remark 3.3_.: Before we discuss the implications of Theorem 3.2, we first verify that Assumption 3.1 is not vacuous, and in fact applies to many matrices of interest. This follows from a result of Rudelson & Vershynin (2008), which shows that after appropriate rescaling, rectangular matrices with i.i.d. subgaussian entries satisfy the singular value condition in Assumption 3.1. Furthermore, such matrices will also satisfy the RIP condition so long as \(k\) is not too large relative to \(d\) and \(p\), as per Baraniuk et al. (2008) as discussed in the previous section.
Theorem 3.2 shows that learning an appropriately over-realized dictionary fails to recover the ground truth _independent_ of the distribution of \(z\). This means that even if we let the norm of the signal \(Az\) in the data-generating process be arbitrarily large, with sufficient over-realization we may still fail to recover the ground-truth dictionary \(A\) by minimizing \(L(B,k)\).
We also observe that the amount of over-realization necessary in Theorem 3.2 depends on how well \(z\sim\mathbb{P}_{z}\) can be bounded with reasonably high probability. If \(z\) is almost surely bounded (as is frequently assumed), we can obtain the following cleaner corollary of Theorem 3.2.
**Corollary 3.4**.: _Consider the same settings as Theorem 3.2 with the added stipulation that \(\mathbb{P}_{z}(\|z\|\leq C)=1\) for a universal constant \(C\). Then for \(q=\Omega(p^{2}d)\), there exists a \(B\in\mathbb{R}^{d\times q}\) such that \(d_{R}(A,B)=\Omega(\sigma^{2})\) and \(L(B,k)\leq L(A,k)-\Omega(k\sigma^{2})\)._
The reason that we can obtain a smaller population loss than the ground truth in Theorem 3.2 is because we can make use of the extra capacity in \(B\) to overfit the noise \(\epsilon\) in the data-generating process. To prevent this, our key idea is to perform the decoding step \(B\hat{z}=y\) on a subset of the dimensions of \(y\) - which we refer to as the _unmasked_ part of \(y\) - and then evaluate the loss of \(B\) using the complement of that subset (the _masked_ part of \(y\)). Intuitively, because each coordinate of the noise \(\epsilon\) is independent, a dictionary \(B\) that well-approximates the noise in the unmasked part of \(y\) will have no benefit in approximating the noise in the masked part of \(y\).
We can formalize this as the following masking objective:
\[L_{mask}(B,k,M) =\mathbb{E}_{y}\left[\left\|\left[y\right]_{[d]\setminus M}-\left[ B\hat{z}\right]_{[d]\setminus M}\right\|^{2}\right] \tag{3.3}\] \[\text{where }\hat{z}([y]_{M}) =g_{\text{OMP}}([y]_{M},P_{M}B,k) \tag{3.4}\]
In defining \(L_{mask}\), we have opted to use \(g_{\text{OMP}}\) in the inner minimization step, as opposed to the exhaustive argmin in the definition of \(L\). Similar to the discussion earlier, we could have instead used any other approach based on least squares to decode \(\hat{z}\) (including the exhaustive approach), so long as we have guarantees on the
probability of failing to recover the true code \(z\) given access to the ground-truth dictionary \(A\). This choice of using OMP is mostly to tie our theory more closely with our experiments.
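A minimal sketch of evaluating the empirical counterpart of \(L_{mask}\) follows, reusing the `omp` sketch from Section 2.2; here `M` is an index array for the unmasked coordinates, and the function name is ours.

```
import numpy as np

def masked_loss(Y, B, k, M):
    """Empirical L_mask: decode on the coordinates in M, score on the rest."""
    d = Y.shape[1]
    Mc = np.setdiff1d(np.arange(d), M)    # masked (held-out) coordinates
    loss = 0.0
    for y in Y:
        z_hat = omp(y[M], B[M, :], k)     # decode from the unmasked part only
        loss += np.sum((y[Mc] - B[Mc, :] @ z_hat) ** 2)
    return loss / len(Y)
```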
Now we present our second main result which shows, in contrast to Theorem 3.2, that optimizing \(L_{mask}\) prevents overfitting noise (albeit in a different but closely related setting).
**Assumption 3.5**.: Let \(A\in\mathbb{R}^{d\times p}\) be an arbitrary matrix such that there exists an \(M\subset[d]\) with \(P_{M}A\) being \(\mu\)-incoherent with \(\mu\leq C/(2k-1)\) for a universal constant \(C<1\). We assume each measurement \(y\) is generated as \(y\sim Az+\epsilon\), where \([z]_{\mathrm{supp}(z)}\sim\mathcal{N}(0,\sigma_{z}^{2}\mathbf{I}_{k})\) with \(\mathrm{supp}(z)\) drawn from an arbitrary probability distribution over all size-\(k\) subsets of \([p]\), and \(\epsilon\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})\) for some \(\sigma>0\).
**Theorem 3.6**.: _[Benefits of Masking] Consider the data-generating model in Assumption 3.5. For any non-empty mask \(M\subset[d]\) such that \(P_{M}A\) satisfies the \(\mu\)-incoherence condition in the assumption, we have_
\[\lim_{\sigma_{z}\to\infty}\Big{(}L_{mask}(A,k,M)-\min_{B}L_{mask}(B,k,M) \Big{)}=0 \tag{3.5}\]
_That is, as the expected norm of the signal \(Az\) increases, there exist minimizers \(B\) of \(L_{mask}\) such that \(d_{R}(A,B)\to 0\)._
**Proof Sketch.** The proof proceeds by expanding out \(L_{mask}(B,k,M)\) and using the fact that \([B\hat{z}]_{[d]\setminus M}\) is independent of \([\epsilon]_{[d]\setminus M}\) to obtain a quantity that closely resembles the prediction risk considered in analyses of linear regression. From there we show that the Bayes risk is lower bounded by the risk of a regularized least squares solution with access to a support oracle. We then rely on a result of Cai & Wang (2011) to show that \(g_{\mathrm{OMP}}([y]_{M},P_{M}A,k)\) recovers the support of \(z\) with increasing probability as \(\sigma_{z}\to\infty\), and hence its risk converges to the aforementioned prediction risk.
_Remark 3.7_.: As before, so long as the mask \(M\) is not too small, matrices with i.i.d. subgaussian entries will satisfy the assumptions on \(A\) in Assumption 3.5. In particular, the set of ground truth dictionaries satisfying Assumptions 3.1_and_3.5 is non-empty.
Comparing Theorem 3.6 to Theorem 3.2, we see that approximate minimizers of \(L_{mask}\) can achieve arbitrarily small recovery error, so long as the signal \(Az\) is large enough; whereas for \(L\), there always exist minimizers whose recovery error is bounded away from \(0\). We note that having the expected norm of the signal be large is effectively necessary to hope for recovering the ground truth in our setting, as in the presence of Gaussian noise there is always some non-zero probability that the decoding step can fail. Full proofs of Theorems 3.2 and 3.6 can be found in Section A of the Appendix.
## 4 Experiments
In this section, we examine whether the separation between the performance of sparse coding with or without masking (demonstrated by Theorems 3.2 and 3.6) manifests in practice. To do so, we need to make a few concessions from the theoretical settings introduced in Sections 2.1 and 2.3. Firstly, we cannot directly optimize the expectations in \(L\) and \(L_{mask}\) as defined in Equations (3.1) and (3.3), so we instead optimize the corresponding empirical versions defined in the same vein as Equation (2.4). Another issue is that the standard objective \(L\) requires solving the optimization problem \(\min_{\left\|\hat{z}\right\|_{0}\leq k}\left\|y-B\hat{z}\right\|^{2}\), which is NP-hard in general. In order to experiment with reasonably large values of \(d,p,\text{ and }p^{\prime}\) and to be consistent with the decoding step in \(L_{mask}\), we thus approximately solve the aforementioned optimization problem using OMP.
The approaches for optimizing \(L\) and \(L_{mask}\) given \(n\) samples from the data-generating process are laid out in Algorithms 2 and 3, in which we use \(\mathrm{Proj}_{\mathbb{S}^{d-1}}B\) to denote the result of normalizing all of the columns of \(B\). We also use \(M^{c}\) as a shorthand in Algorithm 3 to denote \([d]\setminus M\).
We point out that Algorithm 3 introduces some features that were not present in the theory of the masking objective; namely, in each iteration, we randomly sample a new mask of a pre-fixed size. This is because if we were to run gradient descent using a single, fixed mask \(M\) at each iteration, as we don't differentiate
through the OMP steps, the gradient with respect to \(B^{(t)}\) computed on the error \(\left\lVert[y]_{[d]\setminus M}-[B\hat{z}]_{[d]\setminus M}\right\rVert^{2}\) would be non-zero only for those rows of \(B\) corresponding to the indices \([d]\setminus M\). To avoid this issue, we sample new masks in each iteration so that every entry of \(B\) can be updated. There are alternative approaches that can achieve this (e.g., deterministically cycling through different masks), but we found them to have similar performance.
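To make one such update concrete, here is a minimal NumPy sketch; it uses the closed-form gradient of the masked squared error with plain gradient descent for brevity (our actual implementation uses Adam in PyTorch), treats the OMP decode as fixed (not differentiated through), and reuses the `omp` sketch from Section 2.2. The function name and argument conventions are ours.

```
import numpy as np

def masking_update(B, batch, k, unmasked_size, lr=1e-3, rng=None):
    """One Algorithm-3-style update: sample a fresh random mask, decode on the
    unmasked coordinates, and step on the error of the masked coordinates."""
    rng = rng or np.random.default_rng()
    d = B.shape[0]
    M = rng.choice(d, size=unmasked_size, replace=False)   # unmasked coords
    Mc = np.setdiff1d(np.arange(d), M)                     # masked coords
    grad = np.zeros_like(B)
    for y in batch:
        z_hat = omp(y[M], B[M, :], k)                      # no gradient through OMP
        resid = y[Mc] - B[Mc, :] @ z_hat
        grad[Mc, :] -= 2.0 * np.outer(resid, z_hat)        # d/dB of squared error
    B = B - lr * grad / len(batch)
    return B / np.linalg.norm(B, axis=0)                   # project columns to unit norm
```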
While we will analyze the performance of Algorithms 2 and 3 across several different experimental setups over the next few subsections, we describe the following facets shared across all setups. We generate a dataset of \(n=1000\) samples \(y_{i}=Az_{i}+\epsilon_{i}\), where \(A\in\mathbb{R}^{d\times p}\) is a standard Gaussian ensemble with normalized columns, the \(z_{i}\) have uniformly random \(k\)-sparse supports whose entries are i.i.d. \(\mathcal{N}(0,1)\), and the \(\epsilon_{i}\) are mean zero Gaussian noise with some fixed variance (which we will vary in our experiments). We also normalize the \(z_{i}\) so as to constrain ourselves to the bounded-norm setting of Corollary 3.4. In addition to the \(1000\) samples constituting the dataset, we also assume access to a held-out set of \(p^{\prime}\) samples from the data-generating process for initializing the dictionary \(B^{(0)}\in\mathbb{R}^{d\times p^{\prime}}\).
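A minimal NumPy sketch of this data-generating process (the function name and seeding are ours):

```
import numpy as np

def generate_dataset(n, d, p, k, noise_std, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, p))
    A /= np.linalg.norm(A, axis=0)              # unit-norm columns
    Y = np.zeros((n, d))
    for i in range(n):
        z = np.zeros(p)
        support = rng.choice(p, size=k, replace=False)
        z[support] = rng.standard_normal(k)
        z /= np.linalg.norm(z)                  # bounded-norm setting (Cor. 3.4)
        Y[i] = A @ z + noise_std * rng.standard_normal(d)
    return A, Y
```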
For training, we use batch versions of Algorithms 2 and 3 in which we perform gradient updates with respect to the mean losses computed over mini-batches of 200 samples. For the actual gradient step, we use Adam (Kingma and Ba, 2014) with its default hyperparameters of \(\beta_{1}=0.9,\beta_{2}=0.999\) and a learning rate of \(\eta=0.001\), as we found Adam trains _significantly_ faster than SGD (and we ran into problems when using large learning rates for SGD). We train for \(500\) epochs (passes over the entire dataset) for both Algorithms 2 and 3. For Algorithm 3, we always use a mask size of \(d-\lfloor d/10\rfloor\), which we selected based on early experiments. We ensured that, even for this fairly large mask size, the gradient norms for both \(L\) and \(L_{mask}\) were of the same order in our experiments and that \(500\) epochs were sufficient for training.
We did not perform extensive hyperparameter tuning, but we found that the aforementioned settings performed better than the alternative choices we tested for both algorithms across all experimental setups. Our implementation is in PyTorch (Paszke et al., 2019), and all of our experiments were conducted on a single P100 GPU.
### Scaling Over-realization
We first explore how the choice of \(p^{\prime}\) for the learned dictionary \(B\) affects the empirical performance of Algorithms 2 and 3 when the other parameters of the problem remain fixed. Theorem 3.2 and Corollary 3.4 indicate that the performance of Algorithm 2 should suffer as we scale \(p^{\prime}\) relative to \(d\) and \(p\).
In order to test whether this is actually the case in practice, we consider samples generated as described above with \(A\in\mathbb{R}^{d\times p}\) for \(d=100,p=200,\text{ and }\left\lVert z\right\rVert_{0}=k=5\) fixed, while scaling the number of atoms \(p^{\prime}\) in \(B\) from \(p^{\prime}=p\) (exactly realized) to \(p^{\prime}=n\) (over-realized and overparameterized). We choose \(\epsilon_{i}\sim\mathcal{N}(0,1/d)\), which is a high noise regime as the expected norm of the noise \(\epsilon_{i}\) will be comparable to that of the signal \(Az_{i}\). To make it computationally feasible to run several trials of our experiments, we consider the \(p^{\prime}\) values \(\{200,400,600,800,1000\}\) and do not consider more fine-grained interpolation between \(p\) and \(n\).
For the training process, we consider two different initialization of \(B^{(0)}\). In the first case, we initialize \(B^{(0)}\) to have columns corresponding to the aforementioned set of held-out \(p^{\prime}\) (normalized) samples from the data-generating process, as this is a standard initialization choice that has been known to work well in practice (Arora et al., 2015). However, this initialization choice in effect corresponds to a dataset of \(n+p^{\prime}\) samples, and it is fair to ask whether this initialization benefit is worth the sample cost relative to a random initialization. Our initial experiments showed that this was indeed the case, i.e. random initialization with access to \(p^{\prime}\) additional samples did not perform better, so we focus on this sample-based initialization. That being said, we did not find the ordering of the performances of Algorithms 2 and 3 sensitive to the initializations we considered, only the final absolute performance in terms of \(d_{R}(A,B)\).
In addition to this purely sample-based initialization, we also consider a "local" initialization of \(B^{(0)}\) to the ground truth \(A\) itself concatenated with \(p^{\prime}-p\) normalized samples. This is obviously not intended to be a practical initialization; the goal here is rather to analyze the extent of overfitting to the noise \(\epsilon_{i}\) in the dataset for both algorithms. Namely, we expect that Algorithm 2 will move further away from the ground truth than Algorithm 3.
The results for training using these initializations for both algorithms and then computing the final dictionary recovery errors \(d_{R}(A,B)\) are shown in Figure 1. We use cosine distance when reporting the error \(d_{R}(A,B)\) since the learned dictionary \(B\) also has normalized columns, so Euclidean distance only changes the scale of the error curves and not their shapes.
For both choices of initialization, we observe that Algorithm 3 outperforms Algorithm 2 as \(p^{\prime}\) increases, with this gap only becoming more prominent for larger \(p^{\prime}\). Furthermore, we find that recovery error actually _worsens_ for Algorithm 2 for every choice of \(p^{\prime}>p\) for both initializations in our setting. While this is possibly unsurprising for initializing at the ground truth, it is surprising for the sample-based initialization which does not start at a low recovery error. On the other hand, training using Algorithm 3 improves the recovery error from initialization when using sample-based initialization for every choice of \(p^{\prime}\) except \(p^{\prime}=n\), which again corresponds to the overparameterized regime in which it is theoretically possible to memorize every sample as an atom of \(B\).
Additionally, we also see that the performance of Algorithm 3 is much less sensitive to the level of over
Figure 1: Comparison of Algorithm 2 (Baseline) and Algorithm 3 (Masking) under the settings of Sections 4.1). Each curve represents the mean of 5 training runs, with the surrounded shaded area representing one standard deviation.
realization in \(B\). When training from local initialization, Algorithm 3 retains a near-constant level of error/overfitting as we scale \(p^{\prime}\). Similarly, when training from sample initialization, performance does not degrade as we scale \(p^{\prime}\), and in fact improves initially with a modest level of over-realization.
This improvement up to a certain amount of over-realization (in our case \(p^{\prime}=2p\)) is seen even in the performance of Algorithm 2 for sample initialization (although note that while the recovery error is better for \(p^{\prime}=2p\) compared to \(p^{\prime}=p\), training still makes the error worse than initialization for Algorithm 2). A similar phenomenon was observed in Sulam et al. (2020) in the setting where \(y_{i}=Az_{i}\) (no noise), and we find it interesting that the phenomenon is (seemingly) preserved even in the presence of noise. We do not investigate the optimal level of over-realization any further, but believe it would be a fruitful direction for future work.
### Scaling All Parameters
The experiments of Section 4.1 illustrate that for the fixed choices of \(d\), \(p\), and \(k\) that we used, scaling the over-realization of \(B\) leads to rapid overfitting in the case of Algorithm 2, while Algorithm 3 maintains good performance. To verify that this is not an artifact of the choices of \(d,p,\text{ and }k\) that we made, we also explore what happens when over-realization is kept at a fixed ratio to the other setting parameters while they are scaled.
For these experiments, we consider \(A\in\mathbb{R}^{d\times p}\) for \(d\in\{100,150,200,250\}\) and scale \(p\) as \(p=2d\) and \(k\) as \(k=\lfloor d/20\rfloor\) to (approximately) preserve the ratio of atoms and sparsity to dimension from the previous subsection. We choose to scale \(p^{\prime}\) as \(p^{\prime}=2p\) since that was the best-performing setting (for the baseline) from the experiments of Figure 1. We keep the noise variance at \(1/d\) to stay in the relatively high noise regime.
As before, we consider a sample-based initialization as well as a local initialization near the ground truth dictionary \(A\). The results for both Algorithms 2 and 3 under the described parameter scaling are shown in Figure 2. Once again we find that Algorithm 3 has superior recovery error, with this gap mostly widening as the parameters are scaled. However, unlike the case of fixed \(d,p,\text{ and }k\), this time the performance of Algorithm 3 also degrades with the scaling. This is to be expected, as increasing \(p\) leads to more ground truth atoms that need to be recovered well in order to have small \(d_{R}(A,B)\).
### Analyzing Different Noise Levels
The performance gaps shown in the plots of Figures 1 and 2 are in the high noise regime, and thus it is fair to ask whether (and to what extent) these gaps are preserved at lower noise levels. We thus revisit the settings of Section 4.1 (choosing \(d\), \(p\), and \(k\) to be the same) and fix \(p^{\prime}=1000\) (the maximum over-realization we consider). We then vary the noise level between variance \(1/d^{2}\) and \(1/d\), using linearly spaced standard deviations \(\{0.01,0.0325,0.055,0.0775,0.1\}\).
Figure 2: Comparison of Algorithm 2 (Baseline) and Algorithm 3 (Masking) under the settings of Section 4.2 (all parameter scaling).
Results are shown for the sample-based initialization as well as the local initialization in Figure 3. Here we see that when the noise variance is very low, there is virtually no difference in performance between Algorithms 2 and 3. Indeed, when the variance is \(1/d^{2}\) we observe that both algorithms are able to near-perfectly recover the ground truth, even from the sample-based initialization.
However, as we scale the noise variance, the gap between the performance of the two algorithms resembles the behavior seen in the experiments of Sections 4.1 and 4.2.
## 5 Conclusion
In summary, we have shown in Sections 3 and 4 that applying the standard frameworks for sparse coding to the case of learning over-realized dictionaries can lead to overfitting the noise in the data. In contrast, we have also shown that by carefully separating the data used for the decoding and update steps in Algorithm 1 via masking, it is possible to alleviate this overfitting problem both theoretically and practically. Furthermore, the experiments of Section 4.3 demonstrate that these improvements obtained from masking are not at the cost of worse performance in the low noise regime, indicating that a practitioner may possibly use Algorithm 3 as a drop-in replacement for Algorithm 2 when doing sparse coding.
Our results also raise several questions for exploration in future work. Firstly, in both Theorem 3.6 and our experiments we have constrained ourselves to the case of sparse signals that follow Gaussian distributions. It is natural to ask to what extent this is necessary, and whether our results can be extended (both theoretically and empirically) to more general settings (we expect, at the very least, that parts of Assumptions 3.1 and 3.5 can be relaxed). Additionally, we have focused on sparse coding under hard-sparsity constraints and using orthogonal matching pursuit, and it would be interesting to study whether our ideas can be used in other sparse coding settings.
Beyond these immediate considerations, however, the intent of our work has been to show that there is still likely much to be gained from applying ideas from recent developments in areas such as self-supervised learning to problems of a more classical nature such as sparse coding. Our work has only touched on the use of a single such idea (masking), and we hope that future work looks into how other recently popular ideas can potentially improve older algorithms.
Finally, we note that this work has been mostly theoretical in nature, and as such we do not anticipate any direct misuses or negative impacts of the results.
Figure 3: Comparison of Algorithm 2 (Baseline) and Algorithm 3 (Masking) under the settings of Section 4.3 (noise scaling).
## Acknowledgements
Rong Ge, Muthu Chidambaram, and Chenwei Wu are supported by NSF Award DMS-2031849, CCF-1845171 (CAREER), CCF-1934964 (Tripods), and a Sloan Research Fellowship. Yu Cheng is supported in part by NSF Award CCF-2307106.
|
2308.07425 | N-Body Oscillator Interactions of Higher-Order Coupling Functions | We introduce a method to identify phase equations that include $N$-body
interactions for general coupled oscillators valid far beyond the weak coupling
approximation. This strategy is an extension of the theory from [Park and
Wilson, SIADS 20.3 (2021)] and yields coupling functions for $N\geq2$
oscillators for arbitrary types of coupling (e.g., diffusive, gap-junction,
chemical synaptic). These coupling functions enable the study of oscillator
networks in terms of phase-locked states, whose stability can be determined
using straightforward linear stability arguments. We demonstrate the utility of
our approach with two examples. First, we use a network of $N=3$ diffusively
coupled complex Ginzburg-Landau (CGL) models and show that the loss of
stability in the splay state occurs through a Hopf bifurcation as a function
of non-weak diffusive coupling. Our reduction also captures asymptotic
limit-cycle dynamics in the phase differences. Second, we use $N=3$ realistic
conductance-based thalamic
neuron models and show that our method correctly predicts a loss in stability
of a splay state for non-weak synaptic coupling. In both examples, our theory
accurately captures model behaviors that weak and recent non-weak coupling
theories cannot. | Youngmin Park, Dan Wilson | 2023-08-14T19:36:27Z | http://arxiv.org/abs/2308.07425v2 | # \(N\)-Body Oscillator Interactions of Higher-Order Coupling Functions
###### Abstract
We introduce a method to identify phase equations that include \(N\)-body interactions for general coupled oscillators valid far beyond the weak coupling approximation. This strategy is an extension of the theory from [Park and Wilson, SIADS 20.3 (2021)] and yields coupling functions for \(N\geq 2\) oscillators for arbitrary types of coupling (e.g., diffusive, gap-junction, chemical synaptic). These coupling functions enable the study of oscillator networks in terms of phase-locked states, whose stability can be determined using straightforward linear stability arguments. We demonstrate the utility of our approach with two examples. First, we use a diffusively coupled complex Ginzburg-Landau (CGL) model with \(N=3\) and show that the loss of stability in its splay state occurs through a Hopf bifurcation, with the non-weak diffusive coupling strength as the bifurcation parameter. Our reduction also captures asymptotic limit-cycle dynamics in the phase differences. Second, we use \(N=3\) realistic conductance-based thalamic neuron models and show that our method correctly predicts a loss in stability of a splay state for non-weak synaptic coupling. In both examples, our theory accurately captures model behaviors that weak and recent non-weak coupling theories cannot.
## 1 Introduction
Oscillatory phenomena exist in many biological [5, 72, 60, 26, 73], chemical [29, 15], and physical systems [45]. Numerical models that capture the important behaviors of these systems often involve complex interactions of large numbers of variables, reducing the visibility of important mechanisms. As such, phase reduction is often used for understanding the aggregate behavior of interacting oscillators in a reduced order setting [29, 26, 17, 49].
The many techniques developed for phase reduction often include specific assumptions that improve tractability at the cost of biological relevance. The Kuramoto model is an exceptionally well-studied model, owing to its elegant simplicity, and has proven invaluable for understanding higher-order interactions and stable synchronous network states [32]. However, the Kuramoto model was originally derived in the case of infinite, homogeneous, globally coupled oscillators [14] near the Hopf bifurcation [33], limiting its use for finite populations away from the Hopf bifurcation. Moreover, its analysis is often limited to studying questions around synchrony as opposed to other phase-locking phenomena, due to the often-taken limit of infinite oscillators.
When a finite number of oscillators is considered, other features may be exploited, each with their own limitations. When the network exhibits symmetries, it is possible to
enumerate all phase-locked states with weak or strong coupling [21], but this method is not suited to networks with asymmetries [24]. In networks of neurons, the pulse-like shape of action potentials allows for the use of pulse coupling [13, 7, 6, 52, 42]. This approach yields analytically tractable results for weak or strong and possibly asymmetric coupling, but the number of oscillators is often limited to pairs. The study of network behavior can be made tractable by using piecewise smooth models, but coupling functions require particular assumptions such as linear coupling [11, 10], weak coupling [9, 1], and Laplacian coupling [44]. In addition, the analysis of phase-locked states is often restricted to understanding the stability of a synchronous network state [10, 12] (although some do consider the stability of splay states [9]).
The most relevant reduction for the present study is the theory of weakly coupled oscillators, which allows for a general form of the vector field and coupling function so long as the coupling strength is weak [18, 28, 47, 49, 1, 48]. The weak-coupling assumption is a severe limitation because it cannot accurately capture the dynamical behavior of many biological oscillator networks, e.g., cortical networks [53, 8], subcortical networks [62], and pacemaker networks [5, 20]. Indeed, recent studies have pushed beyond the weak coupling regime by deriving correction terms in higher orders of the coupling strength, but these too have their limitations. Higher-order phase correction terms considered in [55, 19] require the underlying limit cycle to be strongly attracting, limiting their applicability when Floquet multipliers are close to unity [70]. Recently developed isostable coordinates have proven invaluable for developing more robust phase reductions, e.g., [71, 50]. However, these methods have only been applied to pairs of oscillators without heterogeneity.
In networks consisting of more than 2 oscillators, \(N\)-body interactions on simplicial complexes become relevant. Much recent work has been done to develop phase reductions in this direction, but the study of \(N\)-body interactions has been limited to simple models such as the Kuramoto model [59, 61, 58, 4] or the Ginzburg-Landau equation [32]. Finally, note that these studies begin with higher-order interactions as an assumption, in contrast to [71], where it is shown that higher-order interactions emerge as a function of higher-order corrections to weak coupling theory.
In the present study, we address the gap in the literature by deriving a phase reduction method applicable to networks of coupled oscillators with arbitrary network topology beyond weak coupling. The formulation includes \(N\)-body interactions on simplicial complexes and enables an analysis of phase-locked states using linear stability. The paper is organized as follows. We present the derivation of this new phase reduction method in Section 3 and show some explicit higher-order terms (Section 3.1). We then show how the computed higher-order coupling functions perform by applying the method to the complex Ginzburg-Landau (CGL) model in Section 4, then to the thalamic model in Section 5. Stability of splay states as a function of coupling strength is discussed as part of these results. We conclude the paper with a discussion in Section 6.
All code used to generate figures is available for public use at [https://github.com/youngmp/nbody](https://github.com/youngmp/nbody)[46].
## 2 Background
### Phase and Phase Reduction
Consider a general dynamical system
\[\dot{X}=F(X)+U(X,t), \tag{1}\]
where \(X\in\mathbb{R}^{n}\) is the state, \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a smooth vector field, and \(U(X,t)\in\mathbb{R}^{n}\) is some additive input. Let \(Y\) be a stable \(T\)-periodic orbit that emerges when taking \(U(X,t)=0\). In situations where the timing of oscillations is of interest, it can be useful to consider the dynamics of Equation (1) in terms of a scalar phase \(\theta(X)\in\mathbb{S}^{1}\) rather than in terms of the state. When \(U(X,t)=0\), the notion of isochrons [22] can be used to define phase in the basin of attraction of the limit cycle. Isochrons can be defined as follows: letting \(\theta_{1}\in[0,1)\) be the phase associated with an initial condition \(a(0)\in Y\), the \(\theta_{1}\) isochron is comprised of the set of all \(b(0)\) for which
\[\lim_{t\to\infty}||a(t)-b(t)||=0, \tag{2}\]
where \(||\cdot||\) can be any vector norm. Isochrons are typically scaled so that \(\frac{d\theta}{dt}\) is a constant for trajectories that evolve under the unperturbed flow of the vector field; in this work, we choose \(\frac{d\theta}{dt}=1\) for convenience.
Working in phase coordinates, by restricting one's attention to a close neighborhood of the periodic orbit and allowing \(U(X,t)\neq 0\), through a change of coordinates one arrives at the standard phase reduction [16, 25, 30]
\[\frac{d\theta}{dt} =\frac{\partial\theta}{\partial X}\cdot\frac{dX}{dt}\] \[=\frac{\partial\theta}{\partial X}\cdot\big{(}F(X)+U(X,t)\big{)}\] \[=1+\mathcal{Z}(\theta)\cdot U(X,t), \tag{3}\]
where \(\mathcal{Z}(\theta)=\frac{\partial\theta}{\partial X}\) is evaluated on the periodic orbit at phase \(\theta\), and the 'dot' denotes the dot product. In the third line above, we use the fact that \(\frac{\partial\theta}{\partial X}\cdot F(X)\) was scaled to equal \(1\). Reductions of the form (3) have been used widely as a starting point for the analysis of weakly perturbed and weakly coupled oscillatory systems [16, 54, 43, 68].
### Isostable Coordinates
In Equation (3), the gradient of the phase is evaluated on the periodic orbit. As such, it requires the state \(X\) to remain close to \(Y\) for the reduction to remain valid. This is only guaranteed in the limit of weak forcing; in many practical applications, alternative techniques that can accommodate stronger forcing must be used. One common strategy is to augment the phase coordinates with amplitude coordinates. A variety of techniques for considering both phase and amplitude coordinates have been proposed [67, 55, 64, 66, 57].
In this work, we use the phase-isostable coordinate system, which augments the phase dynamics with additional information about level sets of the slowest decaying modes of the
Koopman operator [37, 40]. To illustrate this coordinate system, let \(U(X,t)=0\) and define \(\Delta X=X-Y(\theta)\). To a linear approximation the dynamics of Equation (1) follow
\[\Delta\dot{X}=J\Delta X, \tag{4}\]
where \(J\) is the Jacobian evaluated at \(Y(\theta(t))\). Notice that (4) is linear and time-varying with the Jacobian being \(T\)-periodic. Let \(\Phi\) be the fundamental matrix, i.e., with the relationship \(\Delta X(T)=\Phi\Delta X(0)\) for initial conditions \(\theta(X(0))\approx 0\). Further, let \(w_{j},v_{j}\), and \(\lambda_{j}\) be left eigenvectors, right eigenvectors, and associated eigenvalues, respectively, of \(\Phi\). Floquet exponents can be computed according to \(\kappa_{j}=\log(\lambda_{j})/T\). Let \(\kappa_{1}\) be the slowest decaying nonzero Floquet exponent. If \(\kappa_{1}\) is unique, an associated isostable coordinate can be defined in the basin of attraction of the limit cycle according to [65]
\[\psi_{1}(X)=\lim_{k\to\infty}(w_{1}^{\top}(\eta(t_{\Gamma}^{k},X)-Y_{0})\exp( -\kappa_{1}t_{\Gamma}^{k})), \tag{5}\]
where \(t_{\Gamma}^{k}\) denotes the time of the \(k\)th transversal of the \(\theta=0\) isochron, \(\eta(t,X)\) is the unperturbed flow of the vector field that takes \(X(0)\) to \(X(t)\), \(Y_{0}\) is the intersection of the periodic orbit and the \(\theta=0\) isochron, and \({}^{\top}\) denotes the transpose. In contrast to isochrons defined in (2), which characterize the infinite time convergence to the periodic orbit, the isostable coordinates defined in (5) give a sense of the distance from the periodic orbit, with larger \(|\psi_{1}(X)|\) values corresponding to states that will take longer to approach the periodic orbit. Isostable coordinates can also be used to characterize faster decaying components of the solution, but an explicit definition akin to (5) is not always possible [31]. Instead, faster decaying isostable coordinates can be defined as level sets of appropriately chosen Koopman eigenfunctions. In this work, we will assume that the faster decaying isostable coordinates decay rapidly and are well-approximated by zero. Using Equation (5), it is possible to show directly that when \(U(X,t)=0,\,\frac{d\psi_{1}}{dt}=\kappa_{1}\psi_{1}\) in the basin of attraction of the limit cycle [65].
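For completeness, the fundamental matrix \(\Phi\) and the Floquet exponents defined above are straightforward to compute numerically. The following minimal sketch (ours, reusing the hypothetical radial oscillator from the previous snippet, whose nonzero exponent is \(-2\)) integrates the variational equation \(\dot{\Phi}=J(Y(t))\Phi\) over one period and applies \(\kappa_{j}=\log(\lambda_{j})/T\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# r' = r(1 - r^2), theta' = 1 in Cartesian form; limit cycle Y(t) = (cos t, sin t).
def J(X):  # Jacobian of the vector field
    x, y = X
    return np.array([[1 - 3*x**2 - y**2, -2*x*y - 1],
                     [-2*x*y + 1,        1 - x**2 - 3*y**2]])

T = 2*np.pi
Y = lambda t: np.array([np.cos(t), np.sin(t)])

def variational(t, Phi_flat):
    Phi = Phi_flat.reshape(2, 2)
    return (J(Y(t)) @ Phi).ravel()

sol = solve_ivp(variational, [0, T], np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_T = sol.y[:, -1].reshape(2, 2)      # monodromy (fundamental) matrix
lam = np.linalg.eigvals(Phi_T)          # Floquet multipliers lambda_j
print(np.log(np.abs(lam)) / T)          # real parts of kappa_j: ~0 and ~-2
```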
### Phase-Isostable Reduction
Information about the slowest decaying isostable coordinate can be used to augment standard phase models of the form (3) to increase the accuracy of the reduction in response to larger magnitude inputs. In the analysis in the following sections, we assume that all non-zero Floquet exponents except \(\kappa_{1}\) have strongly negative real parts so that the associated isostable coordinates decay rapidly and are well-approximated by zero. Moving forward, for notational convenience, we will simply use \(\psi\) and \(\kappa\) to denote the only non-truncated isostable coordinate and its Floquet exponent. Taking this isostable coordinate into account, one can consider a modified version of (3)
\[\dot{\theta}=1+\mathcal{Z}(\theta,\psi)\cdot U(X,t), \tag{6}\]
where the gradient of the phase is not necessarily evaluated on the periodic orbit, but rather, at a state corresponding to \(X(\theta,\psi)\); note that \(X(\theta,0)=Y(\theta)\). In order to use (6), it is necessary to consider the isostable coordinate dynamics as well as the phase dynamics.
Considering the dynamics given by Equation (1), by the chain rule,
\[\frac{d\psi}{dt} =\frac{\partial\psi}{\partial X}\cdot\frac{dX}{dt}\] \[=\frac{\partial\psi}{\partial X}\cdot(F(X)+U(X,t))\] \[=\kappa\psi+\mathcal{I}(\theta,\psi)\cdot U(X,t), \tag{7}\]
where \(\mathcal{I}(\theta,\psi)=\frac{\partial\psi}{\partial X}\) evaluated at \(X(\theta,\psi)\). In the third line above, we use the relationship \(\frac{\partial\psi}{\partial X}\cdot F(X)=\kappa\psi\), which holds since \(\frac{d\psi}{dt}=\kappa\psi\) when \(U(X,t)=0\). Taken together, Equations (6) and (7) comprise the phase-isostable reduction. For computation and analysis purposes, the gradient of the phase and isostable coordinate is often represented according to a Taylor expansion in the isostable coordinate centered at \(\psi=0\)
\[\dot{\theta} =1+(Z^{(0)}(\theta)+\psi Z^{(1)}(\theta)+\psi^{2}Z^{(2)}(\theta)+ \dots)\cdot U(X,t),\] \[\dot{\psi} =\kappa\psi+(I^{(0)}(\theta)+\psi I^{(1)}(\theta)+\psi^{2}I^{(2) }(\theta)+\dots)\cdot U(X,t), \tag{8}\]
where \(Z^{(j)}\) and \(I^{(j)}\) correspond to the \(j^{th}\) order terms in the expansions. Note that neglecting the terms \(Z^{(1)},Z^{(2)},\dots\) yields the same phase dynamics as the standard phase reduction from Equation (3). Reference [69] discusses strategies for numerically computing the necessary terms of the expansion from Equation (8). Taking this expansion to higher orders of accuracy will generally result in a reduced order model that is valid for inputs much larger in magnitude than can be considered using phase-only reduction. Indeed, previous work has shown that reductions of the form (8) can be used to accurately predict phase locking for two coupled oscillators in situations where standard phase-only reductions of the form (3) fail [71, 50].
## 3 Higher Order Coupling with \(N\)-Body Interactions
We now derive a reduced system of phase equations that captures higher-order interactions between coupled oscillators starting with the ordinary differential equation (ODE)
\[\dot{X}_{i}=F_{i}(X_{i})+\delta b_{i}H_{i}(X_{i})+\varepsilon\left[\sum_{j=1} ^{N}a_{ij}G_{ij}(X_{i},X_{j})\right],\quad i=1,2,\dots,N, \tag{9}\]
where each system admits a \(T\)-periodic limit cycle \(Y_{i}(t)\) when \(\varepsilon=0\) and \(\delta=0\). Above, \(\varepsilon,\delta\) are not necessarily small. We assume general smooth vector fields \(F_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i}}\), smooth coupling functions \(G_{ij}:\mathbb{R}^{n_{i}}\times\mathbb{R}^{n_{j}}\rightarrow\mathbb{R}^{n_{i}}\), and a smooth function capturing heterogeneity between oscillators \(H_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i}}\), where \(n_{i}\in\mathbb{N}\) for each oscillator \(i\). The scalars \(a_{ij}\) modulate coupling strength between pairs of oscillators, whereas \(\varepsilon\) modulates the overall coupling strength of the network (if needed, the topology of the network can be represented by a matrix with coordinates \(a_{ij}\)). Note that the models comprising each oscillator can be completely different and that the heterogeneity \(H_{i}\) can simply be absorbed into one of the coupling terms \(G_{ij}\). We will thus not explicitly write \(H_{i}\) from this point forward.
We assume that there is only one nontrivial isostable coordinate similar to prior studies [70, 71, 50] and let \(\kappa<0\) be the corresponding Floquet exponent. We reduce (9) to phase-amplitude coordinates using phase-isostable reduction of the form (6) and (7):
\[\dot{\theta}_{i} =1+\varepsilon\mathcal{Z}_{i}(\theta_{i},\psi_{i})\cdot\sum_{j=1}^ {N}a_{ij}G_{ij}(\theta_{i},\psi_{i},\theta_{j},\psi_{j}), \tag{10}\] \[\dot{\psi}_{i} =\kappa\psi_{i}+\varepsilon\mathcal{I}_{i}(\theta_{i},\psi_{i}) \cdot\sum_{j=1}^{N}a_{ij}G_{ij}(\theta_{i},\psi_{i},\theta_{j},\psi_{j}),\]
where for each oscillator \(i\), \(\theta_{i}\) represents the oscillator phase, \(\psi_{i}\) represents the amplitude of a trajectory perturbed away from the underlying limit cycle, \(\mathcal{Z}_{i}\) is the gradient of the phase often referred to as the phase response curve (PRC), and \(\mathcal{I}_{i}\) is the gradient of the isostable coordinate often referred to as the isostable response curve (IRC). We suppress the time dependence of \(\theta_{i}\) to reduce notational clutter.
The functions \(\mathcal{Z}_{i}\) and \(\mathcal{I}_{i}\) can be computed to arbitrarily high accuracy by computing coefficients of the expansions in \(\psi_{i}\) and \(\varepsilon\) as in Equation (8) (see [69, 51]):
\[\mathcal{Z}_{i}(\theta,\psi) \approx Z_{i}^{(0)}(\theta)+\psi Z_{i}^{(1)}(\theta)+\psi^{2}Z_{i}^ {(2)}(\theta)+\cdots, \tag{11}\] \[\mathcal{I}_{i}(\theta,\psi) \approx I_{i}^{(0)}(\theta)+\psi I_{i}^{(1)}(\theta)+\psi^{2}I_{i}^ {(2)}(\theta)+\cdots,\] (12) \[X_{i}(t) \approx Y_{i}(\theta_{i})+\psi_{i}g_{i}^{(1)}(\theta_{i})+\psi_{i}^{2}g_ {i}^{(2)}(\theta_{i})+\cdots,\] (13) \[\psi_{i}(t) \approx\varepsilon p_{i}^{(1)}(t)+\varepsilon^{2}p_{i}^{(2)}(t)+ \varepsilon^{3}p_{i}^{(3)}(t)+\cdots, \tag{14}\]
where \(Z_{i}^{(k)}\), \(I_{i}^{(k)}\), and \(g_{i}^{(k)}\) are the higher-order correction terms to the infinitesimal (linear) PRC, infinitesimal (linear) IRC, and Floquet eigenfunction, respectively. To proceed with the derivation, we assume that we have performed such computations for a given system and have solutions \(Z_{i}^{(k)}\), \(I_{i}^{(k)}\), and \(g_{i}^{(k)}\) for each \(i=1,\ldots,N\) and \(k\in\mathbb{N}\), for instance, using methods described in [69].
Next, we expand the coupling function \(G_{ij}\) in powers of \(\varepsilon\). Let us fix a particular pair of oscillators \(i\) and \(j\). We use the Floquet eigenfunction approximation for each oscillator,
\[\Delta x_{i}\approx\psi_{i}g_{i}^{(1)}(\theta_{i})+\psi_{i}^{2}g_{i}^{(2)}( \theta_{i})+\cdots, \tag{15}\]
where \(\Delta x_{i}\equiv X_{i}(t)-Y_{i}(\theta_{i}(t))\) is the difference between the limit cycle \(Y_{i}\) and trajectory \(X_{i}\). \(\Delta x_{j}\) has an identical expression in terms of \(j\) instead of \(i\). We view the coupling function as the map \(G_{ij}:\mathbb{R}^{n_{i}+n_{j}}\rightarrow\mathbb{R}^{n_{i}}\), where \(G_{ij}(\Xi_{ij})=[G_{ij,1}(\Xi_{ij}),G_{ij,2}(\Xi_{ij}),\ldots,G_{ij,n_{i}}(\Xi_{ij})]^{\top}\in\mathbb{R}^{n_{i}}\), \(G_{ij,m}:\mathbb{R}^{n_{i}+n_{j}}\rightarrow\mathbb{R}\), and \(\Xi_{ij}=[X_{i}^{\top},X_{j}^{\top}]^{\top}\in\mathbb{R}^{n_{i}+n_{j}}\), an \((n_{i}+n_{j})\times 1\) column vector. Define \(\Lambda_{ij}=[Y_{i}(\theta_{i})^{\top},Y_{j}(\theta_{j})^{\top}]^{\top}\in \mathbb{R}^{n_{i}+n_{j}}\) and \(\Delta\Xi_{ij}=[\Delta x_{i}^{\top},\Delta x_{j}^{\top}]^{\top}\in\mathbb{R}^ {n_{i}+n_{j}}\). Both are \((n_{i}+n_{j})\times 1\) column vectors, so that the relation \(\Xi_{ij}=\Lambda_{ij}+\Delta\Xi_{ij}\) is well-defined.
We apply the standard definition of higher-order derivatives using the "vec" operator (see, for example, [36, 69]) to obtain the multivariate Taylor expansion of \(G_{ij,m}\) for each \(m=1,2,\ldots,n\):
\[G_{ij,m}(\Lambda_{ij}+\Delta\Xi_{ij})=G_{ij,m}(\Lambda_{ij})+G_{ij,m}^{(1)}( \Lambda_{ij})\Delta\Xi_{ij}+\sum_{k=2}^{\infty}\frac{1}{k!}\left[\stackrel{{ k}}{{\otimes}}\Delta\Xi_{ij}^{\top}\right]\text{vec} \left(G_{ij,m}^{(k)}(\Lambda_{ij})\right), \tag{16}\]
where, temporarily treating \(\Lambda_{ij}\) as a vector of dummy variables,
\[G^{(k)}_{ij,m}(\Lambda_{ij})=\frac{\partial\text{vec}\left(G^{(k-1)}_{ij,m}( \Lambda_{ij})\right)}{\partial\Lambda_{ij}^{\top}}\in\mathbb{R}^{(n_{i}+n_{j}) ^{(k-1)}\times(n_{i}+n_{j})}. \tag{17}\]
The vec operator simply reshapes a matrix by stacking its columns, which allows us to avoid calculating high-dimensional tensors. For example, if an \(n\times m\) matrix \(A\) has columns \(a_{i}\in\mathbb{R}^{n}\) for \(i=1,\ldots,m\), then \(\text{vec}(A)\) is the \(nm\times 1\) column vector \((a_{1}^{\top},a_{2}^{\top},\ldots,a_{m}^{\top})^{\top}\). If \(A\) is a Jacobian matrix, taking partial derivatives yields a tensor, whereas taking partials of \(\text{vec}(A)\) yields a matrix.
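In numpy, column-major ('F') reshaping performs exactly this column stacking; a quick illustration (ours):

```python
import numpy as np

def vec(A):
    # Stack the columns of A into one long column vector.
    return A.reshape(-1, order='F')

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])   # 3 x 2, columns a_1 = (1,3,5), a_2 = (2,4,6)
print(vec(A))            # [1 3 5 2 4 6] = (a_1^T, a_2^T)^T

# For a scalar map G, differentiating vec of the (row) gradient again yields a
# matrix (the Hessian) rather than a 3-index tensor, matching the shapes in (17).
```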
We replace \(\Delta\Xi_{ij}\) in (16) with the Floquet eigenfunction expansions (15) and replace each \(\psi_{i}\) with its expansion (14). With these substitutions in place, we collect the expansion of \(G_{ij}\) in powers of \(\varepsilon\). The notation becomes cumbersome, so we summarize this step by writing
\[\begin{split} G_{ij}(\theta_{i},\psi_{i},\theta_{j},\psi_{j})=& \ K^{(0)}_{ij}(\theta_{i},\theta_{j})\\ &+\varepsilon K^{(1)}_{ij}\left(\theta_{i},\theta_{j},p^{(1)}_{i},p^{(1)}_{j}\right)\\ &+\varepsilon^{2}K^{(2)}_{ij}\left(\theta_{i},\theta_{j},p^{(1)}_{ i},p^{(2)}_{i},p^{(1)}_{j},p^{(2)}_{j}\right)\\ &+\varepsilon^{3}K^{(3)}_{ij}\left(\theta_{i},\theta_{j},p^{(1)} _{i},p^{(2)}_{i},p^{(3)}_{i},p^{(1)}_{j},p^{(2)}_{j},p^{(3)}_{j}\right)\\ &+\cdots.\end{split} \tag{18}\]
The \(O(1)\) functions \(K^{(k)}_{ij}\) collect the terms that contain the Floquet eigenfunctions and the partials of \(G_{ij}\). It is straightforward to verify using a symbolic package that the function \(K^{(k)}\) only depends on terms \(p^{(\ell)}_{i}\), \(p^{(\ell)}_{j}\) for \(\ell\leq k\). For additional details, we refer the reader to our code repository [46], where we implement all symbolic manipulation using Sympy [39].
While we now have all the necessary expansions in \(\varepsilon\) to rewrite the phase-amplitude equations in (10) in powers of \(\varepsilon\), there are still two variables for each oscillator, \(\theta_{i}\) and \(\psi_{i}\). Thus, some work remains to reduce everything to a single phase variable. To this end, we proceed with the method suggested by [71, 50], by deriving and solving linear equations for each term \(p^{(k)}_{i}\) in the expansion of \(\psi_{i}\) (14) in terms of \(\theta_{i},\theta_{j}\). We begin by subtracting the moving frame and letting \(\hat{\theta}_{i}=\theta_{i}-t\). Substituting \(\hat{\theta}_{i}\) in (10) yields
\[\dot{\hat{\theta}}_{i} =\varepsilon\sum_{j=1}^{N}a_{ij}\mathcal{Z}_{i}(\hat{\theta}_{i}+ t,\psi_{i})\cdot G_{ij}(\hat{\theta}_{i}+t,\psi_{i},\hat{\theta}_{j}+t,\psi_{j}), \tag{19}\] \[\dot{\psi}_{i} =\kappa\psi_{i}+\varepsilon\sum_{j=1}^{N}a_{ij}\mathcal{I}_{i}( \hat{\theta}_{i}+t,\psi_{i})\cdot G_{ij}(\hat{\theta}_{i}+t,\psi_{i},\hat{ \theta}_{j}+t,\psi_{j}). \tag{20}\]
Now substituting the expansion \(\psi_{i}(t)=\varepsilon p^{(1)}_{i}(t)+\varepsilon^{2}p^{(2)}_{i}(t)+ \varepsilon^{3}p^{(3)}_{i}(t)+\cdots,\) into (20) yields a hierarchy of ODEs in powers of \(\varepsilon\) of \(\psi_{i}\) in terms of \(\hat{\theta}_{i},\hat{\theta}_{j}\). The left-hand side consists of straightforward time derivatives:
\[\dot{\psi}_{i}=\varepsilon\frac{d}{dt}p^{(1)}_{i}+\varepsilon^{2}\frac{d}{dt}p^ {(2)}_{i}+\varepsilon^{3}\frac{d}{dt}p^{(3)}_{i}+\cdots,\]
and the right-hand side includes many cross-multiplication terms:
\[\kappa_{i}\psi_{i}+ \varepsilon\sum_{j=1}^{N}a_{ij}\mathcal{I}_{i}(\hat{\theta}_{i}+t, \psi_{i})\cdot G_{ij}(\hat{\theta}_{i}+t,\psi_{i},\hat{\theta}_{j}+t,\psi_{j})\] \[=\kappa_{i}\left[\varepsilon p_{i}^{(1)}(t)+\varepsilon^{2}p_{i}^ {(2)}(t)+\cdots\right]\] \[\quad+\varepsilon\sum_{j=1}^{N}a_{ij}\left\{\left[I_{i}^{(0)}( \hat{\theta}_{i}+t)+\psi_{i}I_{i}^{(1)}(\hat{\theta}_{i}+t)+\psi_{i}^{2}I_{i}^ {(2)}(\hat{\theta}_{i}+t)+\cdots\right]\right.\] \[\quad\quad\quad\quad\quad\quad\cdot\left[K_{ij}^{(0)}(\hat{ \theta}_{i}+t,\hat{\theta}_{j}+t)\right.\] \[\quad\quad\quad\quad\quad\left.+\,\varepsilon K_{ij}^{(1)}(\hat{ \theta}_{i}+t,\hat{\theta}_{j}+t,p_{i}^{(1)},p_{j}^{(1)})\right.\] \[\quad\quad\quad\quad\quad\left.+\,\varepsilon^{2}K_{ij}^{(2)}( \hat{\theta}_{i}+t,\hat{\theta}_{j}+t,p_{i}^{(1)},p_{i}^{(2)},p_{j}^{(1)},p_{ j}^{(2)})\right.\] \[\quad\quad\quad\quad\left.+\,\left.\varepsilon^{3}K_{ij}^{(3)}( \hat{\theta}_{i}+t,\hat{\theta}_{j}+t,p_{i}^{(1)},p_{i}^{(2)},p_{i}^{(3)},p_{ j}^{(1)},p_{j}^{(2)},p_{j}^{(3)})+\cdots\right]\right\}.\]
Re-collecting in powers of \(\varepsilon\) yields a hierarchy of scalar ODEs, which we show up to order \(\varepsilon^{3}\) in (21) below. For notational clarity, we suppress the explicit \(\theta_{i}\)-, \(\theta_{j}\)- and \(p_{i}^{(k)}\)-dependence of the functions \(I^{(k)}\), \(K^{(k)}\), and \(p_{i}^{(k)}\), and the time dependence of \(p_{i}^{(k)}\).
\[O(\varepsilon): \quad\frac{dp_{i}^{(1)}}{dt}=\kappa_{i}p_{i}^{(1)}+\sum_{j=1}^{N} a_{ij}I_{i}^{(0)}\cdot K_{ij}^{(0)},\] \[O(\varepsilon^{2}): \quad\frac{dp_{i}^{(2)}}{dt}=\kappa_{i}p_{i}^{(2)}+\sum_{j=1}^{N} a_{ij}\left(I_{i}^{(0)}\cdot K_{ij}^{(1)}+p_{i}^{(1)}I_{i}^{(1)}\cdot K_{ij}^{(0)} \right),\] \[O(\varepsilon^{3}): \quad\frac{dp_{i}^{(3)}}{dt}=\kappa_{i}p_{i}^{(3)}+\sum_{j=1}^{N} a_{ij}\left(I_{i}^{(0)}\cdot K_{ij}^{(2)}+p_{i}^{(1)}I_{i}^{(1)}\cdot K_{ij}^{(1)}+p_{ i}^{(2)}I_{i}^{(1)}\cdot K_{ij}^{(0)}+\left(p_{i}^{(1)}\right)^{2}I_{i}^{(2)} \cdot K_{ij}^{(0)}\right),\] \[\vdots \tag{21}\]
All ODEs are first-order inhomogeneous differential equations with forcing terms that depend on lower-order solutions. As such, we can solve each ODE explicitly.
We note that the dependence of the forcing function on \(p_{i}^{(k)}\) and \(p_{j}^{(k)}\) is polynomial because the expansion in \(\Delta\Xi\) followed by its substitution with the expansion in \(\psi\) and the subsequent substitution in \(\varepsilon\) are simply multinomial expansions. This observation allows us to write the equations more compactly:
\[\frac{dp_{i}^{(k)}}{dt}=\kappa_{i}p_{i}^{(k)}+\sum_{j=1}^{N}a_{ij}\sum_{E_{k-1} }f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\hat{\theta}_{i}+t,\hat{\theta}_{j}+ t)\left(p_{i}^{(\alpha)}\right)^{\beta+1}\left(p_{j}^{(\gamma)}\right)^{\delta+1}, \quad k=1,2,\ldots,\]
where \(E_{k}:=\{\alpha,\beta,\gamma,\delta\in\mathbb{N}:\alpha+\beta+\gamma+\delta=k\}\) is an index set, the functions \(f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}\) contain all other terms that do not explicitly depend on \(p_{i}^{(\alpha)}\) and \(p_{j}^{(\gamma)}\). Strictly speaking,
the function \(p_{i}^{(0)}\) is the zero function, but when using the notation above, we abuse notation and assume \(p_{i}^{(0)}=1\). The solution to the above linear equation is given by
\[p_{i}^{(k)}(t)=\sum_{j=1}^{N}a_{ij}\int_{t_{0}}^{t}e^{\kappa_{i}(t-s)}\sum_{E_{k-1}}f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\hat{\theta}_{i}+s,\hat{\theta}_{j}+s)\left(p_{i}^{(\alpha)}\right)^{\beta+1}\left(p_{j}^{(\gamma)}\right)^{\delta+1}\,\mathrm{d}s+e^{\kappa_{i}t}C,\quad k=1,2,\ldots,\]
where \(C\) is a constant of integration. To discard transients, we ignore the constant of integration and take \(t_{0}\to-\infty\). For convenience, we also make the change of variables \(s\to t-s\). Then the solution becomes
\[p_{i}^{(k)}(t)=\sum_{j=1}^{N}a_{ij}\int_{0}^{\infty}e^{\kappa_{i}s}\sum_{E_{k- 1}}f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\hat{\theta}_{i}+t-s,\hat{\theta} _{j}+t-s)\left(p_{i}^{(\alpha)}\right)^{\beta+1}\left(p_{j}^{(\gamma)}\right)^ {\delta+1}\,\mathrm{d}s. \tag{22}\]
Recalling that \(p_{i}^{(k)}\) are coefficients of the \(\varepsilon\)-expansion of \(\psi_{i}\), it follows that each \(\psi_{i}\) can be written directly in terms of \(\theta_{1},\ldots,\theta_{N}\), thus eliminating the \(\psi_{i}\) variables. Moreover, each \(p_{i}^{(k)}\) can be written in terms of lower-order \(p_{i}^{(\alpha)}\), i.e., (22) is a recursive equation.
**Assumption 1.** We assume that the timescale of \(\theta_{i}\) differs sufficiently from the timescale of \(p_{i}^{(k)}\) such that we can rewrite (22) as
\[p_{i}^{(k)}(t)=\sum_{j=1}^{N}\sum_{E_{k-1}}a_{ij}\left(p_{i}^{(\alpha)}\right) ^{\beta+1}\left(p_{j}^{(\gamma)}\right)^{\delta+1}g_{ij,\alpha,\beta,\gamma, \delta}^{(k)}(\theta_{i},\theta_{j}), \tag{23}\]
where
\[g_{ij,\alpha,\beta,\gamma,\delta}^{(k)}(\theta_{i},\theta_{j})=\int_{0}^{ \infty}e^{\kappa_{i}s}f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\theta_{i}-s, \theta_{j}-s)\,\mathrm{d}s.\]
This formulation has two benefits. First, it significantly simplifies numerical calculations because the integral may be written as a 1-d convolution by taking \(\theta_{i}=\theta_{j}+\hat{a}\) for each \(\hat{a}\in[0,2\pi)\). Then
\[\int_{0}^{\infty}e^{\kappa_{i}s}f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\theta_{i}-s,\theta_{j}-s)\,\mathrm{d}s=\int_{-\infty}^{\theta_{j}}e^{\kappa_{i}(\theta_{j}-s)}f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(s+\hat{a},s)\,\mathrm{d}s=[H(\theta_{j})e^{\kappa_{i}\theta_{j}}]*f_{\hat{a}}(\theta_{j}),\]
where \(H\) denotes the Heaviside step function and
\[f_{\hat{a}}(t)=f_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(t+\hat{a},t).\]
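Because \(f_{\hat{a}}\) is \(T\)-periodic, the filtered function can also be evaluated mode-by-mode in Fourier space, since \(\int_{0}^{\infty}e^{\kappa_{i}s}e^{in\omega(t-s)}\,\mathrm{d}s=e^{in\omega t}/(in\omega-\kappa_{i})\). A minimal numpy sketch (ours), with a sanity check against the closed form for a cosine input:

```python
import numpy as np

def exp_filter_periodic(f_vals, kappa, T):
    """g(t) = int_0^inf exp(kappa*s) f(t - s) ds for T-periodic samples of f,
    computed mode-by-mode: c_n -> c_n / (i*n*w - kappa), with w = 2*pi/T."""
    M = len(f_vals)
    w = 2*np.pi / T
    c = np.fft.fft(f_vals)                  # (unnormalized) Fourier coefficients
    n = np.fft.fftfreq(M, d=1.0/M)          # integer mode numbers
    return np.real(np.fft.ifft(c / (1j*n*w - kappa)))

# Sanity check: f(t) = cos(t), kappa = -1, T = 2*pi gives g(t) = (cos t + sin t)/2.
t = np.linspace(0, 2*np.pi, 256, endpoint=False)
g = exp_filter_periodic(np.cos(t), kappa=-1.0, T=2*np.pi)
print(np.max(np.abs(g - (np.cos(t) + np.sin(t))/2)))  # ~1e-15
```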
Second, it enables us to show, by induction, that \(p_{i}^{(k)}\) contains \((k+1)\)-body interactions. To this end, consider \(k\leq N-1\) for a given \(N\in\mathbb{N}\) and assume that \(p_{i}^{(k-1)}\) contains \(k\)-body interactions, where
\[p_{i}^{(k-1)}=\sum_{i,j_{1},\ldots,j_{k}}q_{i,j_{1},\ldots,j_{k}}, \tag{24}\]
where \(j_{\ell}=1,\ldots,N\) for \(\ell=1,\ldots,k\) (note \(\ell\) does not necessarily refer to oscillator \(\theta_{\ell}\) in order, but rather \(\theta_{j_{\ell}}\), which could be any oscillator with \(j_{\ell}\neq i\)). The term \(q_{i,j_{1},\ldots,j_{k}}\) captures
higher-order interaction terms. For example, for \(k=1\), each term in \(q_{i,j_{1}}\) is a function of \((\theta_{i},\theta_{j_{1}})\); for \(k=2\), each term in \(q_{i,j_{1},j_{2}}\) is a function of \((\theta_{i},\theta_{j_{1}},\theta_{j_{2}})\); and so on.
Note that the functions \((p_{i}^{(\alpha)})^{\beta+1}\) and \((p_{j}^{(\gamma)})^{\delta+1}\) in (23) each depend on oscillators \(i,j_{1},\ldots,j_{k}\) no matter the choice of \(\alpha,\beta,\gamma,\delta\), so both functions take the form in (24). In particular, let us assume that we've combined the terms from both functions and call the resulting function \(\hat{q}_{i,j_{1},\ldots,j_{k}}\). Note further that the function \(g_{ij,\alpha,\beta,\gamma,\delta}^{(k)}(\theta_{i},\theta_{j})\) depends on \((\theta_{i},\theta_{j})\) no matter the choice of \(\alpha,\beta,\gamma,\delta\). Then \(p_{i}^{(k)}\) may be written
\[p_{i}^{(k)}(t)=\sum_{j_{k+1}=1}^{N}a_{ij_{k+1}}\sum_{j_{1},\ldots,j_{k}}\hat{q}_{i,j_{1},\ldots,j_{k}}g_{ij_{k+1}}^{(k)}(\theta_{i},\theta_{j_{k+1}}).\]
The summation index has been changed from \(j=1,\ldots,N\) to \(j_{k+1}=1,\ldots,N\) because the order in which \(\theta_{j_{k+1}}\) is written is irrelevant. More importantly, note that the right-hand side now contains the product of terms that depend on \(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k+1}}\). That is, the right-hand side contains \((k+1)\)-body interactions.
It is possible to explicitly continue this argument for each \(N\in\mathbb{N}\), but we skip these details here. Note that while we may continue to calculate \(p_{i}^{(k)}\) for \(k\geq N\), \(p_{i}^{(k)}\) will contain at most \(N\)-body interactions. Similarly, if we truncate the sum in \(k\) up to some \(k<N\), then there will be at most \((k+1)\)-body interactions.
We now return to the derivation and expand the phase equation (19) in \(\varepsilon\):
\[\dot{\hat{\theta}}_{i} =\varepsilon\sum_{j=1}^{N}a_{ij}\mathcal{Z}_{i}(\hat{\theta}_{i} +t,\psi_{i})\cdot G_{ij}(\hat{\theta}_{i}+t,\hat{\theta}_{j}+t)\] \[=\varepsilon\sum_{j=1}^{N}a_{ij}\left[Z_{i}^{(0)}+\psi_{i}Z_{i}^{ (1)}+\psi_{i}^{2}Z_{i}^{(2)}+\psi_{i}^{3}Z_{i}^{(3)}+\cdots\right]\cdot\left[ K_{ij}^{(0)}+\varepsilon K_{ij}^{(1)}+\varepsilon^{2}K_{ij}^{(2)}+ \varepsilon^{3}K_{ij}^{(3)}+\cdots\right].\]
Substituting the expansion for \(\psi_{i}\) and collecting in powers of \(\varepsilon\) yields a virtually identical right-hand side to (21) with \(Z\) in place of \(I\):
\[\dot{\hat{\theta}}_{i} =\varepsilon\sum_{j=1}^{N}a_{ij}Z_{i}^{(0)}\cdot K_{ij}^{(0)}\] \[\quad+\varepsilon^{2}\sum_{j=1}^{N}a_{ij}\left(Z_{i}^{(0)}\cdot K _{ij}^{(1)}+p_{i}^{(1)}Z_{i}^{(1)}\cdot K_{ij}^{(0)}\right)\] \[\quad+\varepsilon^{3}\sum_{j=1}^{N}a_{ij}\left(Z_{i}^{(0)}\cdot K _{ij}^{(2)}+p_{i}^{(1)}Z_{i}^{(1)}\cdot K_{ij}^{(1)}+p_{i}^{(2)}Z_{i}^{(1)} \cdot K_{ij}^{(0)}+\left(p_{i}^{(1)}\right)^{2}Z_{i}^{(2)}\cdot K_{ij}^{(0)} \right),\] \[\quad\vdots\]
This differential equation is a system of nonautonomous ODEs for the phase dynamics of each oscillator. Organizing in terms of powers of \(\varepsilon\), we can write the phase equation as
\[\dot{\hat{\theta}}_{i}=\sum_{k=1}^{\infty}\sum_{j=1}^{N}\varepsilon^{k}a_{ij} \left[\sum_{E_{k-1}}h_{i,j,\alpha,\beta,\gamma,\delta}^{(k)}(\hat{\theta}_{i}+ t,\hat{\theta}_{j}+t)\left(p_{i}^{(\alpha)}(t)\right)^{\beta+1}\left(p_{j}^{( \gamma)}(t)\right)^{\delta+1}\right], \tag{25}\]
where \(h^{(k)}_{i,j,\alpha,\beta,\gamma,\delta}\) is defined for \(Z\) just as \(f^{(k)}_{i,j,\alpha,\beta,\gamma,\delta}\) is defined for \(I\). It is straightforward to show that the \(k\)th term in (25) contains \((k+1)\)-body interactions using the same argument we used to show \((k+1)\)-body interactions in (22). Then, by collecting each interaction term and the constants \(a_{ij}\) into the functions \(\hat{h}^{(k)}_{i,j_{1},\ldots,j_{k}}\), we arrive at the nonautonomous phase equation,
\[\dot{\hat{\theta}}_{i}=\sum_{k=1}^{\infty}\sum_{j_{1},\ldots,j_{k}}\varepsilon^ {k}\hat{h}^{(k)}_{i,j_{1},\ldots,j_{k}}(\hat{\theta}_{i}+t,\hat{\theta}_{j_{1} }+t\ldots,\hat{\theta}_{j_{k}}+t). \tag{26}\]
**Assumption 2.** We assume that first-order averaging is sufficient to take the coupling strength \(\varepsilon\) well beyond first order. This assumption almost certainly places upper bounds on the values of \(\varepsilon\), but it is exceptionally convenient for numerical calculations. As our results will show, higher-order averaging is not necessary for the reduced equations to capture nonlinear effects. However, if needed, we will utilize higher-order averaging from, e.g., [34, 35] in future work.
Applying first-order averaging to (26) yields the autonomous phase equation
\[\dot{\theta}_{i}=\sum_{k=1}^{\infty}\sum_{j_{1},\ldots,j_{k}}\varepsilon^{k} \mathcal{H}^{(k)}_{i,j_{1},\ldots,j_{k}}(\theta_{i},\theta_{j_{1}},\ldots, \theta_{j_{k}}), \tag{27}\]
where
\[\mathcal{H}^{(k)}_{i,j_{1},\ldots,j_{k}}(\theta_{i},\theta_{j_{1}},\ldots, \theta_{j_{k}})=\frac{1}{T}\int_{0}^{T}\hat{h}^{(k)}_{i,j_{1},\ldots,j_{k}}( \theta_{i}+s,\theta_{j_{1}}+s,\ldots,\theta_{j_{k}}+s)\,\mathrm{d}s.\]
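Numerically, this averaging is just a mean over one period along the diagonal direction. A minimal sketch (ours; the callable and its toy form are stand-ins for the symbolically computed \(\hat{h}^{(k)}\)):

```python
import numpy as np

def average_H(h, phase_diffs, T, M=400):
    """First-order averaging: (1/T) * int_0^T h(s, phi_1 + s, ..., phi_k + s) ds,
    with the reference oscillator's phase set to zero."""
    s = np.linspace(0, T, M, endpoint=False)
    return np.mean(h(s, *[phi + s for phi in phase_diffs]))

# Toy 3-body term mixing absolute phases; only phase differences survive the
# average: h = cos(t1)*cos(t2)*(1 + 0.5*sin(t3)) averages to cos(phi_1)/2.
h = lambda t1, t2, t3: np.cos(t1) * np.cos(t2) * (1 + 0.5*np.sin(t3))
phi = [np.pi/3, np.pi/4]
print(average_H(h, phi, T=2*np.pi), np.cos(phi[0]) / 2)  # both ~0.25
```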
To obtain phase difference equations for the network, we perform the change of coordinates \(s^{\prime}=\theta_{i}+s\) and let \(\phi_{i}=\theta_{i}-\theta_{1}\) (without loss of generality). Then,
\[\mathcal{H}^{(k)}_{i,j_{1},\ldots,j_{k}}(\phi_{j_{1}}-\phi_{i},\ldots,\phi_{j _{k}}-\phi_{i})=\frac{1}{T}\int_{0}^{T}\hat{h}^{(k)}_{i,j_{1},\ldots,j_{k}}( \phi_{j_{1}}-\phi_{i}+s^{\prime},\ldots,\phi_{j_{k}}-\phi_{i}+s^{\prime})\, \mathrm{d}s^{\prime}.\]
Then the phase difference dynamics of \(N\) strongly coupled heterogeneous oscillators up to \(N\)-body interactions taking amplitude dynamics into account is given by,
\[\dot{\phi}_{i}=\sum_{k=1}^{\infty}\varepsilon^{k}\sum_{j_{1},\ldots,j_{k}} \left[\mathcal{H}^{(k)}_{i,j_{1},\ldots,j_{k}}(\phi_{j_{1}}-\phi_{i},\ldots, \phi_{j_{k}}-\phi_{i})-\mathcal{H}^{(k)}_{1,j_{1},\ldots,j_{k}}(\phi_{j_{1}}, \ldots,\phi_{j_{k}})\right],\quad i=2,\ldots,N. \tag{28}\]
This equation is a generalized version of the two-body interactions in [50] and a generalization beyond the second-order coupling terms in [71]. Since we often do not examine particular terms in the inner summation, we rewrite the right-hand side as a single \(\mathcal{H}\) function for each oscillator \(i\) and order \(k\) for notational convenience:
\[\dot{\phi}_{i}=\sum_{k=1}^{\infty}\varepsilon^{k}\mathcal{H}^{(k)}_{i}(\phi_{ 2},\ldots,\phi_{N}),\quad i=2,\ldots,N. \tag{29}\]
### Explicit Higher-Order Terms
Higher-order interactions in the reduced equation (29) are more obvious if we unravel the right-hand side of (29) for a few powers of \(\varepsilon\). The \(O(\varepsilon)\) term is simply the two-body interaction from weak coupling theory:
\[O(\varepsilon):\quad\sum_{j=1}^{N}\frac{a_{ij}}{T}\int_{0}^{T}h_{i,j,0,0,0,0}^{ (1)}(s,\theta_{j}-\theta_{i}+s)\mathrm{d}s. \tag{30}\]
The \(O(\varepsilon^{2})\) term is the three-body interaction from [71] but with implicit heterogeneity:
\[\begin{split} O(\varepsilon^{2}):\quad\sum_{j=1}^{N}\sum_{ \ell=1}^{N}\sum_{E_{1}}\frac{a_{ij}}{T}\int_{0}^{T}\Big{[}h_{i,j,1,0,0,0}^{(2)} (s,\theta_{j}-\theta_{i}+s)a_{i\ell}g_{i\ell,1,0,0,0}^{(2)}(s,\theta_{\ell}- \theta_{i}+s)\\ +h_{i,j,0,0,1,0}^{(2)}(s,\theta_{j}-\theta_{i}+s)a_{j\ell}g_{j \ell,0,0,1,0}^{(2)}(s,\theta_{\ell}-\theta_{i}+s)\Big{]}\,\mathrm{d}s.\end{split} \tag{31}\]
The \(O(\varepsilon^{3})\) term is the novel four-body interaction for \(N\) general heterogeneous non-weakly coupled oscillators:
\[\begin{split} O(\varepsilon^{3}):\quad\sum_{j=1}^{N}\frac{a_{ ij}}{T}\int_{0}^{T}\left[h_{i,j,2,0,0,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)p_{i}^{(2) }+h_{i,j,1,1,0,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)\left(p_{i}^{(1)}\right)^{2 }\right.\\ \left.+h_{i,j,0,0,2,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)p_{j}^{(2) }+h_{i,j,0,0,1,1}^{(3)}\left(p_{j}^{(1)}\right)^{2}\right]\,\mathrm{d}s.\end{split} \tag{32}\]
To more explicitly see four-body interactions, we use the multinomial theorem [3] to expand \((p_{i}^{(k)})^{m}\) for each \(k,m\):
\[\left(\sum_{\ell=1}^{N}a_{i\ell}g_{i\ell}^{(k)}(\theta_{i},\theta_{\ell})\right)^{m}=\sum_{k_{1}+k_{2}+\cdots+k_{N}=m}\binom{m}{k_{1},k_{2},\ldots,k_{N}}\prod_{u=1}^{N}\left(a_{iu}g_{iu}^{(k)}(\theta_{i},\theta_{u})\right)^{k_{u}}.\]
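The cross terms responsible for the additional bodies are easy to see symbolically; a minimal Sympy check (ours, with \(a_{\ell},g_{\ell}\) standing in for \(a_{i\ell}\) and \(g^{(1)}_{i\ell}(\theta_{i},\theta_{\ell})\)):

```python
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
g1, g2, g3 = sp.symbols('g1 g2 g3')
p1 = a1*g1 + a2*g2 + a3*g3          # pairwise sum, as in p_i^(1)
print(sp.expand(p1**2))
# a1**2*g1**2 + 2*a1*a2*g1*g2 + 2*a1*a3*g1*g3 + ... : the products g_l*g_u
# couple theta_i to two other oscillators simultaneously, adding a body.
```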
We can then unravel (32) using the definition of \(p_{i}^{(2)}\) (22):
\[\begin{split}\sum_{j=1}^{N}\frac{a_{ij}}{T}\int_{0}^{T}\Bigg{[}h_{i,j,2,0,0,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)\sum_{\ell=1}^{N}a_{i\ell}g_{i\ell}^{(2)}(s,\theta_{\ell}-\theta_{i}+s)\\ +h_{i,j,1,1,0,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)\sum_{\ell=1}^{N}\sum_{u=1}^{N}a_{i\ell}g_{i\ell}^{(1)}(s,\theta_{\ell}-\theta_{i}+s)a_{iu}g_{iu}^{(1)}(s,\theta_{u}-\theta_{i}+s)\\ +h_{i,j,0,0,2,0}^{(3)}(s,\theta_{i}-\theta_{j}+s)\sum_{\ell=1}^{N}a_{j\ell}g_{j\ell}^{(2)}(s,\theta_{\ell}-\theta_{j}+s)\\ +h_{i,j,0,0,1,1}^{(3)}(s,\theta_{i}-\theta_{j}+s)\sum_{\ell=1}^{N}\sum_{u=1}^{N}a_{j\ell}g_{j\ell}^{(1)}(s,\theta_{\ell}-\theta_{j}+s)a_{ju}g_{ju}^{(1)}(s,\theta_{u}-\theta_{j}+s)\Bigg{]}\,\mathrm{d}s,\end{split}\]
where
\[g_{i\ell}^{(2)}(\theta_{i},\theta_{j})=\sum_{u=1}^{N}\int_{0}^{ \infty}e^{\kappa_{i}s^{\prime}}\left[f_{i,\ell,1,0,0,0}(\theta_{i}-s^{\prime}, \theta_{j}-s^{\prime})a_{iu}g_{iu}^{(1)}(\theta_{i},\theta_{u})\right.\\ +\left.f_{i,\ell,0,0,1,0}(\theta_{i}-s^{\prime},\theta_{j}-s^{ \prime})a_{ju}g_{ju}^{(1)}(\theta_{j},\theta_{u})\right]\,\mathrm{d}s^{\prime}.\]
The four-body interactions are now apparent. Additional higher-order terms may be computed using our Python implementation [46].
## 4 Complex Ginzburg-Landau Model
We now apply our results to diffusively coupled complex Ginzburg-Landau (CGL) models. The ODE form of this model has been studied extensively [71, 50], making it an ideal preliminary test for our results. Let \(X_{i}=(x_{i},y_{i})^{\top}\) and \(N=3\). The network is given by
\[\dot{X}_{i}=F(X_{i})+\varepsilon\sum_{j=1}^{3}a_{ij}G(X_{i},X_{j}),\]
where
\[F(X_{i})=\begin{pmatrix}\sigma x_{i}(\mu-R_{i})-y_{i}(1+\rho(R_{i}-\mu))\\ \sigma y_{i}(\mu-R_{i})+x_{i}(1+\rho(R_{i}-\mu))\end{pmatrix},\quad G(X_{i},X_ {j})=\begin{pmatrix}(x_{j}-x_{i})-d(y_{j}-y_{i})\\ (y_{j}-y_{i})+d(x_{j}-x_{i})\end{pmatrix},\]
and \(R_{i}=x_{i}^{2}+y_{i}^{2}\). We assume all-to-all coupling, where pairwise strengths are given by,
\[a_{ij}=\begin{cases}1/N&\text{if }i\neq j\\ 0&\text{else.}\end{cases}\]
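Before turning to the reductions, the full network is easy to simulate directly as a reference (a sketch of ours; the state layout and integrator settings are our own choices, using the Figure 1 parameters with \(\varepsilon=0.06\) and the placement of \(\varepsilon\) and \(a_{ij}\) as in the network equation above):

```python
import numpy as np
from scipy.integrate import solve_ivp

d, sigma, rho, mu, eps, N = 0.9, 0.1, 0.15, 1.0, 0.06, 3
a = (np.ones((N, N)) - np.eye(N)) / N        # a_ij = 1/N for i != j

def rhs(t, X):
    x, y = X[:N], X[N:]
    R = x**2 + y**2
    fx = sigma*x*(mu - R) - y*(1 + rho*(R - mu))
    fy = sigma*y*(mu - R) + x*(1 + rho*(R - mu))
    # sum_j a_ij * G(X_i, X_j) for the diffusive coupling:
    dx, dy = a @ x - a.sum(1)*x, a @ y - a.sum(1)*y
    return np.concatenate([fx + eps*(dx - d*dy), fy + eps*(dy + d*dx)])

# Initialize near synchrony on the limit cycle (radius 1 for mu = 1):
th0 = np.array([0.0, 0.05, 0.25])
X0 = np.concatenate([np.cos(th0), np.sin(th0)])
sol = solve_ivp(rhs, [0, 2000], X0, max_step=0.1)
print(sol.y[:, -1])  # phase differences can then be estimated from (x_i, y_i)
```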
Calculating the \(\mathcal{H}\) functions (29) is straightforward because the Nyquist frequency of the underlying function is especially low, so only a dozen Fourier terms are required at \(O(\varepsilon^{3})\). Using a Fourier truncation in this scenario greatly reduces the time complexity and memory requirements for the averaging calculation behind the \(\mathcal{H}\) functions (see Appendix A.1 for additional details).
We show comparisons between the full and reduced versions of the CGL model in Figure 1. The top row shows phase estimates of the full model for \(\varepsilon=0.005\) (A) and \(\varepsilon=0.06\) (C), where lighter shades correspond to later times. The bottom row shows the \(O(\varepsilon)\) (blue), \(O(\varepsilon^{2})\) (orange), and \(O(\varepsilon^{3})\) (black) phase models exhibiting qualitatively similar dynamics at \(\varepsilon=0.005\) (E) and \(\varepsilon=0.06\) (G), respectively. Corresponding time traces are shown to the right of each portrait, e.g., panel B corresponds to A, and F corresponds to E.
At \(\varepsilon=0.005\), the full model tends towards an asymptotically stable splay state when initialized near synchrony with phases \((\phi_{2},\phi_{3})=(0.05,0.25)\) (A,B, where \(\phi_{i}\in[0,2\pi)\)). With the same initial values, all \(O(\varepsilon)\) (blue), \(O(\varepsilon^{2})\) (orange), and \(O(\varepsilon^{3})\) (black) phase models coincide with each other and with the full model as expected (E,F). For greater values of \(\varepsilon\), the full model splay state loses stability and the phase differences converge towards a limit cycle attractor (C,D). Only the \(O(\varepsilon^{3})\) (black) phase model exhibits a similar limit cycle
Figure 2: Real (A) and imaginary (B) parts of the eigenvalues of the Jacobian matrix evaluated at the splay state in the reduced CGL models. Blue: \(O(\varepsilon)\), orange: \(O(\varepsilon^{2})\), black: \(O(\varepsilon^{3})\). Parameters are identical to those used in Figure 1.
Figure 1: Comparison of the full (top row) and reduced (bottom row) CGL models. All panels show the corresponding nullclines of the \(O(\varepsilon^{3})\) reduced model. A: Phase difference estimate of the full model dynamics at \(\varepsilon=0.005\). Lighter shades indicate later times. E: The corresponding reduced models (\(O(\varepsilon)\) blue, \(O(\varepsilon^{2})\) orange, and \(O(\varepsilon^{3})\) black). Arrows indicate movement in forward time. Note that all phase models coincide. B, F: Corresponding plots over time of the full model and reduced model, respectively. C, G: full and reduced model dynamics, respectively, for \(\varepsilon=0.06\). D, H: corresponding plots over time of the full and reduced models, respectively. Parameters: \(d=0.9\), \(\sigma=0.1\), \(\rho=0.15\), and \(\mu=1\).
oscillation, while the \(O(\varepsilon)\) (blue) phase model dynamics do not change, and the \(O(\varepsilon^{2})\) (orange) phase model simply converges to synchrony due to changes in the underlying basin of attraction (G,H).
Note that even though both \(O(\varepsilon^{2})\) and \(O(\varepsilon^{3})\) terms contain 3-body interactions, only the \(O(\varepsilon^{3})\) phase model reproduces limit cycle behavior. Thus, this example demonstrates that \(N\)-body interactions _per se_ are not always sufficient to capture the dynamics of the original model and that additional correction terms may be necessary.
The stability of the splay state is straightforward to calculate using the reduced model because only the eigenvalues of the Jacobian matrix evaluated at the splay state \((\phi_{2},\phi_{3})=(2\pi/3,4\pi/3)\) need to be known. By using the Fourier expansion of the \(\mathcal{H}\) functions, only derivatives of sinusoids are required to compute the Jacobian, and these derivatives can be taken rapidly without the need for estimates such as finite differences. The result of this analysis is shown in Figure 2. The left and right panels show the real and imaginary parts of the eigenvalues, respectively. While the \(O(\varepsilon^{2})\) model (orange) does eventually lose stability, it does so at a 4-fold greater coupling strength than the full or \(O(\varepsilon^{3})\) models.
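As an illustration of this procedure (ours), consider the \(O(\varepsilon)\) truncation for identical oscillators with the \(a_{ij}\) weights absorbed into a single pairwise function \(H\) stored by its Fourier coefficients; the coefficients below are placeholders rather than the computed CGL values:

```python
import numpy as np

# Placeholder Fourier coefficients of H(x) = sum_n a_n*cos(n*x) + b_n*sin(n*x).
an = np.array([0.0, 0.10, -0.02])
bn = np.array([0.0, 0.50,  0.05])
n = np.arange(len(an))
dH = lambda x: np.sum(-an*n*np.sin(n*x) + bn*n*np.cos(n*x))  # exact H'(x)

# O(eps) phase-difference dynamics (28) with phi_1 = 0:
#   phi2' = H(-phi2) + H(phi3 - phi2) - H(phi2) - H(phi3), and likewise for phi3.
p2, p3 = 2*np.pi/3, 4*np.pi/3  # splay state
Jac = np.array([
    [-dH(-p2) - dH(p3 - p2) - dH(p2),  dH(p3 - p2) - dH(p3)],
    [ dH(p2 - p3) - dH(p2),           -dH(-p3) - dH(p2 - p3) - dH(p3)],
])
print(np.linalg.eigvals(Jac))  # splay state is stable iff all real parts < 0
```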
## 5 Thalamic Model
We now apply the method to a set of \(N=3\) synaptically coupled conductance-based thalamic neuron models from [56]. These results extend our previous work where we applied a strongly coupled phase reduction method for \(N=2\) thalamic models [50].
\begin{table}
\begin{tabular}{l|l} Parameter & Value \\ \hline \(C\) & \(1\,\mu\mathrm{F}/\mathrm{cm}^{2}\) \\ \(E_{\mathrm{K}}\) & \(-90\,\mathrm{mV}\) \\ \(E_{\mathrm{Na}}\) & \(50\,\mathrm{mV}\) \\ \(E_{\mathrm{T}}\) & \(0\,\mathrm{mV}\) \\ \(E_{\mathrm{L}}\) & \(-70\,\mathrm{mV}\) \\ \(E_{\mathrm{syn}}\) & \(0\,\mathrm{mV}\) (Figure 3) or \(-100\,\mathrm{mV}\) (Figure 5) \\ \(g_{\mathrm{L}}\) & \(0.05\,\mathrm{mS}/\mathrm{cm}^{2}\) \\ \(g_{\mathrm{K}}\) & \(5\,\mathrm{mS}/\mathrm{cm}^{2}\) \\ \(g_{\mathrm{Na}}\) & \(3\,\mathrm{mS}/\mathrm{cm}^{2}\) \\ \(g_{\mathrm{syn}}\equiv\varepsilon\) & \(0\,\mathrm{mS}/\mathrm{cm}^{2}\) to \(0.027\,\mathrm{mS}/\mathrm{cm}^{2}\) \\ \(\alpha\) & 3 \\ \(\beta\) & 2 \\ \(\sigma_{T}\) & \(0.8\) \\ \(V_{\mathrm{T}}\) & \(20\,\mathrm{mV}\) \\ \(I_{\mathrm{app}}\) & \(0.8\,\mu\mathrm{A}/\mathrm{cm}^{2}\) (Figure 3) or \(0.6\,\mu\mathrm{A}/\mathrm{cm}^{2}\) (Figure 5) \\ \end{tabular}
\end{table}
Table 1: Thalamic model parameter values.
The thalamic model is given by the equations,
\[\frac{dV_{i}}{dt} =-(I_{\rm L}(V_{i})+I_{\rm Na}(V_{i})+I_{\rm K}(V_{i})+I_{\rm T}(V_{i })+\frac{g_{\rm syn}}{N}\sum_{j=1}^{N}a_{ij}w_{j}(V_{i}-E_{\rm syn})-I_{\rm app} )/C,\] \[\frac{dh_{i}}{dt} =(h_{\infty}(V_{i})-h_{i})/\tau_{h}(V_{i}),\] \[\frac{dr_{i}}{dt} =(r_{\infty}(V_{i})-r_{i})/\tau_{r}(V_{i}),\] \[\frac{dw_{i}}{dt} =\alpha(1-w_{i})/(1+\exp((V_{i}-V_{\rm T})/\sigma_{T}))-\beta w_{ i},\]
where \(i=1,\ldots,3\), and \(a_{ii}=0\) and \(a_{ij}=1\) otherwise. Remaining equations are in Appendix B and all parameters are shown in Table 1. Given neuron \(i\), the coupling term in the voltage variable \(V_{i}\) is given by the average excitatory effect of the synaptic variables \(w_{j}\) without self coupling. The synaptic conductance parameter \(g_{\rm syn}\) sets the overall coupling strength and is identical to the coupling strength parameter \(\varepsilon\).
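The synaptic interaction enters \(V_{i}\) only through the gating variables \(w_{j}\). As a small self-contained illustration (ours; the periodic voltage trace below is a hypothetical stand-in for a presynaptic spike train, since the remaining model functions live in Appendix B), the gating equation can be integrated directly as written above:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, VT, sigmaT = 3.0, 2.0, 20.0, 0.8

# Hypothetical presynaptic voltage: brief depolarizations every 44 ms.
V = lambda t: -65.0 + 115.0*np.exp(-((t % 44.0) - 5.0)**2 / 2.0)

# Gating equation copied verbatim from the model above.
dw = lambda t, w: alpha*(1 - w)/(1 + np.exp((V(t) - VT)/sigmaT)) - beta*w

sol = solve_ivp(dw, [0, 440], [0.0], max_step=0.05)  # ten periods
print(sol.y[0, -1])  # gating level; scaled by g_syn/N in the dV_i/dt equation
```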
We compare the reduced and full versions of the thalamic model in Figure 3, where the parameters are chosen as in Table 1 with \(E_{\rm syn}=0\,\)mV and \(I_{\rm app}=0.8\,\mu\)A/cm\({}^{2}\). The top row shows phase estimates of the full model for \(\varepsilon=0.005\) (A) and \(\varepsilon=0.027\) (C), where lighter shades correspond to later times. The bottom row shows the \(O(\varepsilon)\) (blue) and \(O(\varepsilon^{2})\) (black) phase models exhibiting qualitatively similar phase dynamics at \(\varepsilon=0.005\) (E) and \(\varepsilon=0.016\) (G), respectively. Corresponding time traces are shown to the right of each portrait, e.g., panel B corresponds to A, and F corresponds to E.
At \(\varepsilon=0.005\), the full model phase differences tend towards an asymptotically stable splay state when initialized near synchrony with phases \((\phi_{2},\phi_{3})=(0.4,1)\) (A,B, where \(\phi_{i}\in[0,2\pi)\)). With the same initial values, all \(O(\varepsilon)\) (blue), \(O(\varepsilon^{2})\) (orange), and \(O(\varepsilon^{3})\) (black) phase models coincide with the full model, as expected (E,F).
For greater values of \(\varepsilon\), phase differences in the full model asymptotically tend towards a limit cycle oscillation (C,D) and the \(O(\varepsilon^{2})\) reduced model tends towards synchrony. While the asymptotic dynamics differ, we nevertheless capture the loss of stability in the splay state. Real and imaginary parts of the eigenvalues of the Jacobian evaluated at the splay state are shown in Figure 4.
To further demonstrate the utility of our method, we show the phase reduction of the thalamic model for a different set of synaptic parameters: \(E_{\rm syn}=0\) and \(\varepsilon<0\). This choice is less biologically relevant because it corresponds to an excitatory chemical synapse with a negative conductance, but the goal of this example is to show that the reduced model can capture additional nonlinear dynamics in a model more complex than the CGL model.
Comparisons between the full and reduced versions of the thalamic model for this new parameter set are shown in Figure 5. The top row shows phase estimates of the full model for \(\varepsilon=-0.008\) (A) and \(\varepsilon=0.0025\) (C), where lighter shades correspond to later times. The bottom row shows the \(O(\varepsilon)\) (blue) and \(O(\varepsilon^{2})\) (black) phase models exhibiting qualitatively similar phase dynamics at \(\varepsilon=-0.0014\) (E) and \(\varepsilon=0.0025\) (G), respectively. Corresponding time traces are shown to the right of each portrait, e.g., panel B corresponds to A, and F corresponds to E.
The right column of Figure 5 (panels C,D and G,H) serves as a sanity check, where \(\varepsilon>0\) puts us back in a biologically realistic regime. The full and reduced models all tend towards synchrony; however, the \(O(\varepsilon^{2})\) model (black) captures the transient dynamics more accurately than the \(O(\varepsilon)\) model.
At \(\varepsilon=-0.008\), the full model phase differences exhibit a loss of stability in the splay state and the asymptotic dynamics tend towards a limit cycle (A,B). The \(O(\varepsilon^{2})\) reduced model captures this behavior (E,F, black), whereas the \(O(\varepsilon)\) reduced model does not (E,F, blue). Note the considerable nonlinearity in the phase difference dynamics as a function of \(\varepsilon\). Despite the small size of \(\varepsilon\), the \(O(\varepsilon)\) and \(O(\varepsilon^{2})\) dynamics differ substantially.
This example is affected by a nearby saddle-node on an invariant cycle (SNIC) bifurcation, which occurs just below the applied current value of \(I_{\rm app}=0.6\,\mu\)A/cm\({}^{2}\), and reminds us that "small" \(\varepsilon\) is relative. The first example of the thalamic model (Figure 3) uses much greater values of \(\varepsilon=0.005\) and \(\varepsilon=0.027\), the latter being an order of magnitude greater than in the current example (Figure 5). The proximity to a SNIC also highlights the short-
Figure 3: Comparison of the full (top row) and reduced (bottom row) thalamic models. All panels show the corresponding nullclines of the \(O(\varepsilon^{2})\) reduced model. A: Phase difference estimate of the full model dynamics. Lighter shades indicate later times. E: The reduced models (\(O(\varepsilon)\) blue and \(O(\varepsilon^{2})\) black) at the approximate corresponding coupling strength. Note that the \(O(\varepsilon)\) model remains at the splay state. Arrows indicate movement in forward time. B, F: Corresponding plots over time of the full model and reduced model, respectively. C, G: full and reduced model dynamics, respectively. D, H: corresponding plots over time of the full and reduced models, respectively. Parameters as in Table 1 with \(E_{\rm syn}=0\,\)mV and \(I_{\rm app}=0.8\,\mu\)A/cm\({}^{2}\).
comings of our second assumption, where we use first-order averaging. The reciprocal of the period is \(1/T=1/(44\,\mathrm{ms})\approx 0.023\,\mathrm{ms}^{-1}\), which places an approximate upper bound on the coupling strength: \(\varepsilon\) must be much smaller than \(1/T\) [35].
Nevertheless, we can compute changes in the stability of the splay state as in the previous examples. The real and imaginary parts of the eigenvalues of the Jacobian matrix evaluated at the splay state are shown in Figure 6. The \(O(\varepsilon^{2})\) model captures the loss in stability while the \(O(\varepsilon)\) model does not.
## 6 Discussion
In summary, we derived coupling functions that capture higher-order \(N\)-body interactions. We applied this method to two systems, the CGL model and a thalamic neuron model. While we did not consider heterogeneity, the formulation allows for the vector fields to be entirely different, so long as the oscillator periods are similar in the absence of coupling. Despite only considering pairwise interactions in the original systems, we found that higher-order terms were necessary to reproduce the dynamics of the original system. In the CGL model, even though the \(O(\varepsilon^{2})\) reduced model contained 3-body interactions, it was the \(O(\varepsilon^{3})\) reduced model (which also contains 3-body interactions) that could capture additional nonlinearities, suggesting that in general, \(N\)-body interactions _per se_ are not always sufficient for accuracy. In the thalamic model, we considered two examples, the first at \(I_{\mathrm{app}}=0.8\,\mu\mathrm{A}/\mathrm{cm}^{2}\), \(E_{\mathrm{syn}}=-100\,\mathrm{mV}\), and the second at \(I_{\mathrm{app}}=0.6\,\mu\mathrm{A}/\mathrm{cm}^{2}\), \(E_{\mathrm{syn}}=0\,\mathrm{mV}\). In the first example, we captured the loss in stability of the splay state using the \(O(\varepsilon^{2})\) model. In the second example, we observed limit cycle behavior in the phase difference of the full model by taking \(g_{\mathrm{syn}}\equiv\varepsilon<0\) and exploring beyond a biologically relevant parameter regime. The \(O(\varepsilon^{2})\) reduced model captured this limit cycle behavior. The \(\varepsilon\) values at which the full and reduced models captured similar behaviors differed notably from those in the previous examples. This difference is likely due to the existence of a SNIC bifurcation just below the parameter value \(I_{\mathrm{app}}=0.6\,\mu\mathrm{A}/\mathrm{cm}^{2}\).
Our method is both a generalization of existing methods that consider higher-order phase interactions and a general framework from which to study higher-order effects. For example,
Figure 4: Real (A) and imaginary (B) parts of the eigenvalues of the Jacobian matrix evaluated at the splay state in the reduced thalamic models. Blue: \(O(\varepsilon)\), orange: \(O(\varepsilon^{2})\), black: \(O(\varepsilon^{3})\). Parameters are identical to those used in Figure 3.
a higher-order reduced model is derived using the Haken-Kelso-Bunz (HKB) equation in [38]. The higher-order terms are the lowest-order Fourier terms of our \(\mathcal{H}\) functions, thus the same questions of existence can be answered with our method and further explored with additional Fourier terms and multi-body interactions. Larger networks of the HKB equation that consider interactions well beyond dyadic [74] fit comfortably within the limitations of our method (see Section 6.1 below for details). Similarly, there is no restriction to applying our method to questions of coordinated movement, e.g., [27], or studies of coupled population dynamics [41].
Our method may aid in addressing questions of synchrony and phase-locking in general finite populations of coupled oscillators with heterogeneity, where order parameters are typically used. For example, the heterogeneous systems and coupling functions considered in [2] cannot exhibit synchrony, and a "bounded synchronization" measurement [23] is necessary. Our method could provide a far more detailed understanding of the bounded synchronization state alongside other possible phase-locked states. Moreover, similar questions could be asked in much more realistic and complex neurobiological models.

Figure 5: Comparison of the full (top row) and reduced (bottom row) thalamic models. All panels show the corresponding nullclines of the \(O(\varepsilon^{2})\) reduced model. A: Phase difference estimate of the full model dynamics. Lighter shades indicate later times. E: The reduced models (\(O(\varepsilon)\) blue and \(O(\varepsilon^{2})\) black) at the approximate corresponding coupling strength. Note that the \(O(\varepsilon)\) model remains at the splay state. Arrows indicate movement in forward time. B, F: Corresponding plots over time of the full model and reduced model, respectively. C, G: Full and reduced model dynamics, respectively. D, H: Corresponding plots over time of the full and reduced models, respectively. Parameters as in Table 1 with \(E_{\text{syn}}=-100\,\text{mV}\) and \(I_{\text{app}}=0.6\,\mu\text{A}/\text{cm}^{2}\).
### Limitations
We begin with limitations related to our implementation. As mentioned in the text, if the \(\mathcal{H}\) functions have a low Nyquist frequency, then a Fourier truncation can be used to greatly reduce the time complexity and memory requirements of our method (see Appendix A.1). In particular, knowledge of the exact types of functions that appear at each order is a significant part of this computational efficiency. However, the current implementation supports the Fourier method only up to order \(O(\varepsilon^{3})\). While it is clear that there is a pattern in the types of separable and non-separable functions that appear in the Fourier terms at higher orders, we have not yet determined a precise formula for this pattern at the time of this writing. Once the pattern is fully understood, it will be possible to compute significantly higher-order interaction terms using the Fourier truncation.
Limitations related to the reduction method outside of our implementation center around the two key assumptions that make the derivation of this method possible. First, we assume that the timescale of the phase \(\theta_{i}-t\) (i.e., the phase equation after subtracting the moving frame) differs sufficiently from the timescale of the functions \(p_{i}^{(k)}\) (the expansion terms of the isostable coordinate \(\psi_{i}\)), such that \(\theta_{i}\) terms can be taken out of a time integral. This assumption is reasonable for at least moderate values of \(\varepsilon\) up to \(\varepsilon=0.06\) in our examples, where the phase difference variables \(\phi_{i}\) tend to vary relatively slowly. However, additional work must be done to examine this assumption more carefully for use at larger \(\varepsilon\).
Second, we use _first order_ averaging, which is technically valid for small \(\varepsilon\) comparable to those used in weak coupling theory. This limitation is especially apparent in the last example, where the thalamic model is near a SNIC bifurcation and the reciprocal of the period \((1/44\,\mathrm{ms}\approx 0.023)\) places an approximate upper bound on the coupling strength \(\varepsilon\), as \(\varepsilon\) must be much smaller than \(1/T\)[35]. This is a good example of where higher-order averaging methods [34, 35] could be used. In addition, we have observed phase drift in the full model (data not shown) in a manner that may not be possible to capture in the current formulation. For example, with \(N=3\) homogeneous oscillators and some values of \(\varepsilon\), two oscillators synchronize and the third exhibits a phase drift, effectively resulting in a two-oscillator system with a drift in the remaining phase difference. In our formulation, a single phase difference equation cannot exhibit drift without heterogeneity. If this effect is due to resonance, it could be captured with higher-order averaging.

Figure 6: Real (A) and imaginary (B) parts of the eigenvalues of the Jacobian matrix evaluated at the splay state in the reduced thalamic models. Blue: \(O(\varepsilon)\), orange: \(O(\varepsilon^{2})\), black: \(O(\varepsilon^{3})\). Parameters are identical to those used in Figure 5.
## 7 Acknowledgments
The authors acknowledge support under the National Science Foundation Grant No. NSF-2140527 (DW).
## Appendix A Numerical Details for Computing Higher-Order Interaction Functions
Recall the non-autonomous version of the phase equation from the text:
\[\dot{\hat{\theta}}_{j}=\sum_{k=1}^{\infty}\sum_{j_{1},\ldots,j_{k}}\varepsilon^{k}\hat{h}_{j,j_{1},\ldots,j_{k}}^{(k)}(\hat{\theta}_{j}+t,\hat{\theta}_{j_{1}}+t,\ldots,\hat{\theta}_{j_{k}}+t).\]
Note that we can abstract and rewrite the phase equation as
\[\dot{\hat{\theta}}_{j}=\sum_{k=1}^{\infty}\varepsilon^{k}\bar{h}_{j}^{(k)}( \hat{\theta}_{1}+t,\ldots,\hat{\theta}_{N}+t),\]
where \(\bar{h}_{j}^{(k)}(\hat{\theta}_{1}+t,\ldots,\hat{\theta}_{N}+t)=\sum_{j_{1},\ldots,j_{k}}\hat{h}_{j,j_{1},\ldots,j_{k}}^{(k)}(\hat{\theta}_{j}+t,\hat{\theta}_{j_{1}}+t,\ldots,\hat{\theta}_{j_{k}}+t)\), because the right-hand side will always depend on all oscillators. When we apply averaging, assuming we can interchange sums and integrals, we arrive at
\[\dot{\theta}_{j}=\frac{1}{T}\sum_{k=1}^{\infty}\int_{0}^{T}\bar{h}_{j}^{(k)}( \theta_{1}+t,\ldots,\theta_{N}+t)\,\mathrm{d}t.\]
Given a function of \(N\) variables, the average integral is straightforward to calculate, but it requires knowledge of the function on a grid. For \(N\) variables and \(K\) points in the integration mesh for each axis, the grid is of size \((K,\ldots,K)\in\mathbb{Z}^{N}\) for some \(K\) sufficiently large. This means evaluating the time-average integral on a grid of size \(K^{N}\), which rapidly becomes intractable for even small \(K\) and \(N\). In the CGL model, \(K=200\), and in the thalamic model, \(K=1000\) or more. Even with small changes to these modest numbers, the number of calculations and the memory requirements grow exponentially. Thus, one option is to exploit the periodicity of the underlying functions and express them in terms of their Fourier coefficients, which are significantly more memory-efficient and easier to calculate.
### Option 1: Averaging in Terms of Fourier Coefficients
In cases where the response functions (\(\mathcal{Z}\) and \(\mathcal{I}\)) and Floquet eigenfunctions have low Nyquist frequencies, only a small number of Fourier coefficients are needed to accurately reproduce the functions. While the following method does not apply to all cases, it is
nevertheless an option that can dramatically improve computation times when it does apply, such as in the CGL model.
Given order \(k\), define \(\boldsymbol{\Theta}=(\theta_{1},\ldots,\theta_{N})\) and \(\boldsymbol{1}=(1,\ldots,1)\). Consider the \(N\)-dimensional Fourier expansion of the integrand:
\[\int_{0}^{T}\bar{h}_{j}^{(k)}(\theta_{1}+t,\ldots,\theta_{N}+t)\,\mathrm{d}t= \int_{0}^{T}\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{j,k,\boldsymbol{m}}e^{i \boldsymbol{m}\cdot(\boldsymbol{\Theta}+\boldsymbol{1}t)}\,\mathrm{d}t,\]
where \(c_{j,k,\boldsymbol{m}}\) are the Fourier coefficients given oscillator \(j\) and order \(k\). Let the coordinates of \(\boldsymbol{m}\) be given by \(\boldsymbol{m}=(m_{1},m_{2},\ldots,m_{N})\). We simplify this integral using straightforward integral properties and orthogonality of the Fourier basis:
\[\begin{split}\int_{0}^{T}\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{j,k,\boldsymbol{m}}e^{i\boldsymbol{m}\cdot(\boldsymbol{\Theta}+\boldsymbol{1}t)}\,\mathrm{d}t&=\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}\int_{0}^{T}c_{j,k,\boldsymbol{m}}e^{i\boldsymbol{m}\cdot(\boldsymbol{\Theta}+\boldsymbol{1}t)}\,\mathrm{d}t\\ &=\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{j,k,\boldsymbol{m}}e^{i\boldsymbol{m}\cdot\boldsymbol{\Theta}}\int_{0}^{T}e^{i\boldsymbol{m}\cdot\boldsymbol{1}t}\,\mathrm{d}t\\ &=\sum_{\boldsymbol{m}\cdot\boldsymbol{1}=0}c_{j,k,\boldsymbol{m}}e^{i\boldsymbol{m}\cdot\boldsymbol{\Theta}}.\end{split} \tag{33}\]
That is, the only integral terms that remain are those whose index vector \(\boldsymbol{m}\) is orthogonal to the vector \(\boldsymbol{1}\), and the integral of each surviving term trivially evaluates to the scalar \(1\). Thus, computing the average in time is simply a matter of computing the set of Fourier coefficients and extracting the relevant subset. If the number of oscillators \(N\) and mesh size \(K\) are large, then the desired averaged function in its final form can be taken as above; otherwise the desired functions can be found by the inverse Fourier transform.
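As a concrete illustration of (33), the following minimal sketch (not from the released code; it normalizes the period to \(2\pi\) and substitutes a toy interaction term for \(\bar{h}_{j}^{(k)}\)) performs the averaging with an \(N\)-dimensional FFT followed by discarding every mode with \(\boldsymbol{m}\cdot\boldsymbol{1}\neq 0\):

```python
import numpy as np

# Time-average a 2pi-periodic function of N angles by keeping only the
# Fourier modes with m . 1 = 0, per Eq. (33).
N, K = 3, 32
theta = 2 * np.pi * np.arange(K) / K
grid = np.meshgrid(*([theta] * N), indexing="ij")

def h(a):  # toy stand-in for one term of \bar h_j^{(k)}
    return np.cos(a[0] - a[1]) * np.sin(a[1] - a[2])

c = np.fft.fftn(h(grid))                       # (unnormalized) coefficients c_m
m = np.meshgrid(*([np.fft.fftfreq(K, d=1.0 / K)] * N), indexing="ij")
keep = sum(m) == 0                             # integer modes orthogonal to (1, ..., 1)
H = np.fft.ifftn(np.where(keep, c, 0.0)).real  # averaged function on the grid
```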
**Claim: the right-hand side of (33) depends on \(N-1\) variables.** Without loss of generality, suppose \(j=1\). Now consider the change of variables \(s=\theta_{1}+t\) and recall that \(\phi_{i}=\theta_{i}-\theta_{1}\). Then rewriting (33) yields
\[\begin{split}\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{1,k,\boldsymbol{m}}\int_{0}^{T}e^{i\boldsymbol{m}\cdot(s,\phi_{2}+s,\ldots,\phi_{N}+s)}\,\mathrm{d}s&=\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{1,k,\boldsymbol{m}}\int_{0}^{T}e^{i[m_{1}s+m_{2}(\phi_{2}+s)+\cdots+m_{N}(\phi_{N}+s)]}\,\mathrm{d}s\\ &=\sum_{\boldsymbol{m}\in\mathbb{Z}^{N}}c_{1,k,\boldsymbol{m}}e^{i(m_{2}\phi_{2}+\cdots+m_{N}\phi_{N})}\int_{0}^{T}e^{i(\boldsymbol{m}\cdot\boldsymbol{1})s}\,\mathrm{d}s\\ &=\sum_{\boldsymbol{m}\cdot\boldsymbol{1}=0}c_{1,k,\boldsymbol{m}}e^{i(m_{2}\phi_{2}+\cdots+m_{N}\phi_{N})},\end{split}\]
where the last line uses the orthogonality of the Fourier basis to evaluate the integral. Thus, we may easily evaluate (33) for the \(N-1\) phase differences by evaluating any one coordinate at zero.
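Once the surviving coefficients are in hand, the averaged interaction is a short sum over the \(N-1\) phase differences. A minimal sketch follows; the coefficient values are hypothetical placeholders for coefficients computed elsewhere:

```python
import numpy as np

# Evaluate sum_{m . 1 = 0} c_m exp(i(m_2 phi_2 + ... + m_N phi_N)).
coeffs = {(1, -1, 0): 0.5, (-1, 1, 0): 0.5,   # conjugate pairs keep H real
          (2, -1, -1): 0.1j, (-2, 1, 1): -0.1j}

def H(phi):  # phi = (phi_2, ..., phi_N)
    return sum(c * np.exp(1j * np.dot(m[1:], phi)) for m, c in coeffs.items()).real

print(H([0.4, 1.2]))  # cos(0.4) + 0.2*sin(1.6) for these placeholder coefficients
```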
While (33) is computationally efficient, we once again find that evaluating the underlying \(N\)-dimensional function to compute the Fourier coefficients of (33) requires significant amounts of memory. For example, evaluating an \(N\)-dimensional function on a mesh of size \(K\) again requires \(K^{N}\) points prior to taking the Fourier coefficients. So we seek an additional reduction in memory.
**Using separability to reduce memory requirements.** Fortunately, our interaction functions are mostly separable, colloquially speaking. That is, suppose we are given oscillator \(i\), order \(k\), and oscillators \((\theta_{j_{1}},\ldots,\theta_{j_{k}})\). Together, they give us the function in the summation of (26), which we recall to be \(h^{(k)}_{i,j_{1},\ldots,j_{k}}(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k}})\). This function contains a sum of several functions that depend on the variables \((\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k}})\). For each term inside \(h^{(k)}_{i,j_{1},\ldots,j_{k}}(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k}})\), at best we can separate the term into \(k\) functions of one variable. At worst, we can separate the term into \(k-1\) functions of two variables. Note that we generally expect such inseparable functions to exist because \(\hat{h}\) depends directly on \(g^{(k)}_{i,j_{\ell},\alpha,\beta,\gamma,\delta}\), which is a function of two variables that cannot be separated.
Suppose we pick a term \(\zeta(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k}})\) which is only separable into functions of two variables:
\[\zeta(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{k}})=\zeta_{0}(\theta_{i}, \theta_{j_{1}})\zeta_{1}(\theta_{j_{1}},\theta_{j_{2}})\cdots\zeta_{k-1}( \theta_{j_{k-2}},\theta_{j_{k-1}})\zeta_{k}(\theta_{j_{k-1}},\theta_{j_{k}}).\]
Then (replacing the index \(i\) with \(j_{0}\) so that we can let \(i\) stand for the imaginary unit)
\[\zeta(\theta_{j_{0}},\theta_{j_{1}},\ldots,\theta_{j_{k}}) =\sum_{(x_{0},x_{1})\in\mathbb{Z}^{2}}c^{(0,1)}_{x_{0}x_{1}}e^{i[x_{0}\theta_{j_{0}}+x_{1}\theta_{j_{1}}]}\cdots\sum_{(x_{k-1},x_{k})\in\mathbb{Z}^{2}}c^{(k-1,k)}_{x_{k-1}x_{k}}e^{i[x_{k-1}\theta_{j_{k-1}}+x_{k}\theta_{j_{k}}]}\] \[=\sum_{\boldsymbol{m}\in\mathbb{Z}^{k+1}}c^{(0,1)}_{x_{0}x_{1}}\cdots c^{(k-1,k)}_{x_{k-1}x_{k}}e^{i[x_{0}\theta_{j_{0}}+\cdots+x_{k}\theta_{j_{k}}]}\] \[=\sum_{\boldsymbol{m}\in\mathbb{Z}^{k+1}}c_{\boldsymbol{m}}e^{i\boldsymbol{m}\cdot\boldsymbol{\Theta}},\]
where \(c_{\boldsymbol{m}}=c^{(0,1)}_{x_{0}x_{1}}c^{(1,2)}_{x_{1}x_{2}}\cdots c^{(k-1,k)}_{x_{k-1}x_{k}}\), \(\boldsymbol{m}=(x_{0},x_{1},\ldots,x_{k})\in\mathbb{Z}^{k+1}\) and \(\boldsymbol{\Theta}=(\theta_{j_{0}},\theta_{j_{1}},\ldots,\theta_{j_{k}})\). Now \(c_{\boldsymbol{m}}\) is significantly more memory-efficient to compute because it is a product of coefficients that can be rapidly obtained using only a 2-dimensional fast Fourier transform. We also typically expect to truncate the Fourier coefficients to at most a few dozen terms, e.g., choose \(\boldsymbol{m}\) such that \(\|\boldsymbol{m}\|_{1}\leq z\) for \(z\) not too large. The choice of \(z\) depends on the model in question, but as a rule of thumb we start with \(z\approx 5\). The CGL model requires \(z=4\) up to order \(k=3\) and the thalamic model requires \(z=15\) up to order \(k=3\).
According to (33) the time-average integral only requires coefficients such that \(\mathbf{m}\cdot\mathbf{1}=0\). Thus, between truncating each Fourier series and selecting the subset of coefficients that survive the integral, the above sum simplifies to
\[\int_{0}^{T}\zeta(\theta_{j_{0}}+t,\theta_{j_{1}}+t,\ldots,\theta_{j_{k}}+t)\,\mathrm{d}t\approx\sum_{\begin{subarray}{c}\boldsymbol{m}\cdot\boldsymbol{1}=0\\ \|\boldsymbol{m}\|_{1}\leq z\end{subarray}}c_{\boldsymbol{m}}e^{i\boldsymbol{m}\cdot\boldsymbol{\Theta}}.\]
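A small sketch of generating this truncated index set (the sizes are illustrative, not taken from the released code):

```python
from itertools import product

# Index vectors m with m . 1 = 0 and ||m||_1 <= z that survive averaging.
z, N = 5, 3
index_set = [m for m in product(range(-z, z + 1), repeat=N)
             if sum(m) == 0 and sum(map(abs, m)) <= z]
print(len(index_set))  # 19 surviving vectors for z = 5, N = 3
```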
### Option 2: Using an ODE Solver as an Adaptive Mesh
In cases where the response functions (\(\mathcal{Z}\) and \(\mathcal{I}\)) and Floquet eigenfunctions have derivatives that greatly exceed their magnitudes, it is not possible to use only a small number of Fourier coefficients. Indeed, the thalamic model has a relatively large Nyquist frequency at order \(O(\varepsilon^{4})\). For a uniform mesh, the number of points \(K\) in the time-average integral grid exceeds \(1\times 10^{7}\), which is difficult to compute efficiently. Parallelizing this problem is further hindered by memory constraints.
Because the sharp peaks in the response functions and eigenfunctions tend to be in small regions of phase space, an adaptive mesh greatly reduces the number of integration points \(K\). The most straightforward method is to use an ODE solver by rephrasing the time-average integration as an initial value problem. Given \(k\) and \(\theta_{1},\ldots,\theta_{N}\), rewrite
\[\mathcal{H}=\frac{1}{T}\int_{0}^{T}\bar{h}_{j}^{(k)}(\theta_{1}+t,\ldots, \theta_{N}+t)\,\mathrm{d}t\]
as
\[T\frac{d}{dt}\mathcal{H}(t)=\bar{h}_{j}^{(k)}(\theta_{1}+t,\ldots,\theta_{N}+t ),\quad\mathcal{H}(0)=0.\]
Then the desired time-average is given by \(\mathcal{H}(T)\). If needed, the mesh is provided by the ODE solver. We use the Python [63] implementation of LSODA.
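A minimal sketch of this reformulation (toy integrand, normalized period; the released code's exact interface may differ), using SciPy's LSODA integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # normalized period

def hbar(a):   # toy stand-in for \bar h_j^{(k)}; its average is 0.5*cos(a1 - a2)
    return np.cos(a[0]) * np.cos(a[1])

def average(theta):
    # T * dH/dt = hbar(theta + t), H(0) = 0, so the desired average is H(T).
    sol = solve_ivp(lambda t, H: [hbar(theta + t) / T], (0.0, T), [0.0],
                    method="LSODA", rtol=1e-8)
    return sol.y[0, -1]

print(average(np.array([0.3, 1.1])))  # ~0.5*cos(0.8) ~ 0.348
```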
### Option 3: Brute Force and Parallelization
If all else fails, it is possible to compute the \(\mathcal{H}\) functions by brute force, especially if the network has \(N\leq 3\) oscillators with only a few lower-order terms and a coarse mesh. Our Python implementation will attempt to use CPU parallelization and, if available, CUDA parallelization. Because the memory requirements grow exponentially as a function of mesh size, mesh sizes are typically restricted to 500 points for roughly 16GB of RAM. Running the code on a cluster with more CPUs is recommended (instructions are included in the repository with sample scripts).
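A sketch of the direct quadrature on a coarse mesh (toy integrand), accumulating over time samples so that only one \(K^{N}\) array is ever held in memory:

```python
import numpy as np

N, K, M = 2, 200, 400                 # oscillators, mesh per axis, time samples
th = 2 * np.pi * np.arange(K) / K
g0, g1 = np.meshgrid(th, th, indexing="ij")

H = np.zeros((K, K))
for tk in 2 * np.pi * np.arange(M) / M:
    H += np.cos(g0 + tk) * np.cos(g1 + tk)   # toy \bar h evaluated at theta + t
H /= M                                        # H[i, j] ~ 0.5 * cos(th[i] - th[j])
```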
## Appendix B Thalamic Model
The remaining equations for the Thalamic model are
\[I_{\mathrm{L}}(V)=g_{\mathrm{L}}(V-E_{L}), I_{\mathrm{Na}}=g_{\mathrm{Na}}hm_{\infty}^{3}(V)(V-E_{\mathrm{Na}}),\] \[I_{\mathrm{K}}=0.75g_{\mathrm{K}}(1-h)^{4}(V-E_{\mathrm{K}}), I_{\mathrm{T}}=g_{\mathrm{T}}rp_{\infty}^{2}(V)(V-E_{\mathrm{T}}),\]
and
\[a_{h}(V)=0.128\exp(-(V+46)/18), b_{h}(V)=4/(1+\exp(-(V+23)/5)),\] \[m_{\infty}(V)=1/(1+\exp(-(V+37)/7)), h_{\infty}(V)=1/(1+\exp((V+41)/4)),\] \[r_{\infty}(V)=1/(1+\exp((V+84)/4)), p_{\infty}(V)=1/(1+\exp(-(V+60)/6.2)),\] \[\tau_{h}(V)=1/(a_{h}(V)+b_{h}(V)), \tau_{r}(V)=28+\exp(-(V+25)/10.5).\]
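For readers reproducing the right-hand sides, the gating functions above transcribe directly into vectorized Python (a convenience sketch; \(V\) in mV):

```python
import numpy as np

# Direct transcription of the thalamic gating functions listed above.
a_h   = lambda V: 0.128 * np.exp(-(V + 46) / 18)
b_h   = lambda V: 4 / (1 + np.exp(-(V + 23) / 5))
m_inf = lambda V: 1 / (1 + np.exp(-(V + 37) / 7))
h_inf = lambda V: 1 / (1 + np.exp((V + 41) / 4))
r_inf = lambda V: 1 / (1 + np.exp((V + 84) / 4))
p_inf = lambda V: 1 / (1 + np.exp(-(V + 60) / 6.2))
tau_h = lambda V: 1 / (a_h(V) + b_h(V))
tau_r = lambda V: 28 + np.exp(-(V + 25) / 10.5)
```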
|
2305.08932 | MIMEx: Intrinsic Rewards from Masked Input Modeling | Exploring in environments with high-dimensional observations is hard. One
promising approach for exploration is to use intrinsic rewards, which often
boils down to estimating "novelty" of states, transitions, or trajectories with
deep networks. Prior works have shown that conditional prediction objectives
such as masked autoencoding can be seen as stochastic estimation of
pseudo-likelihood. We show how this perspective naturally leads to a unified
view on existing intrinsic reward approaches: they are special cases of
conditional prediction, where the estimation of novelty can be seen as
pseudo-likelihood estimation with different mask distributions. From this view,
we propose a general framework for deriving intrinsic rewards -- Masked Input
Modeling for Exploration (MIMEx) -- where the mask distribution can be flexibly
tuned to control the difficulty of the underlying conditional prediction task.
We demonstrate that MIMEx can achieve superior results when compared against
competitive baselines on a suite of challenging sparse-reward visuomotor tasks. | Toru Lin, Allan Jabri | 2023-05-15T18:08:28Z | [http://arxiv.org/abs/2305.08932v1](http://arxiv.org/abs/2305.08932v1) | # MIMEx: Intrinsic Rewards from Masked Input Modeling
###### Abstract
Exploring in environments with high-dimensional observations is hard. One promising approach for exploration is to use intrinsic rewards, which often boils down to estimating "novelty" of states, transitions, or trajectories with deep networks. Prior works have shown that conditional prediction objectives such as masked autoencoding can be seen as stochastic estimation of pseudo-likelihood. We show how this perspective naturally leads to a unified view on existing intrinsic reward approaches: they are special cases of conditional prediction, where the estimation of novelty can be seen as pseudo-likelihood estimation with different mask distributions. From this view, we propose a general framework for deriving intrinsic rewards - **M**asked **I**nput **M**odeling for **E**xploration (MIMEx) - where the mask distribution can be flexibly tuned to control the difficulty of the underlying conditional prediction task. We demonstrate that MIMEx can achieve superior results when compared against competitive baselines on a suite of challenging sparse-reward visuomotor tasks.
+
Footnote †: Code available at [https://github.com/ToruOwO/mimex](https://github.com/ToruOwO/mimex).
## 1 Introduction
In supervised learning (SL), it is often assumed that data is given, i.e. the data collection process has been handled by humans. In reinforcement learning (RL), however, agents need to collect their own data and update their policy based on the collected data. The problem of _how_ data is collected is known as the exploration problem. Agents face the _exploration-exploitation trade-off_, balancing between maximizing their cumulative rewards (exploitation) and improving their knowledge of the environment by collecting new data (exploration). Exploration is therefore of central importance in the study of RL; it is the only way through which an agent can learn decision-making from scratch.
Simple exploration heuristics like using an \(\epsilon\)-greedy policy [20; 21] and adding Gaussian action noise [19; 27; 28] are often too inefficient to solve challenging tasks with sparse reward. Meanwhile, designing dense reward functions does not scale to many tasks across environments, due to the amount of domain knowledge and engineering effort needed. Training agents with intrinsic rewards, i.e. providing exploration bonuses that agents jointly optimize with extrinsic rewards, is a promising way to inject bias into exploration. This exploration bonus is often derived from a notion of "novelty", such that the agent receives a higher bonus in more "novel" states. In classical count-based methods [32], for example, the exploration bonus is derived from state visitation counts directly. To scale to high-dimensional states, recent works often derive the exploration bonus from either (1) _pseudo-counts_ [4; 22; 36] (i.e. approximate state visitation counts) or (2) prediction error from a proxy model. In the latter case, the proxy model is responsible for representing the relative novelty of states, for example with respect to learned forward models [23; 31] or randomly initialized networks [8].
While count-based (including pseudo-counts) methods and prediction-error-based methods are typically viewed as distinct approaches for intrinsic reward [1; 2; 18], these methods often boil down to similar formulations in practice. That is, they rely on the same underlying idea of estimating novelty by approximating likelihood, while differing in the quantities they model: states, transitions,
trajectories, etc. Recent works have demonstrated how conditional prediction objectives such as masked language modeling are connected to pseudo-likelihood [26; 40], one way of approximating likelihood. Under this view, approaches that estimate novelty can be viewed as modeling different conditional prediction problems, or masked prediction problems with different mask distributions.
From this perspective, we propose **M**asked **I**nput **M**odeling for **E**xploration (MIMEx), a unifying framework for intrinsic reward methods. On a high level, MIMEx allows for flexible trajectory-level exploration via generalized conditional prediction. To evaluate on hard-exploration problems with high-dimensional observations and dynamics, we develop a benchmark suite of eight challenging robotic manipulation tasks that involve realistic visuomotor control with sparse rewards. We show that MIMEx achieves superior results when compared with common baselines, and investigate why it works better than other approaches through extensive ablation studies.
## 2 Preliminaries
### From RL Exploration to Intrinsic Reward
Among approaches for RL exploration, one of the most successful is to jointly optimize an _intrinsic reward_ in addition to the environment reward (a.k.a. the extrinsic reward). At each time step, the agent is trained with reward \(r_{t}=r_{t}^{e}+\beta\cdot r_{t}^{i}\), where \(r_{t}^{e}\) is the environment reward, \(r_{t}^{i}\) is the intrinsic reward, and \(\beta\) is a parameter to control the scale of the exploration bonus. The intrinsic reward approach is flexible and general, as it simply requires modifying the reward function. This also means that the intrinsic reward function can be learned and computed in a manner compartmentalized from the policy, allowing one to leverage models and objectives that scale differently from policy optimization.
The key principle underlying intrinsic reward approaches is _optimism in the face of uncertainty_[7], which has been empirically observed to be effective for encouraging active exploration [34]. In other words, the intrinsic reward is supposed to encourage agents to visit states that are less frequently visited, or pursue actions that lead to maximum reduction in uncertainty about the dynamics; note that the latter is equivalent to visiting _transitions_ that are less frequently visited.
### Intrinsic Reward and Conditional Prediction
Works in the intrinsic reward literature tend to draw on different concepts such as novelty, surprise, or curiosity, but their practical formulation often follows the same principle: to encourage the agent to visit novel states or state transitions, it is desirable for \(r_{t}^{i}\) to be higher in less frequently visited states or state transitions than in frequently visited ones. Deriving an intrinsic reward therefore requires knowledge of the state density or the state-transition density.
Recent intrinsic reward methods can be categorized into either pseudo-counts-based or prediction-error-based [2]. In pseudo-counts-based methods [4; 8; 22], state density is modeled directly; in prediction-error-based methods [16; 23], a forward model is learned and prediction error is used as a proxy of state transition density. Under this view, we can unify existing intrinsic reward methods into one framework, where only the underlying conditional prediction problem differs.
### Pseudo-Likelihood from Masked Prediction Error
Prior works [26; 40] have shown how the masked autoencoding objective in models like BERT [9] can be viewed as stochastic maximum pseudo-likelihood estimation on a training set of sequences of random variables \(X=(x_{1},...,x_{T})\). Under this objective, the model approximates the underlying joint distribution among variables, by modeling conditional distributions \(x_{t}|X_{\setminus t}\) resulting from masking at position \(t\). Despite being an approximation, the pseudo-likelihood has been shown to be a useful proxy, e.g. as a scoring function for sampling from language model [26].
More concretely, [40]* define pseudo log-likelihood as:
Footnote *: While [40] was later retracted due to an error, our argument does not depend on the part with error. We provide additional clarification on the validity of our claim in Appendix A.1.
\[\text{PLL}(\theta;D)=\frac{1}{|D|}\sum_{X\in D}\sum_{t=1}^{|X|}\log p(x_{t}|X_{\setminus t}). \tag{1}\]
where \(D\) is a training set. To more efficiently optimize for the conditional probability of each variable in a sequence given all other variables, they propose to stochastically estimate under a masking
distribution \(\tilde{t}_{k}\sim\mathcal{U}(\{1,\ldots,|X|\})\):
\[\frac{1}{|X|}\sum_{t=1}^{|X|}\log p(x_{t}|X_{\backslash t})=\mathbb{E}_{t\sim\mathcal{U}}\left[\log p(x_{t}|X_{\backslash t})\right]\approx\frac{1}{K}\sum_{k=1}^{K}\log p(x_{\tilde{t}_{k}}|X_{\backslash\tilde{t}_{k}}).\]
While this is defined for categorical variables, for continuous variables we can simply interpret the reconstruction error of masked prediction as a negative log-likelihood under a unit Normal distribution over variables such as observations, features, and trajectories. From this perspective, we can view different cases of conditional prediction as different forms of maximum pseudo-likelihood estimation and implement them as masked autoencoders.
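A minimal sketch of the stochastic estimate above (not from the paper's code); here `log_prob(X, t)` is a hypothetical callable returning \(\log p(x_{t}|X_{\setminus t})\) under some model:

```python
import random

# Stochastic pseudo-log-likelihood: sample K masking positions uniformly
# and average the conditional log-probabilities of the masked variables.
def pll_estimate(X, log_prob, K=8):
    positions = [random.randrange(len(X)) for _ in range(K)]
    return sum(log_prob(X, t) for t in positions) / K
```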
## 3 Masked Input Modeling for Exploration
The observations outlined in Section 2, in particular the link between masked prediction and pseudo-likelihood, allow us to draw connections between different classes of intrinsic reward methods. Prediction-error-based methods consider the problem of conditionally predicting \(s_{t+1}\) from \((s_{t},a_{t})\) by masking \(s_{t+1}\) from \((s_{t+1},s_{t},a_{t})\) tuples, which can be viewed as approximating the pseudo-likelihood of the next state under a Gaussian prior. Pseudo-counts-based methods approximate the likelihood of \(s_{t}\), which can be achieved by marginalizing the prediction error across maskings of \(s_{t}\). As summarized in Table 1, we can view these approaches as estimating pseudo-likelihood via masked prediction with different mask distributions.
Inspired by this perspective, we propose **M**asked **I**nput **M**odeling for **E**xploration (MIMEx), a unifying framework for intrinsic reward methods. Simply put, MIMEx derives intrinsic reward based on masked prediction on input sequences with arbitrary length. Such a framework naturally lends itself to greater control over the difficulty of the underlying conditional prediction problem; by controlling how much information we mask, we can flexibly control the signal-to-noise ratio of the prediction problem and thus control the variance of the prediction error. Moreover, it provides the setup to fill a space in literature: trajectory-level exploration. Existing approaches framed as conditional prediction often consider one-step future prediction problems, which can saturate early as a useful exploratory signal [8; 14; 23]. While longer time-horizon prediction problems capture more complex behavior, they can suffer from high variance [25]. MIMEx allows for tempering the difficulty of prediction by varying input sequence length; by setting up conditional prediction problems on trajectories, we can obtain intrinsic rewards that consider transition dynamics across longer time horizons and extract richer exploration signals. We can also easily tune the difficulty of the prediction problem, by varying both the input length and the amount of conditioning context given a fixed input length.
The fact that MIMEx sets up a masked prediction learning problem comes with several additional benefits. First, masked autoencoding relies on less domain knowledge compared to methods like contrastive learning, and has proven success across many different input modalities [9; 10; 15; 29; 40]. Moreover, we can leverage standard architectures such as those used in masked language modeling [9] and masked image modeling [15], for which the scalability and stability have been tested. MIMEx can be added to any standard RL algorithm as a lightweight module that encourages exploration.
An overview of our framework is shown in Figure 1. Below, we describe more details of MIMEx.
### Sequence Autoencoders for Exploration
Given a sequence of trajectory information stored in the form of tuples \((o_{k},a_{k},r_{k}^{e})\) for \(k=1,...,t\), we now describe how to derive intrinsic reward \(r_{t}^{i}\) for time step \(t\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & Masked quantity & Prediction objective \\ \hline Pseudo-counts [4] & subset of state \(s_{T}\) & \(||s_{T}-f(\text{mask}(s_{T}))||^{2}\) with mask on state dims \\ \hline RND [8] & current-step feature \(x_{T}\) & \(||x_{T}-f(\text{mask}(x_{T}))||^{2}\) with \(x_{T}=\phi(s_{T})\) under a random network \(\phi\) \\ \hline ICM [23] & next-step feature \(x_{T+1}\) & \(||[x_{T+1},x_{T},a_{T}]-f([\text{mask}(x_{T+1}),x_{T},a_{T}])||^{2}\) with \(x_{T}=\phi(s_{T})\) under an inverse dynamics encoder \(\phi\) \\ \hline \end{tabular}
\end{table}
Table 1: Examples of existing intrinsic reward methods viewed as masked prediction problems with different mask distributions.
**Input sequence generation.** We define a parameter \(T\) that represents the length of the sequence we use to derive the exploration bonus. From the selected time step \(t\), we extract the sequence of inputs up to \(T\) steps before \(t\), including time step \(t\). When \(t\geq T\), we take the sequence \((o_{t-T+1},...,o_{t-1},o_{t})\); when \(t<T\), we pad observations at time steps before the initial step with zeros such that the input sequence length is always \(T\). We use a Transformer-based encoder to convert each input observation \(o_{k}\) to a latent embedding \(z_{k}\), obtaining an input sequence \((z_{t-T+1},...,z_{t})\).
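A small sketch of this padding logic (0-indexed, applied at the embedding level for simplicity; shapes are assumptions):

```python
import torch

# Extract the last T embeddings up to step t, zero-padding on the left
# for the early steps where fewer than T observations exist.
def make_sequence(z, t, T):
    """z: (L, D) trajectory embeddings; returns a (T, D) sequence."""
    seq = z[max(0, t - T + 1): t + 1]
    pad = torch.zeros(T - seq.shape[0], z.shape[1], dtype=z.dtype)
    return torch.cat([pad, seq], dim=0)
```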
**Online masked prediction.** Given the generated input sequences, we train a masked autoencoder model online, using the sequences as prediction targets. Specifically, we treat the observation embedding \(z_{k}\) at each time step \(k\) as a discrete token, randomly mask out \(X\%\) of the tokens in each sequence using a learned mask token, and pass the masked sequence into a Transformer-based decoder to predict the reconstructed sequence. The random mask is generated with a uniform distribution. The more "novel" an input sequence is, the higher its masked prediction error is expected to be. MIMEx is compatible with a wide range of high-dimensional inputs across different modalities, thanks to the scalability of its architecture and the stability of its domain-agnostic objective.
### Masked Prediction Loss as Intrinsic Reward
We use the scalar prediction loss generated during the online masked prediction as an intrinsic reward, i.e. \(r_{t}^{i}=L_{t}\). The intrinsic reward is used in the same way as described in Section 2, i.e. \(r_{t}=r_{t}^{e}+\beta\cdot r_{t}^{i}\) where \(\beta\) is a parameter to control the scale of exploration bonus. Intuitively, this encourages the agent to explore trajectory sequences that have been less frequently encountered.
### Backbone RL Algorithm
Our exploration framework is agnostic to the backbone RL algorithm since it only requires taking a trajectory sequence as input. In Section 5, we empirically demonstrate how MIMEx can work with either on-policy or off-policy RL algorithms, using PPO [28] and DDPG [19; 30] as examples.
## 4 PixMC-Sparse Benchmark
### Motivation
Exploration can be challenging due to sparsity of rewards, or high-dimensionality of observation and action spaces paired with complex dynamics. Existing benchmarks for exploration algorithms span a spectrum between two ways of controlling the sparsity of reward: (1) long task horizon; (2) high-dimensional observations (e.g. pixels). Either side of the spectrum can contribute to increased sample complexity, along the time dimension or the space dimension. While existing works on deep RL exploration often evaluate on popular benchmarks such as the Arcade Learning Environment [6] and DeepMind Control Suite [37], these benchmarks often involve densely engineered rewards or deep exploration in environments with simple dynamics (e.g. Montezuma's Revenge) [3; 11; 35].
Figure 1: **Overview. The overall schematic of MIMEx, in the case of exploration sequence length \(T=3\). (_Left_) Given a trajectory of high-dimensional observations \(\{o_{k}\}\), the base RL encoder first encodes them into latent embeddings \(\{z_{k}\}\); MIMEx then takes in \(\{z_{k}\}\) and derives the corresponding intrinsic rewards \(\{r_{k}^{i}\}\) for \(k=t-2,t-1,...,t+2\). (_Right_) For each timestep \(k\), MIMEx performs online masked prediction on the sequence of embeddings up to \(T\) steps before \(k\), padding missing timesteps with zeros. The masked prediction loss \(L_{k}\) is used as the raw intrinsic reward, i.e. \(L_{k}=r_{k}^{i}\).**
Identifying a missing space of exploration benchmarks that involve higher-dimensional observation inputs and more complex dynamics, we construct a challenging task suite named PixMC-Sparse (Pixel Motor Control with Sparse Reward). The suite includes eight continuous control tasks, with realistic dynamics and egocentric camera views of a robot as observation. PixMC-Sparse is built on PixMC [41] as an extension to the original suite.
### From PixMC to PixMC-Sparse
The original PixMC task suite does not necessarily pose challenging exploration problems due to densely shaped rewards (see Appendix A.2 for details of the original PixMC reward functions). Therefore, we modified the tasks to reflect more realistic hard-exploration challenges. To summarize, we sparsify the reward signals by removing certain continuous distance-based rewards from each original task. We show details of how each task's reward function is modified in Table 2, and include additional annotated visualization in Appendix A.3.
### Evaluation Protocol
The difficulty level of each hard-exploration task presented in PixMC-Sparse can be tuned by (1) increasing the environment's time or space resolution, or (2) minimizing reward shaping. Either approach reduces the extent to which the benchmark is saturated, and the latter is itself a practical and realistic motivation for studying exploration in RL. In Table 3, we show a sample curriculum to minimize reward shaping for _Cabinet_, _Pick_, and _Move_ tasks, starting from the fully dense reward functions (included in Appendix A.2).
In Figure 2, we present a case study of evaluating exploration algorithms on increasingly sparse _KukaPick_ tasks.
## 5 Experiments
In this section, we show that MIMEx achieves superior results and study why it works better than other approaches. We start by evaluating MIMEx against three baseline methods, on eight tasks from PixMC-Sparse (Section 4) and six tasks from DeepMind Control suite [37]. Then, we present ablation studies of MIMEx on PixMC-Sparse to understand specific factors that contribute
\begin{table}
\begin{tabular}{|c|c|} \hline Task & Removed Dense Reward Terms \\ \hline _FrankaReach_ & distance to goal \\ \hline _KukaReach_ & distance to goal \\ \hline _FrankaCabinet_ & distance to handle \\ \hline _KukaCabinet_ & finger distance to handle, thumb distance to handle \\ \hline _FrankaPick_ & distance to object \\ \hline _KukaPick_ & finger distance to object, thumb distance to object \\ \hline _FrankaMove_ & distance to object \\ \hline _KukaMove_ & finger distance to object, thumb distance to object \\ \hline \end{tabular}
\end{table}
Table 2: We define PixMC-Sparse tasks by removing reward terms from the reward functions of PixMC tasks. In the table, we show details of how each task reward function is modified.
\begin{table}
\begin{tabular}{|c|c|} \hline Task & Example Steps to Sparsify Rewards \\ \hline _FrankaCabinet_ & 1. Remove “distance to handle” term; 2. Remove “distance to goal” term \\ \hline _KukaCabinet_ & 1. Remove “distance to handle” terms; 2. Remove “distance to goal” term \\ \hline _FrankaPick_ & 1. Remove “distance to object” term; 2. Remove “distance to goal” term \\ \hline _KukaPick_ & 1. Remove “distance to object” terms; 2. Remove “distance to goal” term \\ \hline _FrankaMove_ & 1. Remove “distance to object” term; 2. Remove “distance to goal” term \\ \hline _KukaMove_ & 1. Remove “distance to object” terms; 2. Remove “distance to goal” term \\ \hline \end{tabular}
\end{table}
Table 3: A sample curriculum for gradually increasing exploration difficulty on the _Cabinet_, _Pick_, and _Move_ tasks in PixMC-Sparse.
Figure 2: **Evaluating exploration algorithms with increasingly sparse reward.** We show an example on PixMC-Sparse’s _KukaPick_ task. When using fully dense reward, the base RL agent with random action noise exploration can already achieve success rates of over 80% consistently (_dense_). With one dense reward term removed, the base RL agent completely fails to solve the task (_sparse_). With the addition of our exploration algorithm, MIMEx, some success is observed and the success rates again start to provide useful information for evaluation (_sparse+MIMEx_). With more dense reward terms removed, the performance of the MIMEx agent is further away from saturation (_sparse+MIMEx_). We can continue to develop and evaluate more powerful exploration algorithms by using increasingly sparse reward.
to MIMEx's performance and obtain insights for its general usage. We report all results with 95% confidence intervals and over 7 random seeds.
### Implementation Details
We describe details of the MIMEx implementation, independent of the base RL algorithm. On a high level, the implementation of the MIMEx model is similar to that of MAE [15]. The model consists of an **encode & mask** module and a **decode** module.
Given an input sequence \(x\) with shape \((B,L,D)\) where \(B\) is the batch size, \(L\) is the sequence length, and \(D\) is the feature dimension, the **encode & mask** module first projects the last dimension of \(x\) to the encoder embedding size through a fully-connected layer; the corresponding positional embeddings are then added to the time dimension (length \(L\)) of \(x\). We denote this intermediate latent vector as \(z\). We apply random masking with uniform distribution to the latent vector \(z\), following the same protocol used in MAE [15]. The masked latent vector \(z_{m}\) is then passed through a Transformer-based encoder.
The **decode** module first inserts mask tokens at all masked positions of \(z_{m}\), then passes \(z_{m}\) to a Transformer-based decoder. Finally, another fully-connected layer projects the last dimension of \(z_{m}\) back to the input feature dimension \(D\). We calculate the reconstruction loss on only the masked positions, and use this loss as the intrinsic reward.
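A condensed PyTorch sketch of how the two modules fit together is given below. The layer sizes follow the hyperparameters listed in the next paragraph; for brevity, masked positions are replaced with the mask token before the encoder rather than dropped and re-inserted MAE-style, so this is our own simplification of the released code rather than a faithful copy of it:

```python
import torch
import torch.nn as nn

class MIMEx(nn.Module):
    def __init__(self, in_dim, T, enc_dim=128, dec_dim=64, mask_ratio=0.7):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.proj_in = nn.Linear(in_dim, enc_dim)
        self.pos = nn.Parameter(torch.zeros(1, T, enc_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, enc_dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(enc_dim, nhead=4, batch_first=True), 4)
        self.to_dec = nn.Linear(enc_dim, dec_dim)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=2, batch_first=True), 1)
        self.proj_out = nn.Linear(dec_dim, in_dim)

    def forward(self, x):                        # x: (B, L, D) embedding sequences
        z = self.proj_in(x) + self.pos           # project, add positional embeddings
        mask = torch.rand(z.shape[:2], device=z.device) < self.mask_ratio
        z = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(z), z)
        out = self.proj_out(self.decoder(self.to_dec(self.encoder(z))))
        err = ((out - x) ** 2).mean(-1)                          # (B, L) errors
        return (err * mask).sum(1) / mask.sum(1).clamp(min=1)    # masked-only loss
```

Each forward call draws a fresh random mask; the returned per-sequence reconstruction loss serves both as the intrinsic reward and, via its mean, as the training objective for the module.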
For all environments, we use a batch size of 512, the Adam [17] optimizer, and a MIMEx learning rate of 0.0001. We use a mask ratio of 70% for all tasks; a \(\beta\) (exploration weight) of 0.05 for _Reach_ tasks and 0.05 for _Cabinet, Pick, Move_ tasks. For the encoder, we use an embedding dimension of 128, with 4 Transformer blocks and 4 heads; for the decoder, we use an embedding dimension of 64, with 1 Transformer block and 2 heads. We run each experiment on an NVIDIA A100 GPU.
### Comparison with Baselines
We compare MIMEx to three RL exploration baselines: random action noise (_noise_), the intrinsic curiosity module [23] (_icm_), and random network distillation [8] (_rnd_). We show that MIMEx can be flexibly incorporated into both on-policy and off-policy algorithms: for tasks in PixMC-Sparse, we implement MVP [41] with PPO [28] (an on-policy algorithm) as the core RL algorithm; for tasks in the DeepMind Control suite (DMC), we implement DrQv2 [42] with DDPG [30] (an off-policy algorithm) as the core RL algorithm. Note that MVP and DrQv2 are the respective state-of-the-art algorithms on these environments. Results are shown in Figure 3.
For PixMC-Sparse, we find that MIMEx outperforms all baselines on all tasks, in terms of both final task success rate and sample efficiency. Importantly, the performance gap between MIMEx and baseline methods becomes more pronounced on the harder _Pick_ and _Move_ tasks, where exploration also becomes more challenging.
For DMC, we find that MIMEx achieves higher average performance than all baselines on 2 out of 6 tasks (_cartpole_swingup_sparse_, _quadruped_walk_), while having similar average performance to the _noise_ and _rnd_ baselines on the other 4 tasks (_acrobot_swingup_, _finger_turn_easy_, _quadruped_run_, _finger_turn_hard_). We also observe that MIMEx exhibits lower variance compared to similarly performant baselines on all tasks. As discussed in Section 4, we find it difficult to benchmark RL exploration methods on DMC even when using high-dimensional pixel inputs, and recommend using a harder exploration task suite such as PixMC-Sparse to better evaluate state-of-the-art exploration algorithms.
### How does MIMEx improve performance?
We have conducted extensive ablation studies to make relevant design choices and understand why MIMEx works better than other approaches. For brevity, here we only present factors that result in significant improvement in task performance. Due to computational constraints, we only show results on 2 tasks from PixMC-Sparse, an easier _KukaReach_ task and a harder _KukaPick_ task. We summarize our findings below.
**Trajectory-level exploration.** To our knowledge, MIMEx is the first framework that successfully incorporates sequence-level intrinsic reward to solve hard exploration tasks. Intuitively, doing so encourages the agent to search for new _trajectories_ rather than one-step transitions, which may lead to more complex exploration behavior. The empirical results from varying the exploration sequence length used in MIMEx, as shown in Figure 4, suggest that using sequences with length greater than 2 (i.e. transitions that are longer than one step) indeed improves exploration efficiency.

Figure 3: **Comparison with baselines.** We compare our method (_ours_) against 3 exploration baselines: random action noise (_noise_), intrinsic curiosity module (_icm_), and random network distillation (_rnd_). The top figure shows results on 8 tasks from the PixMC-Sparse suite, and the bottom figure shows results on 6 tasks from the DeepMind Control Suite.
**Variance reduction.** High variance is a classical curse of RL algorithms [33]. In deriving an intrinsic reward using MIMEx, a source of high variance is the random masking step during the mask-autoencode process. We hypothesize that reducing the variance of the conditional prediction can lead to a large performance gain, and empirically test the effect of doing so. In particular, we find that a simple variance reduction technique - applying the random mask multiple times and taking the average prediction error over the different maskings, as sketched below - can greatly improve agent performance. We show our results in Figure 4.
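A minimal sketch of this averaging (the _nm5_ variant of Figure 4), reusing the `MIMEx` module sketched in Section 5.1, whose forward pass draws an independent random mask on each call:

```python
# Average the masked-prediction bonus over several independent maskings.
def intrinsic_reward(mimex, x, n_masks=5):
    return sum(mimex(x) for _ in range(n_masks)) / n_masks
```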
**Model scalability.** Recent advances in generative models have showcased the importance of scale, which we hypothesize also applies to the case of trajectory modeling for deriving intrinsic reward. We show the empirical results that verify this hypothesis in Figure 4. Performance gains are achieved by simply increasing the size of MIMEx's base model (specifically, doubling the size of the Transformer-based decoder), especially in the harder _KukaPick_ environment.
### How does varying mask distribution affect performance of MIMEx?
As discussed in Section 3, one characteristic of MIMEx is that it enables extremely flexible control over the difficulty of conditional prediction for deriving intrinsic rewards. In theory, this flexibility allows MIMEx to encompass the full spectrum of pseudo-likelihood estimation via masked prediction. We hypothesize that this would enable MIMEx to be both general and powerful: the same implementation of MIMEx can achieve superior performance compared to baselines in a wide range of settings. We are therefore particularly interested in studying how varying mask distribution affects the performance of MIMEx. Below, we show ablation studies on two attributes of MIMEx that can be easily adjusted to change the mask distribution: mask ratio and mask dimension.
**Mask ratio.** One straightforward way to vary the mask distribution and tune the difficulty of conditional prediction is to simply adjust the mask ratio when applying the random mask. Intuitively, when the mask ratio is lower, the amount of information given to the model is larger, and the conditional prediction problem becomes easier. On the two PixMC-Sparse tasks we used, we find that results from using certain fixed mask ratios exhibit high variance across tasks, for example the results from using mask ratios 20% and 50% in Figure 5. We hypothesize that, because the difficulty of exploration varies from task to task, there exists a different "optimal" mask ratio parameter for each task. In theory, MIMEx can be tuned to achieve "optimal" performance by choosing a specific mask ratio for each task. In practice, we use a fixed mask ratio of 70% for our default implementation since we find it achieves a good trade-off between hyperparameter tuning and robustness of performance.
**Mask type.** Another degree of freedom of MIMEx lies in the flexibility of choosing which input dimensions to apply random masks to. Fully random masking on the last dimension of the input (i.e. the feature dimension) enables the most fine-grained control over the underlying conditional prediction problem. We hypothesize that this greater degree of freedom might allow MIMEx to achieve better performance on certain tasks, but might also require more substantial effort to find the optimal parameter. In Figure 5, we compare the performance of (1) masking only on the time dimension with a mask ratio of 70% and (2) masking on the feature dimension with mask ratios of 80%, 90%, and 95% on our chosen tasks. Empirically, we observe similar results between these two types of masking with the chosen mask ratios, so we opted to use the more computationally efficient time-only masking in our default implementation.

Figure 4: **(top) Varying exploration sequence length.** _l\(n\)_ refers to exploration with sequence length \(n\), for \(n=2,3,4,5,6\). **(middle) Varying the number of maskings for intrinsic reward calculation.** _nm1_ is masking once; _nm5_ is masking 5 times and taking the average of all prediction errors. **(bottom) Scaling the MIMEx decoder.** Compared with the _smaller_ model, the _larger_ model uses a decoder that doubles in depth, number of output heads, and embedding dimension.
## 6 Related Works
Undirected exploration [38] approaches add noise to individual actions (e.g. random action noise), action policy (e.g. \(\epsilon\)-greedy), or even network weights [12]. These methods are often surprisingly strong baselines for hard-exploration problems [35].
Classical count-based methods provide another effective way to perform exploration [32]. To bring count-based exploration to high-dimensional state spaces, [4] and [22] propose pseudo-counts as a useful surrogate of novelty, using CTS [5] and PixelCNN [39] as the density model, respectively. [36] is another example of count-based exploration, but uses hash codes instead of pseudo-counts for state visitations.
[8] takes a different approach by approximating novelty with prediction error on random features. The intuition is the same - prediction error should be higher for less visited states. The closest to our work are perhaps works that generate prediction errors by learning a forward dynamics model. For example, [31; 23] can be seen as special cases of our framework in that they only perform one-step prediction. [16; 24] use reduction of uncertainty under the learned model as intrinsic reward; the former uses a Bayesian neural network and the latter uses ensemble disagreement. In practice, these methods perform worse because it is usually hard to get a good estimate of the reduction in uncertainty with all the stochasticity inherent in modern neural networks and optimization tools.
## 7 Limitations and Future Work
Inspired by a unified perspective on intrinsic reward approaches, we present Masked Input Modeling for Exploration (MIMEx), a general framework for deriving intrinsic rewards. By using a masked autoencoding objective on variable-length input sequences, MIMEx enables flexible control over the difficulty of its underlying pseudo-likelihood estimation. By leveraging a scalable architecture and objective function, MIMEx can be easily scaled up and stabilized to solve challenging sparse-reward tasks, achieving superior performance compared to several competitive baselines.
One limitation of MIMEx is that it can potentially hurt performance on easier tasks that do not require exploration, as is the case for exploration bonus approaches in general [35]. Moreover, while MIMEx can improve sample efficiency on hard-exploration tasks like those in PixMC-Sparse, to tractably solve much harder problems, we might need stronger bias from other sources - for example, expert demonstrations. Improving exploration from either angle will be an important step for future work. Looking forward, we hope to study how MIMEx can be leveraged to transfer knowledge from pretrained representations to the exploration problem, particularly given its scalability and flexibility.
Figure 5: **(left) Varying mask ratio. Results from masking out 20%, 50%, and 70% during the random masking step of MIMEx. (right) Varying mask type and ratio. Results from (1) masking only on the time dimension with a mask ratio of 70% (70%) and (2) masking on the feature dimension with mask ratios of 80%, 90%, and 95% (80%(_a_), 90%(_a_), 95%(_a_)).**
## Acknowledgements
We thank Ilija Radosavovic, Fangchen Liu, Qiyang Li, Jitendra Malik, and Alexei Efros for helpful discussions and support. TL is funded by an NSF Graduate Research Fellowship.
|
2308.09408 | Complementation and Lebesgue type decompositions of linear operators and
relations | In this paper a new general approach is developed to construct and study
Lebesgue type decompositions of linear operators $T$ in the Hilbert space
setting. The new approach allows to introduce an essentially wider class of
Lebesgue type decompositions than what has been studied in the literature so
far. The key point is that it allows a nontrivial interaction between the
closable and the singular components of $T$. The motivation to study such
decompositions comes from the fact that they naturally occur in the
corresponding Lebesgue type decomposition for pairs of quadratic forms. The
approach built in this paper uses so-called complementation in Hilbert spaces,
a notion going back to de Branges and Rovnyak. | Seppo Hassi, Henk de Snoo | 2023-08-18T09:20:04Z | http://arxiv.org/abs/2308.09408v2 | # Complementation and Lebesgue type decompositions of linear operators
###### Abstract.
In this paper a new general approach is developed to construct and study Lebesgue type decompositions of linear operators \(T\) in the Hilbert space setting. The new approach allows to introduce an essentially wider class of Lebesgue type decompositions than what has been studied in the literature so far. The key point is that it allows a nontrivial interaction between the closable and the singular components of \(T\). The motivation to study such decompositions comes from the fact that they naturally occur in the corresponding Lebesgue type decomposition for pairs of quadratic forms. The approach built in this paper uses so-called complementation in Hilbert spaces, a notion going back to de Branges and Rovnyak.
Key words and phrases: Hilbert space, linear operator, operator range, complementation, Lebesgue type decomposition.
2020 Mathematics Subject Classification: Primary 46C07, 47A65; Secondary 47A05, 47A06, 47A07.
Part of this paper was completed during the Workshop "Spectral theory of differential operators in quantum theory" at the Erwin Schrodinger International Institute for Mathematics and Physics (ESI) (Wien, November 7-11, 2022). The ESI support is gratefully acknowledged.
form \(\mathfrak{t}\): \(\mathfrak{t}[h,k]=(Qh,Qk)\), \(h,k\in\operatorname{dom}\mathfrak{t}=\operatorname{dom}Q\); a detailed study of representing maps will appear in [16]. A key fact in the connection of quadratic forms is that the components \(\mathfrak{t}_{1}\) and \(\mathfrak{t}_{2}\) generate a nonnegative contraction \(K\) acting in the range space \(\mathfrak{K}\) of \(Q\), such that \(\mathfrak{t}_{1}[h,k]=((I-K)^{\frac{1}{2}}Qh,(I-K)^{\frac{1}{2}}Qk)\) and \(\mathfrak{t}_{2}[h,k]=(K^{\frac{1}{2}}Qh,K^{\frac{1}{2}}Qk)\), and then one can prove the following general formula
\[(\mathfrak{t}_{1}:\mathfrak{t}_{2})[h,k]=(((I-K):K)Qh,Qk),\quad h,k\in \operatorname{dom}\mathfrak{t}=\operatorname{dom}Q,\]
where ":" stands for the parallel sum of the involved components; see [16]. Recall from [11, Proposition 2.10] that the forms \(\mathfrak{t}_{1}\) and \(\mathfrak{t}_{2}\) are mutually singular precisely when \(\mathfrak{t}_{1}:\mathfrak{t}_{2}=0\), while \((I-K):K=0\) if and only if \(K\) is an orthogonal projection, which is equivalent to the intersection \(\operatorname{ran}\left(I-K\right)\cap\operatorname{ran}K\neq\{0\}\) being nontrivial; further details can be found in [16].
The new approach developed here to analyse this phenomenon on the side of linear operators, that is, Lebesgue type decompositions which allow such an interaction between the closable and singular components of \(T\), is built using the notion of complementation going back to de Branges and Rovnyak. This leads to several new results on Lebesgue type decompositions of unbounded operators and linear relations. In particular, the results generalize recent results obtained in the case of orthogonal operator range decompositions in [12, 15]. For instance, among the set of all such Lebesgue type decompositions there is still a unique decomposition whose regular part continues to be maximal in this new setting; it is called the Lebesgue decomposition of linear operators, cf. [15].
Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\), that is, \(T\) is a linear operator or relation from a Hilbert space \(\mathfrak{H}\) to a Hilbert space \(\mathfrak{K}\). Denote by \(T^{**}\) the closure of \(T\) (as a graph in the Cartesian product \(\mathfrak{H}\times\mathfrak{K}\)); moreover, \(\operatorname{mul}T^{**}\) stands for the linear space of all \(g\in\mathfrak{K}\) for which \(\{0,g\}\in T^{**}\). For \(T^{**}\) there are two extreme cases: the _closable_ case is defined by the equality \(\operatorname{mul}T^{**}=\{0\}\), that is, the closure \(T^{**}\) is an operator, and the _singular_ case is defined by the equality \(T^{**}=\operatorname{dom}T^{**}\times\operatorname{mul}T^{**}\), that is, \(T^{**}\) is the Cartesian product of closed linear subspaces of \(\mathfrak{H}\) and \(\mathfrak{K}\). In general, a linear relation is neither closable nor singular. However, every \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) has a sum decomposition \(T=T_{1}+T_{2}\) of the form
\[T=\big{\{}\{f,g\}\in\mathfrak{H}\times\mathfrak{K}:\,g=g_{1}+g_{2},\,\{f,g_{1 }\}\in T_{1},\,\{f,g_{2}\}\in T_{2}\big{\}},\]
where \(T_{1},T_{2}\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) with \(\operatorname{dom}T=\operatorname{dom}T_{1}=\operatorname{dom}T_{2}\), while \(T_{1}\) is closable and \(T_{2}\) is singular. Such a sum decomposition \(T=T_{1}+T_{2}\) is called an _orthogonal Lebesgue type decomposition_ of \(T\) if the Hilbert space \(\mathfrak{K}\) is the orthogonal sum of the closed linear subspaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) of \(\mathfrak{K}\), such that \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\). The usual Lebesgue decomposition of \(T\) is an example of an orthogonal Lebesgue type decomposition. In the case of an orthogonal Lebesgue type decomposition there is only trivial interaction between the summands \(T_{1}\) and \(T_{2}\) as their ranges are orthogonal. Orthogonal Lebesgue type decompositions have been studied in [12, 15], extending earlier work of Izumino [18, 19, 20].
In this paper more general, pseudo-orthogonal, decompositions \(T=T_{1}+T_{2}\) will be introduced. The decomposition of the space \(\mathfrak{K}\) will be based on a pair of complemented operator range spaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) with inner products \((\cdot,\cdot)_{\mathfrak{X}}\) and \((\cdot,\cdot)_{\mathfrak{Y}}\) which are contained contractively in \(\mathfrak{K}\); such spaces were introduced by de Branges and Rovnyak, see [4, 6, 7, 9]. It is assumed that \(\mathfrak{X}\) and \(\mathfrak{Y}\) are generated
by nonnegative contractions \(X,Y\in\mathbf{B}(\mathfrak{K})\) for which
\[\|h\|_{\mathfrak{K}}^{2}=\|Xh\|_{\mathfrak{X}}^{2}+\|Yh\|_{\mathfrak{Y}}^{2},\quad h\in\mathfrak{K},\]
and this is equivalent to the condition \(X+Y=I\); moreover, this condition automatically leads to \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\). For the sum decomposition \(T=T_{1}+T_{2}\) which satisfies \(\operatorname{dom}T=\operatorname{dom}T_{1}=\operatorname{dom}T_{2}\), while \(T_{1}\) is closable and \(T_{2}\) is singular, one now requires that \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\), in which case the decomposition of any \(g\in\operatorname{ran}T\) as \(g=g_{1}+g_{2}\) with \(g_{1}\in\operatorname{ran}T_{1}\) and \(g_{2}\in\operatorname{ran}T_{2}\) leads to the inequality
\[\|g\|_{\mathfrak{K}}^{2}\leq\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y}}^{2}.\]
The further condition is that instead of this inequality there is the Pythagorean equality
\[\|g\|_{\mathfrak{K}}^{2}=\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y}}^{2}.\]
In this case one speaks of a _pseudo-orthogonal Lebesgue type decomposition_ of \(T\). In the orthogonal case the closed linear subspaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) are isometrically contained in \(\mathfrak{K}\) and \(\operatorname{ran}T_{1}\perp\operatorname{ran}T_{2}\), so that the Pythagorean equality is automatically satisfied. The new feature with complemented operator range spaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) which are contractively contained in the original Hilbert space \(\mathfrak{K}\) is that there is an overlapping space \(\mathfrak{X}\cap\mathfrak{Y}\); moreover \(\mathfrak{X}\cap\mathfrak{Y}\) contains the intersection \(\operatorname{ran}X\cap\operatorname{ran}Y\). This overlapping space has consequences for the interaction of the components \(T_{1}\) and \(T_{2}\): it may now happen that \(\overline{\operatorname{ran}}\,T_{1}\cap\overline{\operatorname{ran}}\,T_{2}\) is nontrivial, or even that \(\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\) is nontrivial.
It will be shown that the pseudo-orthogonal Lebesgue type decompositions of a linear relation \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) are all of the form
\[T=T_{1}+T_{2}\quad\text{with}\quad T_{1}=(I-K)T,\quad T_{2}=KT,\]
where \(K\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction for which \((I-K)T\) is closable and \(KT\) is singular. The pseudo-orthogonal Lebesgue type decomposition is orthogonal precisely if \(K\) is an orthogonal projection. The usual Lebesgue decomposition \(T=T_{\mathrm{reg}}+T_{\mathrm{sing}}\) is orthogonal and it is uniquely defined by the property that \(T_{\mathrm{reg}}\) is the largest closable part of \(T\) among all pseudo-orthogonal decompositions of \(T\). Furthermore, there is a characterization of the situation where the Lebesgue decomposition is the only pseudo-orthogonal Lebesgue type decomposition of \(T\). In the special case that \(T\) is an operator range relation, i.e.,
\[T=\big{\{}\{\Phi\eta,\Psi\eta\}:\,\eta\in\mathfrak{E}\big{\}},\]
where \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\), \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\), and \(\mathfrak{E}\), \(\mathfrak{H}\), and \(\mathfrak{K}\) are Hilbert spaces, the pseudo-orthogonal Lebesgue type decompositions of \(T\) translate into the so-called pseudo-orthogonal Lebesgue type decompositions of the operator \(\Psi\) in terms of \(\Phi\); for the orthogonal case and the notion of Radon-Nikodym derivative, see [15].
The contents of the paper are now described. A short introduction to pairs of complemented operator range spaces can be found in Section 2. This section is modeled on the relevant appendix in [6]. General pseudo-orthogonal decompositions of a linear relation are introduced in Section 3, where some material on the occurrence of overlapping in a pseudo-orthogonal decomposition can also be found. For a relation \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and a nonnegative contraction \(R\in\mathbf{B}(\mathfrak{K})\) there are a number of criteria in Section 4 under which the product relation \(RT\) is regular or singular. In Section 5 the previous characterizations are used to study the pseudo-orthogonal Lebesgue type decompositions of a linear relation \(T\). The particular case where \(T\) is
an operator range relation is briefly reviewed in Section 6; see [15] for the orthogonal case and the corresponding Radon-Nikodym derivatives.
The Lebesgue decomposition for measures and the associated Radon-Nikodym derivatives for their absolutely continuous parts have seen many generalizations to more abstract settings. At this stage it suffices to mention the work of Dye [8] and Henle [17]. The second half of the seventies saw the work of Ando [1] for pairs of nonnegative operators and the work of Simon [31] for nonnegative forms. This led to many papers devoted to related contexts, such as \(C^{*}\)-algebras and the theory of positive maps; see, for instance, the references in [2, 10, 11, 24]; note also, more recently, [5, 39], as well as the construction of Lebesgue decompositions of non-commutative measures in the multi-variable setting into absolutely continuous and singular parts, via Lebesgue decompositions for quadratic forms and via reproducing kernel space techniques; see [22, 23]. Shortly after the papers of Ando and Simon appeared the work of Jorgensen [21] and Ota [25, 26, 27], which was devoted to the decompositions of linear operators. This context (linear operators and also linear relations) was taken up in [13] and later in [12, 15]. The Lebesgue type decompositions in those papers were orthogonal, whereas in the present paper the pseudo-orthogonal case is dealt with. Decomposition results in the context of forms, based on a Hilbert space decomposition similar to the de Branges-Rovnyak decomposition (as worked out at the end of Section 2), will appear in [16] and include the results in Simon [31]. In the pseudo-orthogonal decompositions the notion of overlapping spaces appears in a natural way. Furthermore, the pseudo-orthogonal situation for a pair of nonnegative bounded operators (as in [1]) and for a pair of forms on a linear space (as in [11]) can also be treated in the context of the associated linear relations; this is connected to the recent work by Z. Sebestyen, Z. Szucs, Z. Tarcsay, and T. Titkos [29, 30, 32, 33, 34, 35, 36, 37, 38].
## 2. Pseudo-orthogonal decompositions of a Hilbert space
This section provides a short review of pseudo-orthogonal decompositions of a Hilbert space. These decompositions involve linear subspaces of such a Hilbert space, which are generated by a pair of nonnegative contractions. In such decompositions there is in general an overlapping of the summands; see [4, 6, 7, 9]. At the end of this section there is a brief discussion of an analogous overlapping decomposition; see [16].
This review begins with the notion of an operator range space. Let \(\mathfrak{K}\) be a Hilbert space and let \(A\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction. Provide the range \(\mathfrak{A}=\operatorname{ran}A^{\frac{1}{2}}\), a subspace of \(\mathfrak{K}\), with the inner product
\[(A^{\frac{1}{2}}\varphi,A^{\frac{1}{2}}\psi)_{\mathfrak{A}}=(\pi\varphi,\pi \psi)_{\mathfrak{K}},\quad\varphi,\psi\in\mathfrak{K}, \tag{2.1}\]
where \(\pi\) is the orthogonal projection in \(\mathfrak{K}\) onto \(\overline{\operatorname{ran}}A^{\frac{1}{2}}=(\ker A^{\frac{1}{2}})^{\perp}\). Note that it follows from (2.1) that the mapping
\[\varphi\mapsto A^{\frac{1}{2}}\varphi,\quad\varphi\in\overline{\operatorname {ran}}A^{\frac{1}{2}}, \tag{2.2}\]
is unitary from \(\overline{\operatorname{ran}}A^{\frac{1}{2}}\) onto \(\mathfrak{A}\). Clearly \(\mathfrak{A}\) with this inner product is a Hilbert space. Since \(A\) is a contraction, one has
\[\|A^{\frac{1}{2}}\varphi\|_{\mathfrak{K}}=\|A^{\frac{1}{2}}\pi\varphi\|_{ \mathfrak{K}}\leq\|\pi\varphi\|_{\mathfrak{K}},\quad\varphi\in\mathfrak{K},\]
and, hence, the identity (2.1) shows that
\[\|A^{\frac{1}{2}}\varphi\|_{\mathfrak{A}}\geq\|A^{\frac{1}{2}}\varphi\|_{\mathfrak{K}},\quad\varphi\in\mathfrak{K}. \tag{2.3}\]
It is a consequence of (2.1) that
\[(A^{\frac{1}{2}}\varphi,A\psi)_{\mathfrak{A}}=(A^{\frac{1}{2}}\varphi,\psi)_{ \mathfrak{K}},\quad\varphi,\psi\in\mathfrak{K}, \tag{2.4}\]
which shows that the linear space \(\operatorname{ran}A\) is dense in the Hilbert space \(\mathfrak{A}\). Moreover, (2.4) leads to the useful identities
\[(A\varphi,A\psi)_{\mathfrak{A}}=(A\varphi,\psi)_{\mathfrak{K}}\quad\text{and} \quad(A^{\frac{1}{2}}\varphi,AA^{\frac{1}{2}}\psi)_{\mathfrak{A}}=(A^{\frac{1} {2}}\varphi,A^{\frac{1}{2}}\psi)_{\mathfrak{K}},\quad\varphi,\psi\in\mathfrak{K}. \tag{2.5}\]
It is clear that \(A\) maps \(\mathfrak{A}=\operatorname{ran}A^{\frac{1}{2}}\) into itself; in fact, it can be seen from (2.5) that \(A\) is nonnegative in \(\mathfrak{A}\) and that \(A\) maps \(\mathfrak{A}\) contractively into itself. Note that if \(A\) is an orthogonal projection in \(\mathfrak{K}\), then \(\pi=A\) and
\[(A^{\frac{1}{2}}\varphi,A^{\frac{1}{2}}\psi)_{\mathfrak{A}}=(A^{\frac{1}{2}} \varphi,A^{\frac{1}{2}}\psi)_{\mathfrak{K}},\quad\varphi,\psi\in\mathfrak{K}, \tag{2.6}\]
so that the inner product \((\cdot,\cdot)_{\mathfrak{A}}\) on \(\operatorname{ran}A^{\frac{1}{2}}=\operatorname{ran}A\) coincides with the inner product of \(\mathfrak{K}\).
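In finite dimensions the inner product (2.1) can be evaluated directly, since \((A^{\frac{1}{2}})^{+}A^{\frac{1}{2}}=\pi\) for the Moore-Penrose pseudoinverse \((A^{\frac{1}{2}})^{+}\). The following numpy sketch is only an illustration of (2.1), (2.3), and (2.4) on a sample contraction; the helper `ip_A` is an ad hoc name, not notation from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# A nonnegative contraction A with a nontrivial kernel.
d = np.array([0.9, 0.4, 0.1, 0.0])
A = Q @ np.diag(d) @ Q.T
Ah = Q @ np.diag(d**0.5) @ Q.T          # A^{1/2}
Ah_pinv = np.linalg.pinv(Ah)            # (A^{1/2})^+, so Ah_pinv @ Ah = pi

def ip_A(u, v):
    # Inner product (2.1) on ran A^{1/2}: for u = A^{1/2} p and v = A^{1/2} q
    # one has (u, v)_A = (pi p, pi q)_K.
    return np.dot(Ah_pinv @ u, Ah_pinv @ v)

p, q = rng.standard_normal(n), rng.standard_normal(n)
u = Ah @ p

# (2.3): the A-norm dominates the K-norm on ran A^{1/2}.
print(ip_A(u, u) >= np.dot(u, u) - 1e-12)          # True

# (2.4): (A^{1/2} p, A q)_A = (A^{1/2} p, q)_K.
print(abs(ip_A(u, A @ q) - np.dot(u, q)) < 1e-10)  # True
```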
The equality (2.1) and the inequality (2.3) can be formalized. Recall that a linear subspace \(\mathfrak{M}\) of a Hilbert space \(\mathfrak{K}\) is called a _contractive operator range space_, when \(\mathfrak{M}\) has an inner product \((\cdot,\cdot)_{\mathfrak{M}}\), such that
1. \(\|\varphi\|_{\mathfrak{K}}\leq\|\varphi\|_{\mathfrak{M}}\), \(\varphi\in\mathfrak{M}\);
2. \(\mathfrak{M}\) with the inner product \((\cdot,\cdot)_{\mathfrak{M}}\) is a Hilbert space.
It is clear that the space \(\mathfrak{A}\) above is an example of a contractive operator range space. In fact, it is the only example; see [15].
The interest in this section is in pairs of nonnegative contractions \(X,Y\in\mathbf{B}(\mathfrak{K})\) with the connecting property \(X+Y=I\). For the convenience of the reader some simple but useful facts are presented.
**Lemma 2.1**.: _Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions with \(X+Y=I\). Then \(XY=YX\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction with_
\[\ker\,XY=\ker\,X\oplus\ker\,Y. \tag{2.7}\]
_Moreover, the following identities hold:_
\[\left\{\begin{array}{l}\operatorname{ran}X\cap\operatorname{ran}Y= \operatorname{ran}XY,\\ \operatorname{ran}X^{\frac{1}{2}}\cap\operatorname{ran}Y^{\frac{1}{2}}= \operatorname{ran}X^{\frac{1}{2}}Y^{\frac{1}{2}},\\ \overline{\operatorname{ran}}\,X\cap\overline{\operatorname{ran}}\,Y= \overline{\operatorname{ran}}\,XY.\end{array}\right. \tag{2.8}\]
_Consequently, each of the following statements_
\[\operatorname{ran}X\cap\operatorname{ran}Y=\{0\},\quad\operatorname{ran}X^{ \frac{1}{2}}\cap\operatorname{ran}Y^{\frac{1}{2}}=\{0\},\]
\((\)_or, similarly, with the closures of the ranges\()\) and, in particular, \(\operatorname{ran}X\perp\operatorname{ran}Y\) or \(\operatorname{ran}X^{\frac{1}{2}}\perp\operatorname{ran}Y^{\frac{1}{2}}\), is equivalent to the nonnegative contractions \(X\) and \(Y\) being orthogonal projections._
Proof.: The commutativity of \(X\) and \(Y\) and of their square roots is clear. Hence the nonnegativity of the product \(XY\) follows from \(XY=X^{\frac{1}{2}}YX^{\frac{1}{2}}\). Note that \(\ker\,X\) and \(\ker\,Y\) are perpendicular in \(\mathfrak{K}\). It is clear that the right-hand side of (2.7) is contained in the left-hand side. To show the remaining inclusion let \(h\in\ker\,XY\). Then \(h=Yh+Xh\) with \(Yh\in\ker\,X\) and \(Xh\in\ker\,Y\).
To see the first identity in (2.8), let \(g\in\operatorname{ran}X\cap\operatorname{ran}Y\). Then clearly one has \(g=Xh=Yk\) for some \(h,k\in\mathfrak{K}\). Hence \(h=Y(h+k)\in\operatorname{ran}Y\) and \(g\in\operatorname{ran}XY\). The reverse inclusion is clear. For the second identity in (2.8), let \(g\in\operatorname{ran}X^{\frac{1}{2}}\cap\operatorname{ran}Y^{\frac{1}{2}}\). Then one has, similarly, \(g=X^{\frac{1}{2}}h=Y^{\frac{1}{2}}k\) for some \(h,k\in\mathfrak{K}\). Hence
\[h=Yh+X^{\frac{1}{2}}Y^{\frac{1}{2}}k\in\operatorname{ran}Y^{\frac{1}{2}}\quad \text{and}\quad g\in\operatorname{ran}X^{\frac{1}{2}}Y^{\frac{1}{2}}.\]
The reverse inclusion is clear.
By taking orthogonal complements in (2.7) one obtains the third identity in (2.8) for the closures of the ranges.
Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions with \(X+Y=I\). Let \(\mathfrak{X}=\operatorname{ran}X^{\frac{1}{2}}\) and \(\mathfrak{Y}=\operatorname{ran}Y^{\frac{1}{2}}\) be the corresponding operator range spaces; cf. (2.1). Then the Hilbert space \(\mathfrak{K}\) has a decomposition of the form
\[\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}. \tag{2.9}\]
This can be seen as follows. By definition one has \(\mathfrak{X}\subset\mathfrak{K}\) and \(\mathfrak{Y}\subset\mathfrak{K}\), so that the right-hand side of (2.9) is contained in the left-hand side. For the converse, observe that for all \(h\in\mathfrak{K}\) one has \(h=Xh+Yh\) with \(Xh\in\mathfrak{X}\) and \(Yh\in\mathfrak{Y}\), which gives \(\mathfrak{K}\subset\mathfrak{X}+\mathfrak{Y}\). The intersection \(\mathfrak{L}=\mathfrak{X}\cap\mathfrak{Y}\) is called the _overlapping space_ of the Hilbert spaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) with respect to the decomposition (2.9). It is characterized in the following lemma.
**Lemma 2.2**.: _Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions with \(X+Y=I\). The overlapping space \(\mathfrak{L}=\mathfrak{X}\cap\mathfrak{Y}\) is an operator range space associated with \(X^{\frac{1}{2}}Y^{\frac{1}{2}}\), whose inner product satisfies_
\[(\varphi,\psi)_{\mathfrak{L}}=(\varphi,\psi)_{\mathfrak{X}}+(\varphi,\psi)_{ \mathfrak{Y}},\quad\varphi,\psi\in\mathfrak{X}\cap\mathfrak{Y}. \tag{2.10}\]
Proof.: The overlapping \(\mathfrak{L}=\mathfrak{X}\cap\mathfrak{Y}\) in (2.9) is a linear space given by \(\mathfrak{L}=\operatorname{ran}X^{\frac{1}{2}}Y^{\frac{1}{2}}\), as follows from Lemma 2.1. To see (2.10) first observe for \(h,k\in\mathfrak{K}\) that the identity \(X+Y=I\) gives
\[(h,k)_{\mathfrak{K}}=(Yh,k)_{\mathfrak{K}}+(Xh,k)_{\mathfrak{K}}=(Y^{\frac{1} {2}}h,Y^{\frac{1}{2}}k)_{\mathfrak{K}}+(X^{\frac{1}{2}}h,X^{\frac{1}{2}}k)_{ \mathfrak{K}}. \tag{2.11}\]
Note that \(Y^{\frac{1}{2}}\overline{\operatorname{ran}}\,X^{\frac{1}{2}}Y^{\frac{1}{2}} \subset\overline{\operatorname{ran}}\,X^{\frac{1}{2}}\) and \(X^{\frac{1}{2}}\overline{\operatorname{ran}}\,X^{\frac{1}{2}}Y^{\frac{1}{2}} \subset\overline{\operatorname{ran}}\,Y^{\frac{1}{2}}\). Hence, if in (2.11) one takes \(h,k\in\overline{\operatorname{ran}}\,X^{\frac{1}{2}}Y^{\frac{1}{2}}\), then it follows that
\[(X^{\frac{1}{2}}Y^{\frac{1}{2}}h,X^{\frac{1}{2}}Y^{\frac{1}{2}}k)_{\mathfrak{ L}}=(X^{\frac{1}{2}}Y^{\frac{1}{2}}h,X^{\frac{1}{2}}Y^{\frac{1}{2}}k)_{ \mathfrak{X}}+(Y^{\frac{1}{2}}X^{\frac{1}{2}}h,Y^{\frac{1}{2}}X^{\frac{1}{2}} k)_{\mathfrak{Y}}.\]
Moreover, it is clear that the last identity holds for all \(h,k\in\mathfrak{K}\). Therefore, the inner product on \(\mathfrak{L}\) satisfies (2.10).
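The identity (2.10) can again be tested numerically. The sketch below is illustrative only, with norms in the operator range spaces computed via pseudoinverses as in the previous sketch; it uses an invertible pair, so that the overlapping space is all of \(\mathfrak{K}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.array([0.2, 0.5, 0.8])

# X + Y = I with neither X nor Y an orthogonal projection.
Xh = Q @ np.diag(d**0.5) @ Q.T          # X^{1/2}
Yh = Q @ np.diag((1 - d)**0.5) @ Q.T    # Y^{1/2}

def norm_sq(B, u):
    # Squared norm of u in the operator range space generated by B = A^{1/2}.
    w = np.linalg.pinv(B) @ u
    return np.dot(w, w)

h = rng.standard_normal(n)
phi = Xh @ Yh @ h                        # an element of the overlapping space

lhs = norm_sq(Xh @ Yh, phi)              # squared norm in L
rhs = norm_sq(Xh, phi) + norm_sq(Yh, phi)
print(abs(lhs - rhs) < 1e-10)            # True, cf. (2.10)
```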
Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions with \(X+Y=I\). Provide the Cartesian product \(\mathfrak{X}\times\mathfrak{Y}\) with the inner product generated by \(\mathfrak{X}\) and \(\mathfrak{Y}\), respectively. Define the column operator \(V\) from \(\mathfrak{K}\) to \(\mathfrak{X}\times\mathfrak{Y}\) by
\[V=\operatorname{col}\,(X,Y)=\left\{\left\{h,\binom{Xh}{Yh}\right\}:\,h\in \mathfrak{K}\right\}. \tag{2.12}\]
The operator \(V\) is clearly isometric, since
\[\|Xh\|_{\mathfrak{X}}^{2}+\|Yh\|_{\mathfrak{Y}}^{2}=(Xh,h)_{\mathfrak{K}}+(Yh,h)_{\mathfrak{K}}=((X+Y)h,h)_{\mathfrak{K}}=\|h\|_{\mathfrak{K}}^{2},\quad h\in\mathfrak{K}, \tag{2.13}\]
cf. (2.5). Hence, \(V\) is a closed operator and \(\operatorname{ran}V\) is closed. In general, the isometry \(V\) does not map onto \(\mathfrak{X}\times\mathfrak{Y}\).
**Proposition 2.3**.: _Let \(\mathfrak{K}\) be a Hilbert space, let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions, and assume that \(X+Y=I\). Let the column operator \(V\) be given by (2.12). Then the adjoint mapping \(V^{*}\) from \(\mathfrak{X}\times\mathfrak{Y}\) to \(\mathfrak{K}\) is a partial isometry, given by_
\[V^{*}\begin{pmatrix}f\\ g\end{pmatrix}=f+g,\quad f\in\mathfrak{X},\ g\in\mathfrak{Y}. \tag{2.14}\]
_Consequently, for all \(f\in\mathfrak{X}\) and \(g\in\mathfrak{Y}\), there is the inequality_
\[\|f+g\|_{\mathfrak{K}}^{2}\leq\|f\|_{\mathfrak{X}}^{2}+\|g\|_{\mathfrak{Y}}^{2}, \tag{2.15}\]
_with equality in (2.15) if and only if \(f=Xh\) and \(g=Yh\) for some \(h\in\mathfrak{K}\)._
Proof.: A simple calculation gives for all \(f\in\mathfrak{X}\), \(g\in\mathfrak{Y}\), and \(h\in\mathfrak{K}\) that
\[\begin{split}\left(V^{*}\begin{pmatrix}f\\ g\end{pmatrix},h\right)_{\mathfrak{K}}&=\left(\begin{pmatrix}f\\ g\end{pmatrix},Vh\right)_{\mathfrak{X}\times\mathfrak{Y}}=\left(\begin{pmatrix} f\\ g\end{pmatrix},\begin{pmatrix}Xh\\ Yh\end{pmatrix}\right)_{\mathfrak{X}\times\mathfrak{Y}}\\ &=(f,Xh)_{\mathfrak{X}}+(g,Yh)_{\mathfrak{Y}}=(f,h)_{\mathfrak{K}}+(g,h)_{ \mathfrak{K}}\\ &=(f+g,h)_{\mathfrak{K}},\end{split}\]
which follows from (2.4); the identity shows (2.14). Since \(V\) is an isometry, \(V^{*}\) is partially isometric and, in particular, \(V^{*}\) is contractive. Moreover, according to (2.9), the mapping \(V^{*}\) is onto. Thus (2.14) implies (2.15). Finally, there is equality in (2.15) if and only if \(\binom{f}{g}\in\operatorname{ran}V\).
The connection between the overlapping space \(\mathfrak{L}=\mathfrak{X}\cap\mathfrak{Y}\) and the range of the isometry \(V\) is now clear.
**Proposition 2.4**.: _The isometry \(V\) satisfies_
\[\left(\operatorname{ran}V\right)^{\perp}=\left\{\begin{pmatrix}-X^{\frac{1}{2 }}Y^{\frac{1}{2}}k\\ X^{\frac{1}{2}}Y^{\frac{1}{2}}k\end{pmatrix}:\,k\in\overline{\operatorname{ ran}}\,XY\right\}. \tag{2.16}\]
_Moreover, \(V\) is surjective if and only if \(X\) and \(Y\) are orthogonal projections._
Proof.: It is clear that \((\operatorname{ran}V)^{\perp}=\ker\,V^{*}\) and (2.14) shows that
\[\ker\,V^{*}=\left\{\begin{pmatrix}\varphi\\ -\varphi\end{pmatrix}:\,\varphi\in\mathfrak{L}\right\}. \tag{2.17}\]
Now apply Lemma 2.2 to obtain the assertion (2.16). In particular, (2.17) and the isometric property of \(V\), cf. (2.13), show that \(V\) is surjective if and only if \(\mathfrak{L}=\{0\}\). The conclusion follows from Lemma 2.2.
Recall that \(X\) and \(Y\) act as nonnegative contractions in \(\mathfrak{X}\) and \(\mathfrak{Y}\), respectively. The next corollary presents the orthogonal projection \(VV^{*}\) as a common dilation in the Hilbert space \(\mathfrak{X}\times\mathfrak{Y}\) for this pair of nonnegative contractions; cf. [28].
**Corollary 2.5**.: _The orthogonal projection \(VV^{*}\) onto \(\operatorname{ran}V\) is given by_
\[VV^{*}\begin{pmatrix}f\\ g\end{pmatrix}=\begin{pmatrix}X&X\\ Y&Y\end{pmatrix}\begin{pmatrix}f\\ g\end{pmatrix},\quad f\in\mathfrak{X},\quad g\in\mathfrak{Y}.\]
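As a finite-dimensional plausibility check (illustrative only): the isometry property (2.13) and the idempotency of the block matrix in Corollary 2.5 can be verified numerically. Note that the self-adjointness of \(VV^{*}\) refers to the inner product of \(\mathfrak{X}\times\mathfrak{Y}\), not to the Euclidean one, so only idempotency is checked below.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.array([0.3, 0.6, 0.9])
X = Q @ np.diag(d) @ Q.T
Y = np.eye(n) - X
Xh = Q @ np.diag(d**0.5) @ Q.T          # X^{1/2}
Yh = Q @ np.diag((1 - d)**0.5) @ Q.T    # Y^{1/2}

def norm_sq(B, u):
    # squared norm in the operator range space generated by B = A^{1/2}
    w = np.linalg.pinv(B) @ u
    return np.dot(w, w)

# (2.13): V h = (Xh, Yh) is isometric from K into X x Y.
h = rng.standard_normal(n)
print(abs(norm_sq(Xh, X @ h) + norm_sq(Yh, Y @ h) - np.dot(h, h)) < 1e-10)

# Corollary 2.5: the block matrix representing VV* is idempotent.
M = np.block([[X, X], [Y, Y]])
print(np.allclose(M @ M, M))            # True
```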
The terminology in the following definition will be used in the rest of this paper.
**Definition 2.6**.: _Let \(\mathfrak{X}\) and \(\mathfrak{Y}\) be linear subspaces of the Hilbert space \(\mathfrak{K}\). Then \(\mathfrak{K}\) is said to have a pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\) if_
* \(\mathfrak{X}\) _and_ \(\mathfrak{Y}\) _are contractive operator range spaces that are contractively contained in_ \(\mathfrak{K}\)_;_
* _the corresponding nonnegative contractions_ \(X\) _and_ \(Y\) _satisfy_ \(X+Y=I\)_._
Recall that the condition \(X+Y=I\) is equivalent to the condition
\[\|h\|_{\mathfrak{K}}^{2}=\|Xh\|_{\mathfrak{X}}^{2}+\|Yh\|_{\mathfrak{Y}}^{2}, \quad h\in\mathfrak{K},\]
cf. (2.13) and Proposition 2.3. Moreover, if \(X\) and \(Y\) in Definition 2.6 are orthogonal projections, then the definition reduces to the usual orthogonal decomposition as the contractive operator range spaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) are closed linear subspaces of \(\mathfrak{K}\); cf. Lemma 2.1 and (2.6).
At the end of the section a closely related situation will be reviewed for nonnegative contractions \(X,Y\in\mathbf{B}(\mathfrak{K})\) which satisfy \(X+Y=I\). Provide the closed linear subspaces \(\mathfrak{K}_{1}=\overline{\operatorname{ran}}\,X\) and \(\mathfrak{K}_{2}=\overline{\operatorname{ran}}\,Y\) with the inner product inherited from \(\mathfrak{K}\). It is clear that the Hilbert space \(\mathfrak{K}\) has the decomposition
\[\mathfrak{K}=\mathfrak{K}_{1}+\mathfrak{K}_{2}. \tag{2.18}\]
The intersection \(\mathfrak{K}_{1}\cap\mathfrak{K}_{2}\) is called the _overlapping space_ of the Hilbert spaces \(\mathfrak{K}_{1}\) and \(\mathfrak{K}_{2}\) with respect to the decomposition (2.18). It is characterized by
\[\mathfrak{K}_{1}\cap\mathfrak{K}_{2}=\overline{\operatorname{ran}}\,X\cap \overline{\operatorname{ran}}\,Y=\overline{\operatorname{ran}}\,XY;\]
see Lemma 2.1.
Note that the column operator \(W=\operatorname{col}\,(X^{\frac{1}{2}},Y^{\frac{1}{2}})\), defined by
\[W=\left\{\left\{h,\begin{pmatrix}X^{\frac{1}{2}}h\\ Y^{\frac{1}{2}}h\end{pmatrix}\right\}:\,h\in\mathfrak{K}\right\}, \tag{2.19}\]
is a closed isometric mapping from \(\mathfrak{K}\) to \(\mathfrak{K}_{1}\times\mathfrak{K}_{2}\). The mapping \(W\) in (2.19) is closely related to the mapping \(V\) in (2.12). To see this, first observe that the operator matrix
\[U=\begin{pmatrix}X^{\frac{1}{2}}&0\\ 0&Y^{\frac{1}{2}}\end{pmatrix}:\,\begin{pmatrix}\mathfrak{K}_{1}\\ \mathfrak{K}_{2}\end{pmatrix}\to\begin{pmatrix}\mathfrak{X}\\ \mathfrak{Y}\end{pmatrix} \tag{2.20}\]
between the indicated Hilbert spaces is a unitary mapping; compare this with the property (2.2) of the operator \(A\in\mathbf{B}(\mathfrak{K})\) in (2.1). Next observe that \(U\) connects the operators \(W\) and \(V\) via
\[UW=V. \tag{2.21}\]
Hence the following result is a consequence of Proposition 2.3.
**Proposition 2.7**.: _Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions with \(X+Y=I\). Let the column operator \(W\) be given by (2.19). Then the adjoint mapping \(W^{*}\) from \(\mathfrak{K}_{1}\times\mathfrak{K}_{2}\) to \(\mathfrak{K}\) is a partial isometry, given by_
\[W^{*}\begin{pmatrix}f\\ g\end{pmatrix}=X^{\frac{1}{2}}f+Y^{\frac{1}{2}}g,\quad f\in\mathfrak{K}_{1},\,g \in\mathfrak{K}_{2}. \tag{2.22}\]
_Consequently, for all \(f\in\mathfrak{K}_{1}\) and \(g\in\mathfrak{K}_{2}\), there is the inequality_
\[\|X^{\frac{1}{2}}f+Y^{\frac{1}{2}}g\|_{\mathfrak{K}}^{2}\leq\|f\|_{\mathfrak{ K}}^{2}+\|g\|_{\mathfrak{K}}^{2},\]
_with equality if and only if \(f=X^{\frac{1}{2}}h\) and \(g=Y^{\frac{1}{2}}h\) for some \(h\in\mathfrak{K}\)._
Furthermore, the isometry \(W\) is not onto in general and the intersection of \(\overline{\operatorname{ran}}\,X\) and \(\overline{\operatorname{ran}}\,Y\) comes into play.
**Proposition 2.8**.: _The isometry \(W\) satisfies_
\[(\operatorname{ran}W)^{\perp}=\ker\,W^{*}=\left\{\begin{pmatrix}-Y^{\frac{1}{2} }k\\ X^{\frac{1}{2}}k\end{pmatrix}:\,k\in\overline{\operatorname{ran}}\,XY\right\}. \tag{2.23}\]
_Moreover, \(W\) is surjective if and only if \(X\) and \(Y\) are orthogonal projections._
Proof.: The operator \(U\) in (2.20) maps \(\ker\,W^{*}\) in (2.23) onto \(\ker\,V^{*}\) in (2.16). Hence the assertion (2.23) follows from Proposition 2.4. The characterization of surjectivity follows from (2.21) and Proposition 2.4.
Note that \(X\) and \(Y\) act as nonnegative contractions in \(\mathfrak{K}_{1}\) and \(\mathfrak{K}_{2}\), respectively. The following corollary presents the orthogonal projection \(WW^{*}\) as a common dilation in the Hilbert space \(\mathfrak{K}_{1}\times\mathfrak{K}_{2}\) for this pair of nonnegative contractions; cf. [28]. It can be seen as a consequence of Corollary 2.5, since \(WW^{*}=U^{*}VV^{*}U\).
**Corollary 2.9**.: _The orthogonal projection \(WW^{*}\) onto \(\operatorname{ran}W\) is given by_
\[WW^{*}\begin{pmatrix}f\\ g\end{pmatrix}=\begin{pmatrix}X&X^{\frac{1}{2}}Y^{\frac{1}{2}}\\ Y^{\frac{1}{2}}X^{\frac{1}{2}}&Y\end{pmatrix}\begin{pmatrix}f\\ g\end{pmatrix},\quad f\in\mathfrak{K}_{1},\,g\in\mathfrak{K}_{2}.\]
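Since \(\mathfrak{K}_{1}\) and \(\mathfrak{K}_{2}\) carry the inner product of \(\mathfrak{K}\), the block matrix of Corollary 2.9 is, in a finite-dimensional model, an orthogonal projection in the ordinary sense. A short illustrative check (with the same kind of sample contractions as before):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.array([0.3, 0.6, 0.9])
X = Q @ np.diag(d) @ Q.T
Y = np.eye(n) - X
Xh = Q @ np.diag(d**0.5) @ Q.T          # X^{1/2}
Yh = Q @ np.diag((1 - d)**0.5) @ Q.T    # Y^{1/2}

# The block matrix of WW* is symmetric and idempotent, i.e., an
# orthogonal projection of K_1 x K_2 onto ran W.
M = np.block([[X, Xh @ Yh], [Yh @ Xh, Y]])
print(np.allclose(M, M.T), np.allclose(M @ M, M))   # True True
```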
The model involving \(\mathfrak{K}_{1}=\overline{\operatorname{ran}}\,X\) and \(\mathfrak{K}_{2}=\overline{\operatorname{ran}}\,Y\) is connected to the de Branges-Rovnyak model involving \(\mathfrak{X}=\operatorname{ran}\,X^{\frac{1}{2}}\) and \(\mathfrak{Y}=\operatorname{ran}\,Y^{\frac{1}{2}}\) via the unitary mapping (2.20). The present model and the mapping \(W\) in (2.19) and its properties will play a role in the Lebesgue type decompositions of a single semibounded form [16].
## 3. Pseudo-orthogonal decompositions of a linear relation
In this section one can find a brief introduction to sum decompositions of linear operators or relations from a Hilbert space \(\mathfrak{H}\) to a Hilbert space \(\mathfrak{K}\) with respect to a so-called pseudo-orthogonal decomposition of \(\mathfrak{K}\). First some preliminary properties about sums of relations are discussed.
Let \(T_{1}\) and \(T_{2}\) belong to \(\mathbf{L}(\mathfrak{H},\mathfrak{K})\). The sum \(T_{1}+T_{2}\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) is defined by
\[T_{1}+T_{2}=\big{\{}\{f,f^{\prime}+f^{\prime\prime}\}:\,\{f,f^{\prime}\}\in T_ {1},\{f,f^{\prime\prime}\}\in T_{2}\big{\}}. \tag{3.1}\]
With the sum \(T=T_{1}+T_{2}\) it is clear that for the domains one has
\[\operatorname{dom}T=\operatorname{dom}T_{1}\cap\operatorname{dom}T_{2},\]
while it is straightforward to check for the ranges that there is an inclusion
\[\operatorname{ran}T\subset\operatorname{ran}T_{1}+\operatorname{ran}T_{2}.\]
However, for the multivalued parts there is equality
\[\operatorname{mul}T=\operatorname{mul}T_{1}+\operatorname{mul}T_{2}, \tag{3.2}\]
so that \(\operatorname{mul}T_{1}\subset\operatorname{mul}T\) and \(\operatorname{mul}T_{2}\subset\operatorname{mul}T\).
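In finite dimensions a relation can be represented by a matrix whose columns span its graph, and then the sum (3.1) and the identity (3.2) can be computed directly. The sketch below is only a toy model; the helpers `mul_part` and `graph_sum` are illustrative names, not notation from the text.

```python
import numpy as np
from scipy.linalg import null_space

def mul_part(F, G):
    # mul T = { g : {0, g} in T } for the relation spanned by columns of [F; G].
    N = null_space(F)
    return G @ N if N.size else np.zeros((G.shape[0], 0))

def graph_sum(F1, G1, F2, G2):
    # The sum (3.1): pairs {f, f' + f''} with {f, f'} in T1 and {f, f''} in T2.
    N = null_space(np.hstack([F1, -F2]))
    a, b = N[:F1.shape[1]], N[F1.shape[1]:]
    return F1 @ a, G1 @ a + G2 @ b

# T1 = span{ {e1, e1}, {0, e2} }, a multivalued relation in R^2 x R^2;
# T2 = span{ {e1, 2 e1} }, an operator on span{e1}.
F1 = np.array([[1., 0.], [0., 0.]]); G1 = np.eye(2)
F2 = np.array([[1.], [0.]]);         G2 = np.array([[2.], [0.]])

F, G = graph_sum(F1, G1, F2, G2)
print(mul_part(F, G))   # spans {0} x span{e2} = mul T1 + mul T2, cf. (3.2)
```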
**Definition 3.1**.: _The sum in (3.1) is said to be strict if the sum in (3.2) is direct, i.e.,_
\[\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}=\{0\}.\]
_In other words, the sum in (3.1) is strict precisely when the elements \(f^{\prime}\) and \(f^{\prime\prime}\) in (3.1) are uniquely determined by the sum \(f^{\prime}+f^{\prime\prime}\)._
In particular, the sum \(T=T_{1}+T_{2}\) is strict if either \(T_{1}\) or \(T_{2}\) is an operator. A variation on the theme of sums is given in the following lemma.
**Lemma 3.2**.: _Let \(T\), \(T_{1}\), and \(T_{2}\) belong to \(\mathbf{L}(\mathfrak{H},\mathfrak{K})\). Assume the domain equality \(\operatorname{dom}T=\operatorname{dom}T_{1}=\operatorname{dom}T_{2}\) and the inclusion_
\[T\subset T_{1}+T_{2}. \tag{3.3}\]
_Then there is equality \(T=T_{1}+T_{2}\) in (3.3) if and only if_
\[\operatorname{mul}T=\operatorname{mul}T_{1}+\operatorname{mul}T_{2}. \tag{3.4}\]
_Consequently, there is equality in (3.3) if and only if_
\[\operatorname{mul}T_{1}\subset\operatorname{mul}T\quad\text{and}\quad \operatorname{mul}T_{2}\subset\operatorname{mul}T.\]
Proof.: By assumption one has \(\operatorname{dom}T=\operatorname{dom}\left(T_{1}+T_{2}\right)\) and it follows from the inclusion (3.3) that \(\operatorname{mul}T\subset\operatorname{mul}\left(T_{1}+T_{2}\right)\). Hence, by an observation that goes back to Arens (see [3, Corollary 1.1.3]), there is equality \(T=T_{1}+T_{2}\) if and only if \(\operatorname{mul}\left(T_{1}+T_{2}\right)\subset\operatorname{mul}T\), i.e., (3.4) holds.
The next corollary illustrates a situation that will be of interest in the rest of the paper; cf. [12].
**Corollary 3.3**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions such that \(X+Y=I\). Then \(\operatorname{dom}T=\operatorname{dom}XT=\operatorname{dom}YT\) and, in addition,_
\[T\subset XT+YT. \tag{3.5}\]
_There is equality \(T=XT+YT\) in (3.5) if and only if_
\[\operatorname{mul}T=X\operatorname{mul}T+Y\operatorname{mul}T.\]
_Consequently, there is equality in (3.5) if and only if_
\[X\operatorname{mul}T\subset\operatorname{mul}T\quad\text{or, equivalently,}\quad Y \operatorname{mul}T\subset\operatorname{mul}T. \tag{3.6}\]
_Moreover, in this case_
\[X\operatorname{mul}T\,\cap\,Y\operatorname{mul}T=XY\operatorname{mul}T; \tag{3.7}\]
_thus the sum \(T=XT+YT\) is strict in the sense of Definition 3.1 if and only if \(\operatorname{mul}T\subset\ker\,XY\)._
Proof.: These assertions follow from Lemma 3.2 except the identity (3.7). To see (3.7) let \(h\in X\operatorname{mul}T\cap Y\operatorname{mul}T\), so that \(h=X\varphi=Y\psi\) where \(\varphi,\psi\in\operatorname{mul}T\). Now it follows from \((I-Y)\varphi=Y\psi\) that \(\varphi=Y(\varphi+\psi)\) with \(\varphi+\psi\in\operatorname{mul}T\). Thus \(h\in XY\operatorname{mul}T\), which shows that \(X\operatorname{mul}T\cap Y\operatorname{mul}T\subset XY\operatorname{mul}T\). The reverse inclusion follows immediately from (3.6).
The interest in this paper is in decompositions \(T=T_{1}+T_{2}\) with linear relations or operators going from a Hilbert space \(\mathfrak{H}\) to a Hilbert space \(\mathfrak{K}\), where \(\mathfrak{K}\) has a pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\). Before the formal definition is given, note that any element \(\{f,g\}\in T\) can be written as
\[\{f,g\}=\{f,g_{1}+g_{2}\},\quad\{f,g_{1}\}\in T_{1},\quad\{f,g_{2}\}\in T_{2},\quad g=g_{1}+g_{2}.\]
If \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\), then by Proposition 2.3 there is the general inequality
\[\|g\|_{\mathfrak{K}}^{2}\leq\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{ Y}}^{2}. \tag{3.8}\]
In the following definition a special class of such sum decompositions is introduced, involving a Pythagorean equality in (3.8).
**Definition 3.4**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and assume that \(\mathfrak{K}\) has a pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\) with associated nonnegative contractions \(X\) and \(Y\) such that \(X+Y=I\). Let \(T_{1}\) and \(T_{2}\) belong to \(\mathbf{L}(\mathfrak{H},\mathfrak{K})\), then the sum_
\[T=T_{1}+T_{2}\quad\text{with}\quad\operatorname{dom}T=\operatorname{dom}T_{1} =\operatorname{dom}T_{2}, \tag{3.9}\]
_is said to be a pseudo-orthogonal decomposition of \(T\) connected with the pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\)\((\)or, equivalently, with the pair of nonnegative contractions \(X\) and \(Y\) in \(\mathbf{B}(\mathfrak{K})\) with \(X+Y=I\)) if_
* \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) _and_ \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\)_;_
* _for every_ \(\{f,g\}\in T\) _with_ \(\{f,g_{1}\}\in T_{1}\)_,_ \(\{f,g_{2}\}\in T_{2}\)_,_ \(g=g_{1}+g_{2}\)_, one has_ \[\|g\|_{\mathfrak{K}}^{2}=\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y }}^{2}.\]
The definition of pseudo-orthogonal decompositions has an important consequence for the sum (3.9); cf. Definition 3.1.
**Lemma 3.5**.: _Let \(T=T_{1}+T_{2}\) in (3.9) be a pseudo-orthogonal decomposition. Then the sum is strict in the sense of Definition 3.1._
Proof.: Let \(\varphi\in\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}\). Then \(\{0,\varphi\}\in T_{1}\), \(\{0,-\varphi\}\in T_{2}\), and for the sum one sees that \(g=\varphi-\varphi=0\). It follows that \(\|\varphi\|_{\mathfrak{X}}^{2}+\|-\varphi\|_{\mathfrak{Y}}^{2}=0\). This shows that \(\varphi=0\). Therefore \(\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}=\{0\}\) and the sum \(T=T_{1}+T_{2}\) is strict.
The pseudo-orthogonal decompositions in Definition 3.4 will now be characterized by means of nonnegative contractions in \(\mathbf{B}(\mathfrak{K})\).
**Theorem 3.6**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) be a linear relation. Assume that \(K\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction which satisfies_
\[\operatorname{mul}T=(I-K)\operatorname{mul}T+K\operatorname{mul}T,\quad \text{direct sum}, \tag{3.10}\]
_and define_
\[T_{1}=(I-K)T\quad\text{and}\quad T_{2}=KT. \tag{3.11}\]
_Then the sum \(T=T_{1}+T_{2}\) in (3.9) is a pseudo-orthogonal decomposition of \(T\), connected with the pair \(I-K\) and \(K\) in the sense of Definition 3.4._
_Conversely, let the sum \(T=T_{1}+T_{2}\) in (3.9) be a pseudo-orthogonal decomposition of \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) in the sense of Definition 3.4. Then there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) for which (3.10) and (3.11) are satisfied._
Proof.: Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and let \(K\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction, such that (3.10) holds. By Corollary 3.3 the relations \(T_{1}=(I-K)T\) and \(T_{2}=KT\) in (3.11) satisfy \(\operatorname{dom}T_{1}=\operatorname{dom}T_{2}=\operatorname{dom}T\) and \(T\subset T_{1}+T_{2}\). Again by Corollary 3.3 and the identity in (3.10) there is the decomposition \(T=T_{1}+T_{2}\). Thus the identities in (3.9) are satisfied. Since the sum in (3.10) is direct, the sum \(T=T_{1}+T_{2}\) is strict.
Now let \(\mathfrak{X}\) and \(\mathfrak{Y}\) be the operator range spaces generated by the nonnegative contractions \(X=I-K\) and \(Y=K\), respectively. Clearly, \(\mathfrak{X}\) and \(\mathfrak{Y}\) form a pair of complemented spaces, contractively contained in \(\mathfrak{K}\), and furthermore
\[\operatorname{ran}T_{1}\subset\operatorname{ran}\left(I-K\right)\subset \mathfrak{X}\quad\text{and}\quad\operatorname{ran}T_{2}\subset\operatorname{ ran}K\subset\mathfrak{Y},\]
which gives (a) in Definition 3.4. In order to check the Pythagorean property (b) in Definition 3.4, let \(\{f,g\}\in T\). Then \(\{f,g\}=\{f,g_{1}+g_{2}\}\) with \(\{f,g_{1}\}\in T_{1}\) and
\(\{f,g_{2}\}\in T_{2}\). Since the sum \(T=T_{1}+T_{2}\) is strict, one sees \(g_{1}=(I-K)g\) and \(g_{2}=Kg\). This implies with (2.5) that
\[\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y}}^{2} =\|(I-K)g\|_{\mathfrak{X}}^{2}+\|Kg\|_{\mathfrak{Y}}^{2}\] \[=((I-K)g,g)_{\mathfrak{K}}+(Kg,g)_{\mathfrak{K}}=\|g\|_{ \mathfrak{K}}^{2},\]
and the Pythagorean property has been shown. Hence the conditions in Definition 3.4 are satisfied.
Conversely, let \(T=T_{1}+T_{2}\) be a pseudo-orthogonal decomposition of \(T\) of the form (3.9). Let \(\{f,g\}\in T\), then by (a) and (b) of Definition 3.4 one has for all \(\{f,g_{1}\}\in T_{1}\) and \(\{f,g_{2}\}\in T_{2}\) with \(g=g_{1}+g_{2}\), that
\[\|g\|_{\mathfrak{K}}^{2}=\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y} }^{2}.\]
Thanks to this Pythagorean identity and Proposition 2.3 one obtains \(g_{1}=Xg\) and \(g_{2}=Yg\), which shows \(\{f,Xg\}=\{f,g_{1}\}\in T_{1}\) and \(\{f,Yg\}=\{f,g_{2}\}\in T_{2}\). Consequently, one sees the inclusions
\[XT\subset T_{1}\quad\text{and}\quad YT\subset T_{2}. \tag{3.12}\]
By definition, \(\operatorname{dom}T=\operatorname{dom}T_{1}\) and \(\operatorname{dom}T=\operatorname{dom}T_{2}\), and it follows from (3.12) and [3, Proposition 1.1.2] that
\[T_{1}=XT\stackrel{{\frown}}{{+}}(\{0\}\times\operatorname{mul}T _{1})\quad\text{and}\quad T_{2}=YT\stackrel{{\frown}}{{+}}(\{0\} \times\operatorname{mul}T_{2}); \tag{3.13}\]
here "\(\stackrel{{\frown}}{{+}}\)" stands for the componentwise sum (linear spans) of the graphs. Observe that it follows from (3.12) that \(X\operatorname{mul}T\subset\operatorname{mul}T_{1}\) and \(Y\operatorname{mul}T\subset\operatorname{mul}T_{2}\). Now let \(\{0,h\}\in\operatorname{mul}T_{1}\subset\operatorname{mul}T\). Then \(X+Y=I\) gives
\[h-Xh=Yh\quad\text{with}\quad h-Xh\in\operatorname{mul}T_{1}\quad\text{and} \quad Yh\in\operatorname{mul}T_{2}.\]
By Lemma 3.5 it follows that \(h=Xh\) and thus \((\{0\}\times\operatorname{mul}T_{1})\subset XT\). Hence, by (3.13) one sees that \(T_{1}=XT\) and, likewise, \(T_{2}=YT\). Consequently, with \(K=Y\) one obtains a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) for which (3.10) and (3.11) hold.
Note that the nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) in (3.11) is uniquely determined if the relation \(T\) is minimal in the sense that \(\overline{\operatorname{ran}}\,T=\mathfrak{K}\); cf. [12]. In the case of decompositions of semibounded forms via representing maps the minimality may be assumed without loss of generality (see also [16]).
Let \(T\), \(T_{1}\), and \(T_{2}\) belong to \(\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and assume that (3.9) holds. Let the Hilbert space \(\mathfrak{K}\) have the orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}\oplus\mathfrak{Y}\), where \(\mathfrak{X}\) and \(\mathfrak{Y}\) are closed subspaces of \(\mathfrak{K}\). Then the corresponding nonnegative contractions \(X\) and \(Y\), which satisfy \(X+Y=I\), are orthogonal projections onto \(\mathfrak{X}\) and \(\mathfrak{Y}\). Clearly, the condition (a) of Definition 3.4 implies the condition (b). Therefore the following definition is natural.
**Definition 3.7**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and assume that \(\mathfrak{K}\) has an orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}\oplus\mathfrak{Y}\). Then the sum (3.9) is called an orthogonal sum decomposition of \(T\) connected with the orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}\oplus\mathfrak{Y}\) (or, equivalently, with the orthogonal projections \(I-P\) and \(P\)) if \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\)._
The characterization of orthogonal sum decompositions can be given as a corollary of Theorem 3.6; see [12], [15].
**Corollary 3.8**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) be a linear relation. Assume that \(P\in\mathbf{B}(\mathfrak{K})\) is an orthogonal projection which satisfies_
\[\operatorname{mul}T=(I-P)\operatorname{mul}T+P\operatorname{mul}T, \tag{3.14}\]
_and define_
\[T_{1}=(I-P)T\quad\text{and}\quad T_{2}=PT. \tag{3.15}\]
_Then the sum \(T=T_{1}+T_{2}\) in (3.9) is an orthogonal sum decomposition of \(T\), connected with the pair \(I-P\) and \(P\) in the sense of Definition 3.7._
_Conversely, let the sum \(T=T_{1}+T_{2}\) in (3.9) be an orthogonal sum decomposition of \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) in the sense of Definition 3.7. Then there exists an orthogonal projection \(P\in\mathbf{B}(\mathfrak{K})\) for which (3.14) and (3.15) are satisfied._
Let \(\mathfrak{K}\) be a Hilbert space and let \(X,Y\in\mathbf{B}(\mathfrak{K})\) be nonnegative contractions for which \(X+Y=I\). The operators \(X\) and \(Y\) induce Hilbert spaces \(\mathfrak{X}\) and \(\mathfrak{Y}\) for which \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\), and Hilbert spaces \(\mathfrak{K}_{1}\) and \(\mathfrak{K}_{2}\) for which \(\mathfrak{K}=\mathfrak{K}_{1}+\mathfrak{K}_{2}\), with overlapping spaces \(\mathfrak{X}\cap\mathfrak{Y}\) and \(\mathfrak{K}_{1}\cap\mathfrak{K}_{2}\), respectively. Then one has the inclusions
\[\left\{\begin{array}{l}\mathfrak{X}\cap\mathfrak{Y}=\operatorname{ran}X^{\frac{1}{2}}\cap\operatorname{ran}Y^{\frac{1}{2}}=\operatorname{ran}X^{\frac{1}{2}}Y^{\frac{1}{2}}\subset\overline{\operatorname{ran}}\,X^{\frac{1}{2}}Y^{\frac{1}{2}}=\overline{\operatorname{ran}}\,XY,\\ \operatorname{ran}X\cap\operatorname{ran}Y=\operatorname{ran}XY\subset\overline{\operatorname{ran}}\,XY=\overline{\operatorname{ran}}\,X\cap\overline{\operatorname{ran}}\,Y=\mathfrak{K}_{1}\cap\mathfrak{K}_{2},\end{array}\right.\]
see Section 2. For a linear relation \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) it has been shown in Corollary 3.3 that with \(T_{1}=XT\) and \(T_{2}=YT\) one has \(T=T_{1}+T_{2}\) if and only if the linear subspace \(\operatorname{mul}T\) is invariant under \(X\) or \(Y\). Under these circumstances it is clear that
\[\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\subset\operatorname{ran}X \cap\operatorname{ran}Y=\operatorname{ran}XY.\]
In order to give a characterization for the intersection \(\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\), it is convenient to introduce the maximal linear subspace \(\mathfrak{M}\) of \(\operatorname{ran}T\), which is mapped back into \(\operatorname{ran}T\) by \(X\) or by \(Y\):
\[\mathfrak{M}=\big{\{}\,\eta\in\operatorname{ran}T:\,X\eta\in\operatorname{ran }T\,\big{\}}=\big{\{}\,\eta\in\operatorname{ran}T:\,Y\eta\in\operatorname{ran} T\,\big{\}}. \tag{3.16}\]
Note that \(\operatorname{mul}T\subset\mathfrak{M}\) if \(T=T_{1}+T_{2}\).
**Theorem 3.9**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) have a decomposition \(T=T_{1}+T_{2}\), where \(T_{1}=XT\) and \(T_{2}=YT\) for some nonnegative contractions \(X,Y\in\mathbf{B}(\mathfrak{K})\) with \(X+Y=I\). Then \(\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\) is given by_
\[\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}=XY\,\mathfrak{M}, \tag{3.17}\]
_where \(\mathfrak{M}\) is given in (3.16). Consequently, the intersection \(\operatorname{ran}T_{1}\cap\,\operatorname{ran}T_{2}\) is nontrivial if and only if \(\mathfrak{M}\not\subset\ker\,XY\). In particular, if \(X\) or \(Y\) is an orthogonal projection, then \(\operatorname{ran}T_{1}\cap\,\operatorname{ran}T_{2}=\{0\}\)._
Proof.: For the inclusion \((\subset)\) in (3.17), assume that \(\omega\in\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\). Then for some \(\varphi,\psi\in\operatorname{ran}T\) one has
\[\omega=X\varphi=Y\psi. \tag{3.18}\]
This shows \(\psi=X\eta\), where \(\eta=\varphi+\psi\); hence, \(\eta\in\operatorname{ran}T\). Since \(\psi=X\eta\in\operatorname{ran}T\), one sees that \(\eta\in\mathfrak{M}\). Moreover, it follows from (3.18) that
\[\omega=Y\psi=YX\eta\in XY\,\mathfrak{M},\]
which gives \(\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\subset XY\,\mathfrak{M}\).
For the inclusion \((\supset)\) in (3.17), assume that \(\eta\in\mathfrak{M}\). Then, by (3.16), \(\eta\in\operatorname{ran}T\), \(X\eta\in\operatorname{ran}T\), and \(Y\eta\in\operatorname{ran}T\). It follows that \(XY\eta=X(Y\eta)\in X\operatorname{ran}T=\operatorname{ran}T_{1}\) and that \(XY\eta=Y(X\eta)\in Y\operatorname{ran}T=\operatorname{ran}T_{2}\). Therefore one sees that \(XY\eta\in\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\). Thus, \(XY\,\mathfrak{M}\subset\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\).
The final statement follows directly from the identity (3.17). In particular, if \(X\) or \(Y\) is an orthogonal projection then \(XY=0\).
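A two-dimensional toy computation (illustrative only; the helper `subspace_intersection` is an ad hoc name) shows both regimes of (3.17): when \(\operatorname{ran}T\) is not mapped back into itself by \(X\), the set \(\mathfrak{M}\), and hence the intersection, is trivial, while a surjective \(T\) gives \(\mathfrak{M}=\mathfrak{K}\) and the nontrivial overlap \(\operatorname{ran}XY\).

```python
import numpy as np
from scipy.linalg import null_space, orth

X = np.diag([0.25, 0.75]); Y = np.eye(2) - X    # not projections, so XY != 0

def subspace_intersection(A, B):
    # Intersection of the column spaces of A and B via the null space of [A, -B].
    N = null_space(np.hstack([A, -B]))
    return orth(A @ N[:A.shape[1]]) if N.size else np.zeros((A.shape[0], 0))

# Case 1: ran T = span{(1,1)} and X(1,1) is not in ran T, so M = {0}.
R = np.array([[1.], [1.]])
print(subspace_intersection(X @ R, Y @ R).shape)   # (2, 0): trivial intersection

# Case 2: ran T = R^2, so M = R^2 and ran T1 cap ran T2 = ran XY = R^2.
R2 = np.eye(2)
print(subspace_intersection(X @ R2, Y @ R2).shape) # (2, 2): nontrivial overlap
```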
There is a similar result for the intersection of \(\overline{{\rm ran}}\,T_{1}\cap\overline{{\rm ran}}\,T_{2}\) in the presence of a minimality condition.
**Lemma 3.10**.: _Let \(T\in{\bf L}(\mathfrak{H},\mathfrak{K})\) have a decomposition \(T=T_{1}+T_{2}\), where \(T_{1}=XT\) and \(T_{2}=YT\) for some nonnegative contractions \(X,Y\in{\bf B}(\mathfrak{K})\) with \(X+Y=I\). Assume in addition that \(\overline{{\rm ran}}\,T=\mathfrak{K}\). Then_
\[\overline{{\rm ran}}\,T_{1}\cap\overline{{\rm ran}}\,T_{2}=\overline{{\rm ran }}\,XY. \tag{3.19}\]
_Consequently, \(\overline{{\rm ran}}\,T_{1}\cap\overline{{\rm ran}}\,T_{2}=\{0\}\) if and only if \(X\) and \(Y\) are orthogonal projections._
Proof.: Assume \(\overline{{\rm ran}}\,T=\mathfrak{K}\). To see (3.19) observe the identities \(\overline{{\rm ran}}\,T_{1}=\overline{{\rm ran}}\,X\), \(\overline{{\rm ran}}\,T_{2}=\overline{{\rm ran}}\,Y\), and \(\overline{{\rm ran}}\,(XY)T=\overline{{\rm ran}}\,XY\). It remains to apply (2.8), which shows that \(\overline{{\rm ran}}\,X\cap\overline{{\rm ran}}\,Y=\overline{{\rm ran}}\,XY\). For the last statement, see also Lemma 2.1.
## 4. Closable and singular relations
Let \(T\in{\bf L}(\mathfrak{H},\mathfrak{K})\) be a linear operator or relation. The relation \(T\) is said to be _closable_ if the closure \(T^{**}\) of \(T\) is the graph of a linear operator; i.e., \({\rm mul}\,T^{**}=\{0\}\). Since \({\rm mul}\,T^{**}=({\rm dom}\,T^{*})^{\perp}\), it is clear that \(T\) is closable if and only if \({\rm dom}\,T^{*}\) is dense in \(\mathfrak{K}\). The relation \(T\) is said to be _singular_ if \(T^{**}={\rm dom}\,T^{**}\times{\rm ran}\,T^{**}\), in which case \({\rm dom}\,T^{**}\) and \({\rm ran}\,T^{**}\) are closed. Clearly \(T\) is singular if and only if \({\rm dom}\,T^{**}\subset{\rm ker}\,\,T^{**}\) or \({\rm ran}\,T^{**}\subset{\rm mul}\,T^{**}\). Equivalently, one sees that \(T^{*}\) is singular precisely when \(T^{*}={\rm dom}\,T^{*}\times{\rm mul}\,T^{*}\), which is equivalent to \({\rm dom}\,T^{*}\subset{\rm ker}\,\,T^{*}\) or \({\rm ran}\,T^{*}\subset{\rm mul}\,T^{*}\). In particular, \(T\) is singular if and only if \(T^{*}\) is singular. These characterizations can be found e.g. in [3], [12].
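For orientation, here is a standard folklore example of a densely defined singular operator (not taken from this paper; included only as an illustration of the definitions above). Let \(\mathfrak{H}=\ell^{2}\), let \(\mathfrak{K}=\mathbb{C}\), and define \(T\) on the finitely supported sequences by

\[Tx=\sum_{n\geq 1}n\,x_{n}.\]

If \(\{k,h\}\in T^{*}\subset\mathbb{C}\times\ell^{2}\), then \((Tx,k)_{\mathbb{C}}=(x,h)_{\ell^{2}}\) for all finitely supported \(x\), which forces \(h_{n}=nk\) for all \(n\); since \(h\in\ell^{2}\), this gives \(k=0\) and \(h=0\). Hence \(\operatorname{dom}T^{*}=\{0\}\) and \(T^{**}=(T^{*})^{*}=\ell^{2}\times\mathbb{C}=\operatorname{dom}T^{**}\times\operatorname{mul}T^{**}\), so that \(T\) is singular, although it is a densely defined operator.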
Let \(T\in{\bf L}(\mathfrak{H},\mathfrak{K})\) and let \(R\in{\bf B}(\mathfrak{K})\) be a nonnegative contraction. The interest is in properties of the product
\[RT=\big{\{}\{f,Rf^{\prime}\}:\,\{f,f^{\prime}\}\in T\big{\}},\]
so that \(RT\in{\bf L}(\mathfrak{H},\mathfrak{K})\) with \({\rm dom}\,RT={\rm dom}\,T\). Recall the general fact that
\[{\rm mul}\,RT=R\,{\rm mul}\,T.\]
In particular, \(RT\) is an operator if and only if
\[{\rm mul}\,T\subset{\rm ker}\,\,R. \tag{4.1}\]
Since \(R\in{\bf B}(\mathfrak{K})\) one has \((RT)^{*}=T^{*}R\) (cf. [3]) and this leads to the inclusions
\[RT^{**}\subset(RT)^{**}\quad\mbox{and}\quad R\,{\rm mul}\,T^{**}\subset{\rm mul }\,(RT)^{**}. \tag{4.2}\]
To proceed, some useful observations about the relation \((RT)^{*}=T^{*}R\) are needed. Define the linear subset \(\mathcal{D}\subset\overline{{\rm ran}}\,R\) by
\[\mathcal{D}=\big{\{}k\in\overline{{\rm ran}}\,R:\,\,Rk\in{\rm dom}\,T^{*} \big{\}}. \tag{4.3}\]
Then it is clear from the decomposition \(\mathfrak{K}={\rm ker}\,\,R\oplus\overline{{\rm ran}}\,R\) that
\[{\rm dom}\,T^{*}R=\big{\{}k\in\mathfrak{K}:\,\,Rk\in{\rm dom}\,T^{*}\big{\}}= {\rm ker}\,\,R\oplus\mathcal{D}. \tag{4.4}\]
It follows from (4.4) and the definition in (4.3), respectively, that
\[R({\rm dom}\,T^{*}R)=R\mathcal{D}={\rm ran}\,R\cap{\rm dom}\,T^{*}. \tag{4.5}\]
The next two lemmas give criteria for the relation \(RT\) to be closable or singular, respectively; see [12] for the case where \(R\) is an orthogonal projection. First the characterization of the closable case will be considered.
**Lemma 4.1**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and let \(R\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction. Then \(RT\) is closable if and only if_
\[\operatorname{clos}\left\{k\in\overline{\operatorname{ran}}\,R:\,Rk\in \operatorname{dom}T^{*}\right\}=\overline{\operatorname{ran}}\,R. \tag{4.6}\]
_Furthermore, if \(RT\) is closable, then_
\[\operatorname{clos}\left(\operatorname{ran}R\cap\operatorname{dom}T^{*} \right)=\overline{\operatorname{ran}}\,R, \tag{4.7}\]
_and, in particular,_
\[\operatorname{ran}R\subset\overline{\operatorname{dom}}\,T^{*}\quad\text{or, equivalently,}\quad\operatorname{mul}T^{**}\subset\ker\,R. \tag{4.8}\]
_If \(\operatorname{ran}R\) is closed, then the conditions (4.6) and (4.7) are equivalent. Moreover, if \(R\in\mathbf{B}(\mathfrak{K})\) is invertible, then \(RT\) is closable if and only if \(T\) is closable._
Proof.: Recall that \(RT\) is closable if and only if its adjoint \((RT)^{*}\) is densely defined. Thus it follows from \(\operatorname{dom}(RT)^{*}=\operatorname{dom}T^{*}R\) and (4.4) that \(RT\) is closable if and only if \(\mathcal{D}\) is dense in \(\overline{\operatorname{ran}}\,R\), i.e., if and only if (4.6) is satisfied.
Now assume that \(RT\) is closable, i.e., (4.6) holds. Then \(\mathcal{D}\) is dense in \(\overline{\operatorname{ran}}\,R\). As a consequence, also \(R\mathcal{D}\) is dense in \(\overline{\operatorname{ran}}\,R\). Thanks to (4.5) one sees that (4.7) holds.
The assertion \(\operatorname{mul}T^{**}\subset\ker\,R\) in (4.8) follows directly from (4.2). It is clearly equivalent to \(\operatorname{ran}R\subset\overline{\operatorname{dom}}\,T^{*}\). Both assertions can also be seen as consequences of the identity (4.7).
As to the last assertions, it suffices to show that (4.7) implies (4.6) if \(\operatorname{ran}R\) is closed. In this case \(R\) maps \(\overline{\operatorname{ran}}\,R\) bijectively onto itself and it follows from (4.5) that \(\mathcal{D}=R^{-1}(\operatorname{ran}R\cap\operatorname{dom}T^{*})\). Thus if \(\operatorname{ran}R\cap\operatorname{dom}T^{*}\) is dense in \(\overline{\operatorname{ran}}\,R\), then \(\mathcal{D}\) is dense in \(\overline{\operatorname{ran}}\,R\). Therefore, (4.7) implies (4.6). Finally, if \(R\) is invertible, then \(\operatorname{dom}T^{*}R=R^{-1}\operatorname{dom}T^{*}\) is dense if and only if \(\operatorname{dom}T^{*}\) is dense, which gives the last statement.
Note that in the special case when \(R\) is an orthogonal projection closability of \(RT\) was characterized in [12, Lemma 2.5, Lemma 3.4] via the condition (4.7).
**Corollary 4.2**.: _With \(T\) and \(R\) as in Lemma 4.1 the following statements hold:_
* _If_ \(RT\) _is closable and_ \(\operatorname{mul}T^{**}\cap\ker\,R=\{0\}\)_, then_ \(T\) _is closable._
* _If_ \(\operatorname{dom}T^{*}\) _is closed, then_ \(RT\) _is closable if and only if_ \(\operatorname{ran}R\subset\operatorname{dom}T^{*}\)_. In this case_ \((RT)^{**}\in\mathbf{B}(\overline{\operatorname{dom}}\,T,\mathfrak{K})\)_._
Proof.: (a) If \(RT\) is closable, then \(\operatorname{mul}T^{**}\subset\ker\,R\) by Lemma 4.1. An equivalent statement is \(\operatorname{mul}T^{**}=\operatorname{mul}T^{**}\cap\ker\,R\). Thus (a) is clear.
(b) Assume that \(\operatorname{dom}T^{*}\) is closed. If \(RT\) is closable, then \(\operatorname{ran}R\subset\operatorname{dom}T^{*}\) by Lemma 4.1. Conversely, if \(\operatorname{ran}R\subset\operatorname{dom}T^{*}\) then \(\operatorname{dom}T^{*}R=\mathfrak{K}\) and \(RT\) is closable. Since \(\operatorname{dom}T^{*}R=\mathfrak{K}\) and \((RT)^{**}=(T^{*}R)^{*}\), the domain of \((RT)^{**}\) is closed (see [3]) and hence equal to \(\overline{\operatorname{dom}}\,T\). Thus, \((RT)^{**}\in\mathbf{B}(\overline{\operatorname{dom}}\,T,\mathfrak{K})\) by the closed graph theorem.
Next the characterization of the singular case will be considered.
**Lemma 4.3**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and let \(R\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction. Then \(RT\) is singular if and only if_
\[\operatorname{ran}R\cap\operatorname{dom}T^{*}\subset\ker\,T^{*}. \tag{4.9}\]
_If \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) has a dense range, then \(RT\) is singular if and only if_
\[\operatorname{ran}R\cap\operatorname{dom}T^{*}=\{0\}.\]
Proof.: Recall that \(RT\) is singular if and only if the adjoint \((RT)^{*}=T^{*}R\) is singular or, equivalently,
\[\operatorname{dom}T^{*}R\subset\ker\,T^{*}R. \tag{4.10}\]
Since by (4.5) one has \(R(\operatorname{dom}T^{*}R)=R\mathcal{D}\), the condition (4.10) holds if and only if \(R\mathcal{D}\subset\ker\,T^{*}\). By (4.5) this is equivalent to (4.9).
**Remark 4.4**.: The characterization of closability in Lemma 4.1 has an alternative formulation. If the relation \(RT\) is closable then \(\operatorname{dom}T^{*}R\) is dense, which implies \(\operatorname{mul}T^{**}\subset\ker\,R\) (cf. (4.2)), and then
\[\operatorname{dom}T^{*}R =\{k\in\mathfrak{K}:\,Rk\in\operatorname{dom}T^{*}\}\] \[=\operatorname{mul}T^{**}\oplus\{k\in\overline{\operatorname{ dom}}\,T^{*}:\,Rk\in\operatorname{dom}T^{*}\},\]
where now the orthogonal decomposition \(\mathfrak{K}=\overline{\operatorname{dom}}\,T^{*}\oplus\operatorname{mul}T^{**}\) is used. It is easily seen that the closability of \(RT\) is equivalent to
\[\left\{\begin{array}{l}\operatorname{mul}T^{**}\subset\ker\,R,\\ \operatorname{clos}\,\{k\in\overline{\operatorname{dom}}\,T^{*}:\,Rk\in \operatorname{dom}T^{*}\}=\overline{\operatorname{dom}}\,T^{*}.\end{array}\right.\]
## 5. Pseudo-orthogonal Lebesgue type decompositions
In this section the general notion of a pseudo-orthogonal Lebesgue type decomposition for linear operators or relations is developed. In [12] the Lebesgue type decompositions of a linear relation \(T\) were always orthogonal. The new notion allows a nontrivial intersection of the components; cf. Theorem 3.9.
**Definition 5.1**.: _Let the relations \(T\), \(T_{1}\), and \(T_{2}\) belong to \(\mathbf{L}(\mathfrak{H},\mathfrak{K})\). Then the sum decomposition_
\[T=T_{1}+T_{2}\quad\text{with}\quad\operatorname{dom}T=\operatorname{dom}T_{1}= \operatorname{dom}T_{2}, \tag{5.1}\]
_is called a pseudo-orthogonal Lebesgue type decomposition if it is a pseudo-orthogonal decomposition as in Definition 3.4, such that \(T_{1}\) is closable and \(T_{2}\) is singular._
The following characterization of pseudo-orthogonal Lebesgue type decompositions is a straightforward consequence of Theorem 3.6, Lemma 4.1, and Lemma 4.3. Note that now the condition (3.10) is automatically satisfied.
**Theorem 5.2**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) be a linear relation. Assume that \(K\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction which satisfies_
\[\operatorname{clos}\left\{k\in\overline{\operatorname{ran}}\,(I-K):\,(I-K)k \in\operatorname{dom}T^{*}\right\}=\overline{\operatorname{ran}}\,(I-K), \tag{5.2}\]
\[\operatorname{ran}K\cap\operatorname{dom}T^{*}\subset\ker\,T^{*}, \tag{5.3}\]
_and define_
\[T_{1}=(I-K)T\quad\text{and}\quad T_{2}=KT. \tag{5.4}\]
_Then the sum \(T=T_{1}+T_{2}\) as in (5.1) is a pseudo-orthogonal Lebesgue type decomposition of \(T\), connected with the pair \(I-K\) and \(K\) in the sense of Definition 5.1._
_Conversely, let the sum \(T=T_{1}+T_{2}\) in (5.1) be a pseudo-orthogonal Lebesgue type decomposition of \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) in the sense of Definition 5.1. Then there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) such that (5.2), (5.3), and (5.4) are satisfied._
Proof.: Let \(K\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction and assume that (5.2) and (5.3) hold. Then \(T_{1}=(I-K)T\) is a closable operator and \(T_{2}=KT\) is a singular relation by Lemma 4.1 and Lemma 4.3. Hence \(\operatorname{mul}T_{1}=\{0\}\), so that (3.10) is satisfied. By Theorem 3.6, \(T=T_{1}+T_{2}\) is a pseudo-orthogonal decomposition, which is a pseudo-orthogonal Lebesgue type decomposition according to Definition 5.1.
Conversely, let \(T=T_{1}+T_{2}\) be a pseudo-orthogonal Lebesgue type decomposition. Hence, by definition it is a pseudo-orthogonal decomposition, where \(T_{1}\) is closable and \(T_{2}\) is singular. According to Theorem 3.6 there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) for which the identities in (3.10) (trivially, since \(\operatorname{mul}T_{1}=\{0\}\)) and (5.4) hold. In fact, by Lemma 4.1 and Lemma 4.3 the assertions in (5.2) and (5.3) follow.
The sum decomposition (5.1) in Definition 5.1 is said to be an _orthogonal Lebesgue type decomposition_ if it is an orthogonal decomposition as in Definition 3.7, such that \(T_{1}\) is closable and \(T_{2}\) singular. Hence, the following characterization of orthogonal Lebesgue type decompositions is a direct consequence of Theorem 5.2, Lemma 4.1, and Lemma 4.3; cf. [12].
**Corollary 5.3**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) be a linear relation. Assume that \(P\in\mathbf{B}(\mathfrak{K})\) is an orthogonal projection which satisfies_
\[\operatorname{clos}\left(\ker\,P\cap\operatorname{dom}T^{*}\right)=\ker\,P, \tag{5.5}\]
\[\operatorname{ran}P\cap\operatorname{dom}T^{*}\subset\ker\,T^{*}, \tag{5.6}\]
_and define_
\[T_{1}=(I-P)T\quad\text{and}\quad T_{2}=PT. \tag{5.7}\]
_Then the sum \(T=T_{1}+T_{2}\) as in (5.1) is an orthogonal Lebesgue type decomposition of \(T\), connected with the pair \(I-P\) and \(P\)._
_Conversely, let the sum \(T=T_{1}+T_{2}\) in (5.1) be an orthogonal Lebesgue type decomposition of \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\). Then there exists an orthogonal projection \(P\in\mathbf{B}(\mathfrak{K})\) such that (5.5), (5.6), and (5.7) are satisfied._
Let \(P_{0}\) stand for the orthogonal projection onto \(\operatorname{mul}T^{**}\). Then the conditions (5.5) and (5.6) in Corollary 5.3 are satisfied: indeed, \(\ker\,P_{0}=\overline{\operatorname{dom}}\,T^{*}\), and \(\operatorname{ran}P_{0}\cap\operatorname{dom}T^{*}=\operatorname{mul}T^{**}\cap\operatorname{dom}T^{*}=\{0\}\), since \(\operatorname{mul}T^{**}=(\operatorname{dom}T^{*})^{\perp}\). It follows that
\[T=T_{\operatorname{reg}}+T_{\operatorname{sing}},\quad T_{\operatorname{reg}}= (I-P_{0})T,\quad T_{\operatorname{sing}}=P_{0}T, \tag{5.8}\]
is an orthogonal Lebesgue type decomposition of \(T\). Here the _regular part_\(T_{\operatorname{reg}}\) is closable and the _singular part_\(T_{\operatorname{sing}}\) is singular. The decomposition in (5.8) is called the _Lebesgue decomposition_ of \(T\); cf. [13, 21, 25, 26, 27]. The Lebesgue decomposition in (5.8) shows the existence of Lebesgue type decompositions of \(T\). Note that \(T_{\operatorname{reg}}\) is bounded if and only if \(\operatorname{dom}T^{*}\) is closed; cf. [12].
Among all Lebesgue type decomposition of a linear relation \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) the Lebesgue decomposition in (5.8) is distinguished by the maximality property of its regular part \(T_{\operatorname{reg}}\). Recall that for linear relations \(S_{1}\) and \(S_{2}\) from \(\mathfrak{H}\) to \(\mathfrak{K}\) one says that \(S_{1}\) is _dominated_ (_contractively dominated_) by \(S_{2}\), in notation \(S_{1}\prec S_{2}\) (\(S_{1}\prec_{c}S_{2}\)), if \(CS_{2}\subset S_{1}\) for some bounded (contractive) operator \(C\in\mathbf{B}(\mathfrak{K})\). When \(S_{1}\) and
\(S_{2}\) are operators this is equivalent to \(\operatorname{dom}S_{2}\subset\operatorname{dom}S_{1}\) and \(\|S_{1}h\|\leq c\|S_{2}h\|\) for all \(h\in\operatorname{dom}S_{2}\) for some \(0<c\) (\(0<c\leq 1\)); cf. [14] and [12, Definition 8.1, Lemma 8.2]. The next result is a strengthening of the maximality property established earlier for orthogonal Lebesgue type decompositions in [12] to the wider setting of pseudo-orthogonal Lebesgue type decompositions of \(T\).
**Theorem 5.4**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and let \(T=T_{1}+T_{2}\) be a pseudo-orthogonal Lebesgue type decomposition of \(T\). Then_
\[T_{1}\prec_{c}T_{\operatorname{reg}},\]
_that is, the regular part \(T_{\operatorname{reg}}\) of the Lebesgue decomposition is the maximal closable part of \(T\), in the sense of domination, among all pseudo-orthogonal Lebesgue type decompositions of \(T\)._
Proof.: In \(T=T_{1}+T_{2}\) one has \(T_{1}=(I-K)T\) for a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\), and note that \(I-K\) is also a nonnegative contraction. Hence one concludes \(T_{1}\prec_{c}T\). This domination is preserved by their regular parts, see [12, Theorem 8.3]. Since \(T_{1}\) is closable, it is equal to its regular part and it follows that \(T_{1}\prec_{c}T_{\operatorname{reg}}\).
For a further consideration of Lebesgue type decompositions the class of contractions in \(\mathbf{B}(\mathfrak{K})\) will now be restricted to contractions of the form
\[K=\begin{pmatrix}I&0\\ 0&G\end{pmatrix}:\begin{pmatrix}\operatorname{mul}{T^{**}}\\ \overline{\operatorname{dom}}\,T^{*}\end{pmatrix}\to\begin{pmatrix}\operatorname{mul}{T^{**}}\\ \overline{\operatorname{dom}}\,T^{*}\end{pmatrix}, \tag{5.9}\]
where \(G\in\mathbf{B}(\overline{\operatorname{dom}}\,T^{*})\) is a nonnegative contraction. It follows from Theorem 5.2 that \(K\) in (5.9) satisfies (5.2) if and only if
\[\operatorname{clos}\big{\{}k\in\overline{\operatorname{ran}}\,(I-G):\,(I-G)k \in\operatorname{dom}T^{*}\big{\}}=\overline{\operatorname{ran}}\,(I-G), \tag{5.10}\]
and \(K\) satisfies (5.3) if and only if
\[\operatorname{ran}G\cap\operatorname{dom}T^{*}\subset\operatorname{ker}\,T^{ *}. \tag{5.11}\]
Conversely, any \(G\in\mathbf{B}(\overline{\operatorname{dom}}\,T^{*})\) with the properties (5.10) and (5.11) gives via (5.9) a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) as in Theorem 5.2. The case \(G=0\) corresponds to \(K=P_{0}\), the orthogonal projection onto \(\operatorname{mul}{T^{**}}\), and gives the Lebesgue decomposition, while any orthogonal Lebesgue type decomposition corresponds via (5.9) to an orthogonal projection \(G\) that satisfies (5.10) and (5.11).
Now assume that \(\operatorname{dom}T^{*}\) is not closed. Let \(\mathfrak{L}\subset\overline{\operatorname{dom}}\,T^{*}\setminus\operatorname {dom}T^{*}\) be a closed linear subspace of \(\overline{\operatorname{dom}}\,T^{*}\) and decompose this space accordingly:
\[\overline{\operatorname{dom}}\,T^{*}=(\overline{\operatorname{dom}}\,T^{*} \ominus\mathfrak{L})\oplus\mathfrak{L}.\]
This decomposition will be used in the lemma below. As to the existence of such subspaces \(\mathfrak{L}\), recall that \(\operatorname{dom}T^{*}\) is an operator range space. Hence one has \(\dim(\overline{\operatorname{dom}}\,T^{*}\setminus\operatorname{dom}T^{*})=\infty\); see [9, Corollary to Theorem 2.3]. Therefore one may choose for any \(n\in\mathbb{N}\) an \(n\)-dimensional linear subspace \(\mathfrak{L}\subset\overline{\operatorname{dom}}\,T^{*}\setminus\operatorname {dom}T^{*}\). The following lemmas describe special classes of nonnegative contractions \(K\in\mathbf{B}(\mathfrak{K})\) that illustrate several features discussed earlier.
**Lemma 5.5**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and assume that \(\operatorname{dom}T^{*}\) is not closed. Let \(\mathfrak{L}\) be a nontrivial closed linear subspace of \(\overline{\operatorname{dom}}\,T^{*}\setminus\operatorname{dom}T^{*}\). Let \(H\in\mathbf{B}(\mathfrak{L})\) be a
nonnegative contraction. Then the operator \(G\) defined by_
\[G=\begin{pmatrix}0&0\\ 0&H\end{pmatrix}:\begin{pmatrix}\overline{\operatorname{dom}}\,T^{*}\ominus \mathfrak{L}\\ \mathfrak{L}\end{pmatrix}\to\begin{pmatrix}\overline{\operatorname{dom}}\,T^{*} \ominus\mathfrak{L}\\ \mathfrak{L}\end{pmatrix}, \tag{5.12}\]
_belongs to \(\mathbf{B}(\overline{\operatorname{dom}}\,T^{*})\) and satisfies the condition (5.11). Assume in addition that \((I-H)^{-1}\in\mathbf{B}(\mathfrak{L})\); then the operator \(G\) in (5.12) satisfies the condition (5.10). Hence \(K\) in (5.9) satisfies the conditions (5.2) and (5.3). Consequently, the sum \(T=T_{1}+T_{2}\) with (5.4) is a pseudo-orthogonal Lebesgue type decomposition of \(T\)._
Proof.: Let \(G\) be as in (5.12). Then \(\operatorname{ran}G\subset\mathfrak{L}\), so that \(\operatorname{ran}G\cap\operatorname{dom}T^{*}=\{0\}\) by the definition of \(\mathfrak{L}\). Hence, the condition (5.11) is automatically satisfied. Furthermore, one sees from the condition \((I-H)^{-1}\in\mathbf{B}(\mathfrak{L})\) that \(I-G\in\mathbf{B}(\overline{\operatorname{dom}}\,T^{*})\) is invertible. Thus the linear space \((I-G)^{-1}\operatorname{dom}T^{*}\) is dense in \(\overline{\operatorname{dom}}\,T^{*}\). Therefore (5.10) is satisfied if \((I-H)^{-1}\in\mathbf{B}(\mathfrak{L})\).
The next lemma goes back to [12].
**Lemma 5.6**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\) and assume that \(\operatorname{dom}T^{*}\) is not closed. Let \(\mathfrak{L}\) be a finite-dimensional linear subspace of \(\overline{\operatorname{dom}}\,T^{*}\setminus\operatorname{dom}T^{*}\). Let \(H\in\mathbf{B}(\mathfrak{L})\) be an orthogonal projection. Then the operator \(K\in\mathbf{B}(\mathfrak{K})\), defined by (5.9) and (5.12), is an orthogonal projection \(K=P\) which satisfies the conditions (5.5) and (5.6). Consequently, the sum \(T=T_{1}+T_{2}\) with (5.7) is an orthogonal Lebesgue type decomposition of \(T\)._
Lemma 5.5 and Lemma 5.6 answer questions about the existence of Lebesgue type decompositions, different from the Lebesgue decomposition: when \(\operatorname{dom}T^{*}\) is not closed there are infinitely many different Lebesgue type decompositions of \(T\), both pseudo-orthogonal and orthogonal. A necessary and sufficient condition for the uniqueness of the Lebesgue decomposition among all pseudo-orthogonal Lebesgue type decompositions (thus including the orthogonal ones) is given in the next theorem.
**Theorem 5.7**.: _Let \(T\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\), then the following statements are equivalent:_
* _the Lebesgue decomposition of_ \(T\) _is the only pseudo-orthogonal Lebesgue type decomposition of_ \(T\)_;_
* \(\operatorname{dom}T^{*}\) _is closed._
Proof.: (i) \(\Rightarrow\) (ii) Assume that \(\operatorname{dom}T^{*}\) is not closed. According to Lemma 5.5 and Lemma 5.6 there exist Lebesgue type decompositions of \(T\), pseudo-orthogonal as well as orthogonal, which differ from the Lebesgue decomposition. This contradicts (i), and hence (ii) follows.
(ii) \(\Rightarrow\) (i) Assume that \(\operatorname{dom}T^{*}\) is closed. Let \(T=(I-K)T+KT\) be a Lebesgue type decomposition as in (5.4), where \(K\) is a nonnegative contraction; cf. Corollary 4.2. Then with the convention (5.9) one has \(\operatorname{ran}G\subset\operatorname{dom}T^{*}\), which, combined with (5.11), leads to \(\operatorname{ran}G\subset\ker T^{*}\) or, equivalently, \(\operatorname{ran}T\subset\ker G\). It follows from (5.9) that
\[K-P_{0}=\begin{pmatrix}0&0\\ 0&G\end{pmatrix}.\]
Therefore the following identities
\[(I-K)T=(I-P_{0})T=T_{\operatorname{reg}}\quad\text{and}\qquad KT=P_{0}T=T_{ \operatorname{sing}}\]
are clear. Thus the pseudo-orthogonal decomposition of \(T\) corresponding to \(K\) coincides with the Lebesgue decomposition.
Theorem 5.7 is a strengthening of the corresponding result in [12] from the case of orthogonal Lebesgue type decompositions to the case of pseudo-orthogonal Lebesgue type decompositions. The uniqueness condition in (ii) is equivalent to the condition that the operator \(T_{\mathrm{reg}}\) is bounded, see [12]. To see this equivalence, recall that \(\mathrm{dom}\,T^{*}\) is closed if and only if \(\mathrm{dom}\,T^{**}\) is closed, while
\[\mathrm{dom}\,T^{**}=\mathrm{dom}\,(T^{**})_{\mathrm{reg}}=\mathrm{dom}\,(T_{ \mathrm{reg}})^{**}.\]
The original statement of such a uniqueness result in the setting of pairs of nonnegative bounded operators goes back to Ando [1]. In [33] there is an extensive treatment of the uniqueness question in the context of forms, including a list of the relevant literature.
Finally, it should be observed that Lemma 5.5 provides some concrete examples for nontrivial intersection of the components in a Lebesgue type decomposition.
**Corollary 5.8**.: _Assume the conditions in Lemma 5.5 and let \(T=T_{1}+T_{2}\) be the corresponding Lebesgue type decomposition. Then the following statements hold._
* _The intersection of_ \(\mathrm{ran}\,T_{1}\) _and_ \(\mathrm{ran}\,T_{2}\) _satisfies_ (5.13) \[\mathrm{ran}\,T_{1}\cap\mathrm{ran}\,T_{2}=\{0\}_{\mathfrak{K}\ominus\mathfrak{L}}\oplus\ H(I-H)P_{\mathfrak{L}}\mathfrak{M},\] _where_ \(\mathfrak{M}\) _is given by (3.16) and_ \(P_{\mathfrak{L}}\) _is the orthogonal projection onto_ \(\mathfrak{L}\)_._
* _If_ \(\overline{\mathrm{ran}}\,T=\mathfrak{K}\)_, then the intersection of_ \(\overline{\mathrm{ran}}\,T_{1}\) _and_ \(\overline{\mathrm{ran}}\,T_{2}\) _satisfies_ (5.14) \[\overline{\mathrm{ran}}\,T_{1}\cap\overline{\mathrm{ran}}\,T_{2}=\{0\}_{\mathfrak{K}\ominus\mathfrak{L}}\oplus\overline{\mathrm{ran}}\,H.\] _Consequently, if_ \(H\neq 0\) _then_ \(\overline{\mathrm{ran}}\,T_{1}\cap\overline{\mathrm{ran}}\,T_{2}\neq\{0\}\)_._
Proof.: First observe with the matrix representations in (5.9) and (5.12) that
\[(I-K)K=\begin{pmatrix}0&0\\ 0&(I-G)G\end{pmatrix}\quad\text{and}\quad(I-G)G=\begin{pmatrix}0&0\\ 0&(I-H)H\end{pmatrix}. \tag{5.15}\]
(a) The description (5.13) is obtained directly from Theorem 3.9 by using the block formulas in (5.15).
(b) By assumption \(I-H\) is surjective and hence \(\overline{\mathrm{ran}}\,(I-H)H=\overline{\mathrm{ran}}\,H\). Since \(T\) is minimal, the statement in (5.14) follows from Lemma 3.10 again by means of (5.15).
## 6. Pairs of bounded linear operators
Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\) be bounded linear operators. With these operators one associates the linear relation \(L(\Phi,\Psi)\in\mathbf{L}(\mathfrak{H},\mathfrak{K})\), defined by
\[L(\Phi,\Psi)=\big\{\,\{\Phi\eta,\Psi\eta\}:\,\eta\in\mathfrak{E}\,\big\}, \tag{6.1}\]
so that \(L(\Phi,\Psi)\) is an operator range relation in the sense of [3], [15]. It follows directly from the definition of \(L(\Phi,\Psi)\) that its domain and range are given by
\[\mathrm{dom}\,L(\Phi,\Psi)=\mathrm{ran}\,\Phi\quad\text{and}\quad\mathrm{ran} \,L(\Phi,\Psi)=\mathrm{ran}\,\Psi, \tag{6.2}\]
while its kernel and multivalued part are given by
\[\mathrm{ker}\,\,L(\Phi,\Psi)=\Phi(\mathrm{ker}\,\,\Psi),\quad\mathrm{mul} \,L(\Phi,\Psi)=\Psi(\mathrm{ker}\,\,\Phi). \tag{6.3}\]
This section gives a brief overview of the decompositions \(\Psi=\Psi_{1}+\Psi_{2}\) with respect to \(\Phi\), with bounded operators \(\Psi_{1}\) and \(\Psi_{2}\), in the present context of a pseudo-orthogonal decomposition of the space \(\mathfrak{K}\), allowing interaction between the components as in Sections 3 and 4. These decompositions of \(\Psi\) with respect to \(\Phi\) will be obtained via the corresponding decompositions of the associated linear relation \(L(\Phi,\Psi)\); for the orthogonal case, see [15].
The present interest is in sums \(\Psi=\Psi_{1}+\Psi_{2}\) and their interplay with the corresponding linear relations \(L(\Phi,\Psi_{1}+\Psi_{2})\).
**Lemma 6.1**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\), \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\), and assume that \(\Psi=\Psi_{1}+\Psi_{2}\) where \(\Psi_{1},\Psi_{2}\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Then there is domain equality_
\[\operatorname{dom}L(\Phi,\Psi)=\operatorname{dom}L(\Phi,\Psi_{1})= \operatorname{dom}L(\Phi,\Psi_{2}),\]
_and inclusion of the relations_
\[L(\Phi,\Psi)\subset L(\Phi,\Psi_{1})+L(\Phi,\Psi_{2}). \tag{6.4}\]
_Moreover, there is equality in (6.4):_
\[L(\Phi,\Psi)=L(\Phi,\Psi_{1})+L(\Phi,\Psi_{2}) \tag{6.5}\]
_if and only if_
\[\Psi(\ker\ \Phi)=\Psi_{1}(\ker\ \Phi)+\Psi_{2}(\ker\ \Phi). \tag{6.6}\]
_The sum in (6.5) is strict (i.e., the sum in (6.6) is direct) if and only if_
\[\Psi_{1}(\ker\ \Phi)\cap\Psi_{2}(\ker\ \Phi)=\{0\}. \tag{6.7}\]
Proof.: From the definition of the relation \(L(\Phi,\Psi)\) in (6.1) it is clear that
\[\operatorname{dom}L(\Phi,\Psi)=\operatorname{dom}L(\Phi,\Psi_{1})= \operatorname{dom}L(\Phi,\Psi_{2})=\operatorname{ran}\Phi,\]
see (6.2). Furthermore, it follows from the definition of the sum in (3.1) that (6.4) holds. Now recall from Lemma 3.2 that there is equality in (6.4) if and only if
\[\operatorname{mul}L(\Phi,\Psi)=\operatorname{mul}L(\Phi,\Psi_{1})+ \operatorname{mul}L(\Phi,\Psi_{2}),\]
which is clearly equivalent to (6.6); cf. (6.3).
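To see that the inclusion (6.4) may indeed be strict, the following finite-dimensional example may be helpful (it is added here for illustration and is not taken from the sources cited above). Take \(\mathfrak{E}=\mathfrak{H}=\mathfrak{K}=\mathbb{C}^{2}\) and

\[\Phi=0,\qquad\Psi=\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\qquad\Psi_{1}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\qquad\Psi_{2}=\begin{pmatrix}0&0\\ 0&-1\end{pmatrix}.\]

Then \(\Psi=\Psi_{1}+\Psi_{2}\) and \(\ker\,\Phi=\mathbb{C}^{2}\), so that \(\Psi(\ker\,\Phi)\) is one-dimensional, while \(\Psi_{1}(\ker\,\Phi)+\Psi_{2}(\ker\,\Phi)=\mathbb{C}^{2}\). Hence (6.6) fails, and the inclusion (6.4) is strict.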
**Remark 6.2**.: Let \(\mathfrak{K}\) have a pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\) with associated nonnegative contractions \(X\) and \(Y\) such that \(X+Y=I\). Then by Definition 3.4 the decomposition (6.5) of the relation \(L(\Phi,\Psi)\) is pseudo-orthogonal if and only if
* \(\operatorname{ran}\Psi_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}\Psi_{2}\subset\mathfrak{Y}\);
* for each \(\eta\in\mathfrak{E}\) there exist elements \(\eta^{\prime},\eta^{\prime\prime}\in\mathfrak{E}\) with \(\Phi\eta=\Phi\eta^{\prime}=\Phi\eta^{\prime\prime}\) and \(\Psi\eta=\Psi_{1}\eta^{\prime}+\Psi_{2}\eta^{\prime\prime}\) for which \[\|\Psi\eta\|_{\mathfrak{K}}^{2}=\|\Psi_{1}\eta^{\prime}\|_{\mathfrak{X}}^{2}+ \|\Psi_{2}\eta^{\prime\prime}\|_{\mathfrak{Y}}^{2}.\]
Note that (b) implies \(\eta-\eta^{\prime},\eta-\eta^{\prime\prime}\in\ker\ \Phi\) and \(\Psi_{1}\eta^{\prime}+\Psi_{2}\eta^{\prime\prime}=\Psi\eta=\Psi_{1}\eta+\Psi_{ 2}\eta\), so that
\[\Psi_{1}(\eta^{\prime}-\eta)=\Psi_{2}(\eta-\eta^{\prime\prime}). \tag{6.8}\]
By Lemma 3.5 the sum in (6.5) is strict; thus one has (6.7). Therefore (6.8) gives that \(\Psi_{1}\eta^{\prime}=\Psi_{1}\eta\) and \(\Psi_{2}\eta^{\prime\prime}=\Psi_{2}\eta\). Hence (b) implies that
\[\|\Psi\eta\|_{\mathfrak{K}}^{2}=\|\Psi_{1}\eta\|_{\mathfrak{X}}^{2}+\|\Psi_{2 }\eta\|_{\mathfrak{Y}}^{2},\quad\eta\in\mathfrak{E}. \tag{6.9}\]
Note that if (6.9) is satisfied, then (b) holds automatically. Thus the conditions (b) and (6.9) are equivalent.
**Definition 6.3**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and let \(\Psi\), \(\Psi_{1}\), and \(\Psi_{2}\) be bounded linear operators in \(\mathbf{B}(\mathfrak{E},\mathfrak{K})\), and assume that \(\Psi\) has the decomposition_
\[\Psi=\Psi_{1}+\Psi_{2}\quad\text{with}\quad\Psi(\ker\,\Phi)=\Psi_{1}(\ker\, \Phi)+\Psi_{2}(\ker\,\Phi),\quad\text{direct sum}. \tag{6.10}\]
_Let \(\mathfrak{K}\) have a pseudo-orthogonal decomposition \(\mathfrak{K}=\mathfrak{X}+\mathfrak{Y}\) with associated nonnegative contractions \(X\) and \(Y\) such that \(X+Y=I\). Then the decomposition (6.10) of \(\Psi\) with respect to \(\Phi\) is called pseudo-orthogonal if \(\Phi\), \(\Psi\), \(\Psi_{1}\), and \(\Psi_{2}\) satisfy the conditions_
1. \(\operatorname{ran}\Psi_{1}\subset\mathfrak{X}\) _and_ \(\operatorname{ran}\Psi_{2}\subset\mathfrak{Y}\)_;_
2. \(\|\Psi\eta\|_{\mathfrak{K}}^{2}=\|\Psi_{1}\eta\|_{\mathfrak{X}}^{2}+\|\Psi_{2 }\eta\|_{\mathfrak{Y}}^{2}\) _for all_ \(\eta\in\mathfrak{E}\)_._
It is clear from Remark 6.2 that the decomposition (6.10) of \(\Psi\) with respect to \(\Phi\) is pseudo-orthogonal if and only if the corresponding operator range relation \(L(\Phi,\Psi)\) in (6.1) is pseudo-orthogonal. The pseudo-orthogonal decompositions of \(\Psi\) with respect to \(\Phi\) in Definition 6.3 can now be characterized by means of nonnegative contractions in \(\mathbf{B}(\mathfrak{K})\).
**Theorem 6.4**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Assume that \(K\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction which satisfies_
\[\Psi(\ker\,\Phi)=(I-K)\Psi(\ker\,\Phi)+K\Psi(\ker\,\Phi),\quad\text{direct sum}, \tag{6.11}\]
_and define_
\[\Psi_{1}=(I-K)\Psi\quad\text{and}\quad\Psi_{2}=K\Psi. \tag{6.12}\]
_Then the sum \(\Psi=\Psi_{1}+\Psi_{2}\) as in (6.10) is a pseudo-orthogonal decomposition of \(\Psi\) with respect to \(\Phi\), connected with the pair \(I-K\) and \(K\) in the sense of Definition 6.3._
_Conversely, let \(\Psi=\Psi_{1}+\Psi_{2}\) in (6.10) be a pseudo-orthogonal decomposition of \(\Psi\) with respect to \(\Phi\) in the sense of Definition 6.3. Then there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) such that (6.11) and (6.12) are satisfied._
Proof.: Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\), \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\), and let \(K\in\mathbf{B}(\mathfrak{K})\) be a nonnegative contraction. Define the operators \(\Psi_{1},\Psi_{2}\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\) by (6.12), so that \(\Psi=\Psi_{1}+\Psi_{2}\). Let \(\mathfrak{X}\) and \(\mathfrak{Y}\) be the pair of complemented operator range spaces, contractively contained in \(\mathfrak{K}\), associated with the nonnegative contractions \(X=I-K\) and \(Y=K\). By definition
\[\operatorname{ran}\Psi_{1}=\operatorname{ran}(I-K)\Psi\subset\mathfrak{X} \quad\text{and}\quad\operatorname{ran}\Psi_{2}=\operatorname{ran}K\Psi\subset \mathfrak{Y},\]
so that condition (a) in Definition 6.3 is satisfied. To see (b) in Definition 6.3 observe that
\[\|\Psi_{1}\eta\|_{\mathfrak{X}}^{2}+\|\Psi_{2}\eta\|_{\mathfrak{Y }}^{2} =\|(I-K)\Psi\eta\|_{\mathfrak{X}}^{2}+\|K\Psi\eta\|_{\mathfrak{Y}}^{2}\] \[=((I-K)\Psi\eta,\Psi\eta)_{\mathfrak{K}}+(K\Psi\eta,\Psi\eta)_{ \mathfrak{K}}=\|\Psi\eta\|_{\mathfrak{K}}^{2},\]
so that condition (b) in Remark 6.2 is satisfied. Hence, \(\Psi=\Psi_{1}+\Psi_{2}\) is a pseudo-orthogonal decomposition of \(\Psi\) with respect to \(\Phi\) in the sense of Definition 6.3.
Conversely, assume that \(\Psi=\Psi_{1}+\Psi_{2}\) is a pseudo-orthogonal decomposition with respect to \(\Phi\) as in Definition 6.3. Then \(L(\Phi,\Psi)\) in (6.1) has a pseudo-orthogonal decomposition of the form
\[L(\Phi,\Psi)=L(\Phi,\Psi_{1})+L(\Phi,\Psi_{2}).\]
cf. Remark 6.2. Therefore, by Theorem 3.6 there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) such that
\[L(\Phi,\Psi_{1})=(I-K)L(\Phi,\Psi)\quad\text{and}\quad L(\Phi,\Psi_{2})=KL(\Phi, \Psi),\]
which reads
\[L(\Phi,\Psi_{1})=L(\Phi,(I-K)\Psi)\quad\text{and}\quad L(\Phi,\Psi_{2})=L(\Phi, K\Psi). \tag{6.13}\]
To verify the identities in (6.12) let \(\eta\in\mathfrak{E}\). Then due to (6.13) there exist \(\eta^{\prime},\eta^{\prime\prime}\in\mathfrak{E}\) such that \(\Phi\eta=\Phi\eta^{\prime}=\Phi\eta^{\prime\prime}\), while \((I-K)\Psi\eta=\Psi_{1}\eta^{\prime}\) and \(K\Psi\eta=\Psi_{2}\eta^{\prime\prime}\). From \(\Psi_{1}\eta^{\prime}+\Psi_{2}\eta^{\prime\prime}=\Psi\eta=\Psi_{1}\eta+\Psi_{ 2}\eta\) it follows that \(\Psi_{1}\eta^{\prime}=\Psi_{1}\eta\) and \(\Psi_{2}\eta^{\prime\prime}=\Psi_{2}\eta\); cf. (6.7) and Remark 6.2. Therefore, the identities in (6.12) hold.
Now consider the case of operator range relations \(L(\Phi,\Psi)\) that are closable or singular. Recall that \(L(\Phi,\Psi)\) is closable if and only if \(\operatorname{mul}L(\Phi,\Psi)^{**}=\{0\}\) or, equivalently, for every sequence \(\eta_{n}\in\mathfrak{E}\) one has
\[\Phi\eta_{n}\to 0\quad\text{and}\quad\Psi(\eta_{n}-\eta_{m})\to 0\quad \Rightarrow\quad\Psi\eta_{n}\to 0. \tag{6.14}\]
Likewise, \(L(\Phi,\Psi)\) is singular if and only if \(\operatorname{ran}L(\Phi,\Psi)^{**}\subset\operatorname{mul}L(\Phi,\Psi)^{**}\) (cf. [12, Proposition 2.8]) or, equivalently, for every sequence \(\eta_{n}\) in \(\mathfrak{E}\) there exists a subsequence, denoted by \(\omega_{n}\), such that
\[\Psi(\eta_{n}-\eta_{m})\to 0\quad\Rightarrow\quad\Phi\omega_{n}\to 0. \tag{6.15}\]
Note that \(L(\Psi,\Phi)=L(\Phi,\Psi)^{-1}\) implies that \(L(\Phi,\Psi)\) is singular if and only if \(L(\Psi,\Phi)\) is singular.
**Remark 6.5**.: The characterizations in (6.14) and (6.15) of closable and singular operator range relations remain valid if the sequences \(\eta_{n}\) are taken from a dense set \(\mathfrak{R}\subset\mathfrak{E}\). To see this, observe that for any sequence \(\eta_{n}\in\mathfrak{E}\) there exists an approximating sequence \(\eta^{\prime}_{n}\in\mathfrak{R}\), such that
\[\|\eta_{n}-\eta^{\prime}_{n}\|\leq\frac{1}{n},\quad\text{in which case}\quad\big{|}\|L\eta_{n}\|-\|L\eta^{\prime}_{n}\|\big{|}\leq\frac{1}{n}\|L\|,\]
for any \(L\in\mathbf{B}(\mathfrak{E},\mathfrak{L})\), where \(\mathfrak{L}\) is a Hilbert space.
The following simple observations about the adjoint relation \(L(\Phi,\Psi)^{*}\) play a role in the rest of this section. A direct calculation shows that
\[L(\Phi,\Psi)^{*}=\big{\{}\,\{k,h\}\in\mathfrak{K}\times\mathfrak{H}:\,\Psi^{ *}k=\Phi^{*}h\,\big{\}}.\]
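For convenience, the calculation is spelled out: \(\{k,h\}\in L(\Phi,\Psi)^{*}\) precisely when \((\Psi\eta,k)_{\mathfrak{K}}=(\Phi\eta,h)_{\mathfrak{H}}\) for all \(\eta\in\mathfrak{E}\), that is, when \((\eta,\Psi^{*}k-\Phi^{*}h)_{\mathfrak{E}}=0\) for all \(\eta\in\mathfrak{E}\), which is the case if and only if \(\Psi^{*}k=\Phi^{*}h\).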
Thus, by means of the linear subspaces
\[\mathfrak{D}(\Phi,\Psi)=\{k\in\mathfrak{K}:\,\Psi^{*}k\in\operatorname{ran} \Phi^{*}\},\quad\mathfrak{R}(\Phi,\Psi)=\{h\in\mathfrak{H}:\,\Phi^{*}h\in \operatorname{ran}\Psi^{*}\},\]
the domain and the range of \(L(\Phi,\Psi)^{*}\) are given by
\[\operatorname{dom}L(\Phi,\Psi)^{*}=\mathfrak{D}(\Phi,\Psi),\quad\operatorname{ ran}L(\Phi,\Psi)^{*}=\mathfrak{R}(\Phi,\Psi),\]
and, likewise, the kernel and multivalued part of \(L(\Phi,\Psi)^{*}\) are given by
\[\ker\,L(\Phi,\Psi)^{*}=\ker\,\Psi^{*},\quad\operatorname{mul}L(\Phi,\Psi)^{*} =\ker\,\Phi^{*}.\]
**Definition 6.6**.: _The operator \(\Psi\) is called regular with respect to \(\Phi\) if \(\mathfrak{D}(\Phi,\Psi)\) is dense in \(\mathfrak{K}\), which is the case if and only if the relation \(L(\Phi,\Psi)\) is closable. Likewise, the operator \(\Psi\) is called singular with respect to \(\Phi\) if \(\mathfrak{D}(\Phi,\Psi)\subset\ker\,\Psi^{*}\) or, equivalently, \(\mathfrak{R}(\Phi,\Psi)\subset\ker\,\Phi^{*}\), which is the case if and only if the relation \(L(\Phi,\Psi)\) is singular._
Clearly, an equivalent characterization for singularity is that
\[\operatorname{ran}\Phi^{*}\cap\operatorname{ran}\Psi^{*}=\{0\}, \tag{6.16}\]
(expressing the symmetry between \(\Phi\) and \(\Psi\)); see also [15, Lemma 5.2].
**Remark 6.7**.: Both notions appearing in Definition 6.6 have equivalent characterizations which resemble their measure-theoretic analogs. In particular, \(\Psi\) is regular with respect to \(\Phi\) if and only if \(\Psi\) is almost dominated by \(\Phi\). In this case \(\Psi\) has a Radon-Nikodym derivative with respect to \(\Phi\), which is given by the closed operator
\[R(\Phi,\Psi)=L(\Phi,\Psi)^{**},\]
and then \(\Psi\) can be written as \(\Psi=R(\Phi,\Psi)\Phi\). Likewise, \(\Psi\) is singular with respect to \(\Phi\) (or \(\Phi\) is singular with respect to \(\Psi\)) precisely if for any \(\Xi\in\mathbf{B}(\mathfrak{K})\) one has
\[\Xi\prec\Phi\quad\text{and}\quad\Xi\prec\Psi\quad\Rightarrow\quad\Xi=0.\]
For the definitions and the arguments, see [15, Sections 5, 6].
**Definition 6.8**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Then \(\Psi\) is said to have a pseudo-orthogonal Lebesgue type decomposition_
\[\Psi=\Psi_{1}+\Psi_{2} \tag{6.17}\]
_with respect to \(\Phi\) if the sum (6.17) is a pseudo-orthogonal decomposition of \(\Psi\) with respect to \(\Phi\) as in Definition 6.3, where \(\Psi_{1}\) is regular with respect to \(\Phi\) and \(\Psi_{2}\) is singular with respect to \(\Phi\)._
The following characterization is now straightforward.
**Theorem 6.9**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Assume that \(K\in\mathbf{B}(\mathfrak{K})\) is a nonnegative contraction which satisfies_
\[\operatorname{clos}\big{\{}k\in\overline{\operatorname{ran}}\,(I-K):\,(I-K)k \in\mathfrak{D}(\Phi,\Psi)\big{\}}=\overline{\operatorname{ran}}\,(I-K), \tag{6.18}\]
\[\operatorname{ran}K\cap\mathfrak{D}(\Phi,\Psi)\subset\ker\,\Psi^{*}, \tag{6.19}\]
_and define_
\[\Psi_{1}=(I-K)\Psi\quad\text{and}\quad\Psi_{2}=K\Psi. \tag{6.20}\]
_Then the sum \(\Psi=\Psi_{1}+\Psi_{2}\) as in (6.17) is a pseudo-orthogonal Lebesgue type decomposition of \(\Psi\) with respect to \(\Phi\), connected with the pair \(I-K\) and \(K\) in the sense of Definition 6.8._
_Conversely, let \(\Psi=\Psi_{1}+\Psi_{2}\) in (6.17) be a pseudo-orthogonal Lebesgue type decomposition of \(\Psi\) with respect to \(\Phi\) in the sense of Definition 6.8. Then there exists a nonnegative contraction \(K\in\mathbf{B}(\mathfrak{K})\) such that (6.18), (6.19), and (6.20) are satisfied._
To verify the theorem, recall Theorem 6.4 and apply Definition 6.6 to the components \(\Psi_{1}\) and \(\Psi_{2}\) in (6.20); see also Theorem 5.2, or Lemma 4.1 and Lemma 4.3. Note that the condition for the sum in (6.11), as stated in Theorem 6.4, is now absent since this condition automatically follows from the condition (6.18): one has \(\Psi(\ker\,\Phi)\subset\ker\,(I-K)\); see (4.1), (6.3). Observe, also that the singularity condition (6.19) for the component \(\Psi_{2}=K\Psi\) is equivalent to \(\operatorname{ran}\Psi^{*}K\cap\operatorname{ran}\Phi^{*}=\{0\}\); cf. (6.16).
Let \(P_{0}\) be the orthogonal projection onto \(\mathfrak{D}(\Phi,\Psi)^{\perp}\). Then the pair of operators \(\Psi_{\mathrm{reg}}=(I-P_{0})\Psi\) and \(\Psi_{\mathrm{sing}}=P_{0}\Psi\) gives an orthogonal Lebesgue type decomposition \(\Psi=\Psi_{\mathrm{reg}}+\Psi_{\mathrm{sing}}\) with respect to \(\Phi\). It is called the _Lebesgue decomposition_ of \(\Psi\) with respect to \(\Phi\). Note that it follows from Theorem 6.9 that \((I-K)P_{0}=0\), so that \((I-K)(I-P_{0})=I-K\). Hence, for the regular part \(\Psi_{\mathrm{reg}}\) there is the following characterization; cf. [15].
**Corollary 6.10**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Let \(\Psi=\Psi_{1}+\Psi_{2}\) be a pseudo-orthogonal Lebesgue type decomposition of \(\Psi\) with respect to \(\Phi\), then_
\[\|\Psi_{1}h\|\leq\|\Psi_{\mathrm{reg}}h\|,\quad h\in\mathfrak{E}.\]
**Corollary 6.11**.: _Let \(\Phi\in\mathbf{B}(\mathfrak{E},\mathfrak{H})\) and \(\Psi\in\mathbf{B}(\mathfrak{E},\mathfrak{K})\). Then the following statements are equivalent:_
1. \(\Psi\) _admits a unique pseudo-orthogonal Lebesgue type decomposition with respect to_ \(\Phi\)_;_
2. \(\mathfrak{D}(\Phi,\Psi)\) _is closed._
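In finite dimensions the subspace \(\mathfrak{D}(\Phi,\Psi)\) coincides with \((\Psi(\ker\,\Phi))^{\perp}\) and is automatically closed, so by Corollary 6.11 the Lebesgue decomposition is then the unique pseudo-orthogonal Lebesgue type decomposition. The following NumPy sketch (added for illustration; the matrices and helper names are our own choices, not taken from the text) computes \(\Psi_{\mathrm{reg}}\) and \(\Psi_{\mathrm{sing}}\) for a random pair and verifies the defining properties numerically:

```python
import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(42)
Phi = rng.standard_normal((3, 4)); Phi[:, 3] = Phi[:, 2]   # force a nontrivial kernel
Psi = rng.standard_normal((3, 4))

N  = null_space(Phi)                  # orthonormal basis of ker(Phi)
B  = orth(Psi @ N)                    # basis of Psi(ker Phi) = D(Phi,Psi)^perp
P0 = B @ B.T                          # orthogonal projection onto Psi(ker Phi)

Psi_reg  = (np.eye(3) - P0) @ Psi     # regular part
Psi_sing = P0 @ Psi                   # singular part
assert np.allclose(Psi_reg + Psi_sing, Psi)

# Regular part annihilates ker(Phi), so L(Phi, Psi_reg) is (the graph of) an operator.
assert np.allclose(Psi_reg @ N, 0.0)

# Singular part: ran(Phi^*) and ran(Psi_sing^*) intersect trivially, cf. (6.16).
stacked = np.hstack([Phi.T, Psi_sing.T])
assert np.linalg.matrix_rank(stacked) == \
    np.linalg.matrix_rank(Phi.T) + np.linalg.matrix_rank(Psi_sing.T)
```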
## 7. Addendum
In Definition 3.4 the sum \(T=T_{1}+T_{2}\) in (3.9) is automatically strict, i.e., \(\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}=\{0\}\), due to Lemma 3.5. However, the conditions (a) and (b) in Definition 3.4 may be weakened to:
1. \(\operatorname{ran}T_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}T_{2}\subset\mathfrak{Y}\);
2. for every \(\{f,g\}\in T\) there exist elements \(\{f,g_{1}\}\in T_{1}\), \(\{f,g_{2}\}\in T_{2}\) with \(g=g_{1}+g_{2}\), such that \[\|g\|_{\mathfrak{K}}^{2}=\|g_{1}\|_{\mathfrak{X}}^{2}+\|g_{2}\|_{\mathfrak{Y} }^{2},\] and, in addition, \(\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}\subset XY\operatorname{mul }T\).
Then the conditions (a) and (b') are equivalent to (a) and (b) in Definition 3.4 if and only if the sum \(T=T_{1}+T_{2}\) is strict. Observe that the additional condition \(\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}\subset XY\operatorname{mul }T\) in (b') is equivalent to \(\operatorname{mul}T_{1}\cap\operatorname{mul}T_{2}=XY\operatorname{mul}T\); it serves as a smoothness condition, since in general one has the range inclusion \(\operatorname{ran}T_{1}\cap\operatorname{ran}T_{2}\subset\operatorname{ran }X^{\frac{1}{2}}\cap\operatorname{ran}Y^{\frac{1}{2}}\). The assertions in Theorem 3.6 remain valid in the weaker sense if the directness condition in (3.10) is dropped.
Similar remarks can be made for the linear relations generated by a pair of bounded operators. Recall that the condition \(\Psi(\ker\,\Phi)=\Psi_{1}(\ker\,\Phi)+\Psi_{2}(\ker\,\Phi)\) in Lemma 6.1 is equivalent to the identity (6.5). The conditions (a) and (b) in Remark 6.2 may be weakened to:
1. \(\operatorname{ran}\Psi_{1}\subset\mathfrak{X}\) and \(\operatorname{ran}\Psi_{2}\subset\mathfrak{Y}\);
2. for all \(\eta\in\mathfrak{E}\) there exist elements \(\eta^{\prime},\eta^{\prime\prime}\in\mathfrak{E}\) with \(\Phi\eta=\Phi\eta^{\prime}=\Phi\eta^{\prime\prime}\) and \(\Psi\eta=\Psi_{1}\eta^{\prime}+\Psi_{2}\eta^{\prime\prime}\), such that \[\|\Psi\eta\|_{\mathfrak{K}}^{2}=\|\Psi_{1}\eta^{\prime}\|_{\mathfrak{X}}^{2}+\| \Psi_{2}\eta^{\prime\prime}\|_{\mathfrak{Y}}^{2},\] while the inclusion \(\Psi_{1}(\ker\,\Phi)\cap\Psi_{2}(\ker\,\Phi)\subset XY\,\Psi(\ker\,\Phi)\) holds.
Then the conditions (a) and (b') are equivalent to (a) and (b) in Remark 6.2 if and only if \(\Psi_{1}(\ker\,\Phi)\cap\Psi_{2}(\ker\,\Phi)=\{0\}\). In the weaker case the assertions in Theorem 6.4 have to be adapted accordingly. The directness condition (6.11) must
be dropped and the condition (6.12) must be replaced by: for every \(\eta\in\mathfrak{E}\) there exist \(\eta^{\prime},\eta^{\prime\prime}\in\ker\,\Phi\), such that
\[\Psi_{1}\eta=(I-K)\Psi(\eta-\eta^{\prime})\quad\text{and}\quad\Psi_{2}\eta=K \Psi(\eta-\eta^{\prime\prime}).\]
2307.10904 | A Parametric Study of the SASI Comparing General Relativistic and Nonrelativistic Treatments | We present numerical results from a parameter study of the standing accretion shock instability (SASI), investigating the impact of general relativity (GR) on the dynamics. Using GR hydrodynamics with GR gravity, and nonrelativistic (NR) hydrodynamics with Newtonian gravity, in an idealized model setting, we vary the initial radius of the shock and, by varying its mass and radius in concert, the proto-neutron star (PNS) compactness. We investigate four compactnesses expected in a post-bounce core-collapse supernova (CCSN). We find that GR leads to a longer SASI oscillation period, with ratios between the GR and NR cases as large as 1.29 for the highest-compactness suite. We also find that GR leads to a slower SASI growth rate, with ratios between the GR and NR cases as low as 0.47 for the highest-compactness suite. We discuss implications of our results for CCSN simulations. | Samuel J. Dunham, Eirik Endeve, Anthony Mezzacappa, John M. Blondin, Jesse Buffaloe, Kelly Holley-Bockelmann | 2023-07-20T14:26:56Z | http://arxiv.org/abs/2307.10904v2

# A Parametric Study of the SASI Comparing General Relativistic and Non-Relativistic Treatments
###### Abstract
We present numerical results from a parameter study of the standing accretion shock instability (SASI), investigating the impact of general relativity (GR) on the dynamics. Using GR hydrodynamics and gravity, and non-relativistic (NR) hydrodynamics and gravity, in an idealized model setting, we vary the initial radius of the shock and, by varying its mass and radius in concert, the proto-neutron star (PNS) compactness. We investigate two regimes expected in a post-bounce core-collapse supernova (CCSN): one meant to resemble a relatively low-compactness configuration and one meant to resemble a relatively high-compactness configuration. We find that GR leads to a longer SASI oscillation period, with ratios between the GR and NR cases as large as 1.29 for the high-compactness suite. We also find that GR leads to a slower SASI growth rate, with ratios between the GR and NR cases as low as 0.47 for the high-compactness suite. We discuss implications of our results for CCSN simulations.
accretion -- general relativity -- hydrodynamics -- shocks -- supernovae
Footnote †: journal: ApJ
## 1 Introduction
Since the discovery of the standing accretion shock instability (SASI; Blondin et al., 2003), which many two- and three-dimensional simulations performed to date demonstrate becomes manifest during a core-collapse supernova (CCSN) in the post-shock accretion flow onto the proto-neutron star (PNS), groups have made efforts to understand its physical origin and its effects on the supernova itself. The SASI is characterized in two dimensions (2D) by a large-scale "sloshing" of the shocked fluid, and in three dimensions (3D) by additional spiral modes (Blondin & Mezzacappa, 2007). It is now generally accepted that turbulent neutrino-driven convection plays a major role in re-energizing the stalled shock (e.g., see Burrows et al., 2012; Hanke et al., 2013; Murphy et al., 2013; Couch & Ott, 2015; Melson et al., 2015; Lentz et al., 2015; Abdikamalov et al., 2015; Radice et al., 2015; Melson et al., 2015, 2015; Roberts et al., 2016; Muller et al., 2017; Summa et al., 2018; Radice et al., 2018; O'Connor & Couch, 2018; Vartanyan et al., 2019; Muller et al., 2019; Burrows et al., 2019; Yoshida et al., 2019; Powell & Muller, 2020; Stockinger et al., 2020; Muller & Varma, 2020; Vartanyan et al., 2022; Nakamura et al., 2022; Matsumoto et al., 2022). The same simulations that led to the above conclusion also generally exhibit the SASI, with outcomes ranging from convection-dominated flows to SASI-dominated flows, and flows where neither dominates. Strong SASI activity and, in some cases, SASI-aided explosions have been
reported in, for example, the three-dimensional simulations of Summa et al. (2018), O'Connor and Couch (2018), and Matsumoto et al. (2022). A more precise determination of the relative role played by these two instabilities in the explosion mechanism, on a case-by-case basis (i.e., for different progenitor characteristics; e.g., see Scheck et al., 2008; Hanke et al., 2012, 2013; Couch and O'Connor, 2014; Fernandez et al., 2014; Melson et al., 2015; Abdikamalov et al., 2015; Fernandez, 2015), will require advances in current three-dimensional models to include full general relativity, rotation, magnetic fields, and the requisite neutrino interaction physics with realistic spectral neutrino kinetics, all at high spatial resolution. It is also important to note that, while convection-dominated and SASI-dominated scenarios may lie at the extremes of what is possible, it is not necessary for one or the other instability to be dominant to play an important role - specifically, for the more complex cases where neither dominates, it would be impossible to determine precisely the relative contribution from these two instabilities.
Several studies have concluded that the SASI is an advective-acoustic instability, in which vortical waves generated at the shock advect to the surface of the PNS, which in turn generate acoustic waves that propagate back to the shock and further perturb it (Foglizzo et al., 2006, 2007; Yamasaki and Yamada, 2007; Laming, 2007, 2008; Foglizzo, 2009; Guilet and Foglizzo, 2012). This perturbation generates more vortical waves, which advect to the PNS surface, thus creating a feedback loop that drives the instability. An alternative explanation for the SASI is the purely acoustic mechanism, in which acoustic perturbations just below the shock travel around the post-shock region and constructively interfere with each other, generating stronger acoustic perturbations and thereby feeding the instability (Blondin and Mezzacappa, 2006). A recent study (Walk et al., 2023) suggests that the acoustic mechanism may play a particularly important role in the SASI when rotation is included, implying that the origins of the SASI excitation may depend on conditions between the shock and the PNS.
Other numerical studies focus on particular aspects of this instability, such as the hydrodynamics of the SASI (Ohnishi et al., 2006; Sato et al., 2009; Iwakami et al., 2014), spiral modes (Blondin and Shaw, 2007; Iwakami et al., 2008; Fernandez, 2010), the spin-up of the possible remnant pulsar (Blondin and Mezzacappa, 2007), the effect of nuclear dissociation (Fernandez and Thompson, 2009), saturation of the instability (Guilet et al., 2010), the generation and amplification of magnetic fields (Endeve et al., 2010, 2012), the relative importance of the SASI and convection in CCSNe (Cardall and Budiardja, 2015), the generation of, and impact on, gravitational waves by the SASI (Kotake et al., 2007, 2009; Kuroda et al., 2016; Andresen, 2017; Kuroda et al., 2017; Andresen et al., 2017; O'Connor and Couch, 2018; Hayama et al., 2018; Andresen et al., 2019; Mezzacappa et al., 2020, 2023; Drago et al., 2023), and the effects of rotation (Yamasaki and Yamada, 2005; Yamasaki and Foglizzo, 2008; Walk et al., 2023; Buellet et al., 2023). Some of these studies included sophisticated microphysics, such as realistic equations of state (EoSs) and neutrino transport; however, with the exception of Kuroda et al. (2017), none of these studies solved the general relativistic hydrodynamics (GRHD) equations, instead solving their non-relativistic (NRHD) counterparts, some with an approximate relativistic gravitational potential. It has been demonstrated that GR effects are crucial to include in CCSN simulations (Bruenn et al., 2001; Muller et al., 2012; Lentz et al., 2012; O'Connor and Couch, 2018), yet the SASI itself has not been fully investigated in the GR regime. A recent paper (Kundu and Coughlin, 2022) analyzes steady-state accretion through a stationary shock onto compact objects in a Schwarzschild geometry, compares with Newtonian solutions, and posits that GR may have a non-negligible impact on the SASI. They find that, for conditions expected in exploding CCSNe, the freefall speed is of order \(v\sim 0.2\,c\) (with \(c\) the speed of light), and the differences between the GR and NR solutions are of order \(10\%\). For conditions expected in failed CCSNe (i.e., supernovae where the shock is not revived, in which case the freefall speed can be \(v\sim 0.66\,c\)), the differences can be larger.
The timescales that likely influence the SASI depend on signal-speeds associated with advective and acoustic modes in the region between the shock and the PNS surface (Blondin and Mezzacappa, 2006; Foglizzo et al., 2007; Muller, 2020). Motivated in part by Dunham et al. (2020) and Kundu and Coughlin (2022), we expect SASI simulations to behave differently depending on whether or not the treatment of the hydrodynamics and gravity are general relativistic. Specifically, we expect both advective and acoustic modes to be influenced by the different post-shock structure in the GR case relative to the NR case.
This leads to our main science question: _How does a general relativistic treatment of hydrodynamics and gravity affect the oscillation period and growth rate of the SASI?_ To begin to address this, we present the first comparison of the SASI in both a non-relativistic and a general relativistic framework, using an idealized model under two sets of conditions: one set is mildly relativistic and is designed to mirror low-compactness conditions after bounce in a CCSN, and the other set is strongly relativistic and is designed to mirror high-compactness conditions. We focus our attention on the linear regime and characterize the SASI by its growth rate and oscillation period, as was done in Blondin and
Mezzacappa (2006). To capture both the linear regime of the SASI and its transition to the nonlinear regime, we perform our assessment via two suites of axisymmetric numerical simulations, differentiated by their compactness, using GRHD and GR gravity, with the PNS represented by a point mass and gravity encoded in a Schwarzschild spacetime metric. To better assess the impact of GR, we also perform simulations using the same parameter sets but with NRHD and Newtonian gravity, again with the PNS represented by a point mass, but in this case gravity is encoded in the Newtonian potential.
We use a system of units in which \(c=G=1\) and also make use of the Einstein summation convention, with Greek indices running from 0 to 3 and Latin indices running from 1 to 3.
## 2 Physical model
### Relativistic Gravity: Conformally-Flat Condition
We use the 3+1 decomposition of spacetime (see, e.g., Banyuls et al., 1997; Gourgoulhon, 2012; Rezzolla & Zanotti, 2013, for details), which, in the coordinate system \(x=\left(t,x^{i}\right)\), introduces four degrees of freedom: the lapse function, \(\alpha\left(x\right)\), and the three components of the shift vector, \(\beta^{i}\left(x\right)\). We further specialize to the conformally-flat condition (CFC, Wilson et al., 1996), effectively neglecting the impact of gravitational waves on the dynamics. This is a valid approximation when the CCSN progenitor is non-rotating (Dimmelmeier et al., 2005), as is the case for our simulations. The CFC forces the components of the spatial three-metric, \(\gamma_{ij}\left(x\right)\), to take the form
\[\gamma_{ij}=\psi^{4}\,\overline{\gamma}_{ij}, \tag{1}\]
where \(\psi\left(x\right)\) is the conformal factor and the \(\overline{\gamma}_{ij}\) are the components of a time-independent, flat-space metric. We choose an isotropic spherical-polar coordinate system, as it is appropriate to our problem and is consistent with the CFC; the flat-space metric is
\[\overline{\gamma}_{ij}\left(r,\theta\right):=\text{diag}\left(1,r^{2},r^{2}\, \sin^{2}\theta\right), \tag{2}\]
and the lapse function, conformal factor, and shift vector take the form given in Baumgarte & Shapiro (2010),
\[\alpha\left(t,r,\theta,\varphi\right) =\alpha\left(r\right)\;:=\frac{1-r_{s}/r}{1+r_{s}/r}, \tag{3}\] \[\psi\left(t,r,\theta,\varphi\right) =\psi\left(r\right)\;:=1+r_{s}/r,\] (4) \[\beta^{i}\left(t,r,\theta,\varphi\right) =\beta^{i}\left(r\right):=0, \tag{5}\]
where \(r>r_{s}\) is the isotropic radial coordinate measured from the origin and \(r_{s}:=M/2\) is the Schwarzschild radius in isotropic coordinates for an object of mass \(M\). The line element under a 3+1 decomposition in isotropic coordinates takes the form
\[ds^{2}=-\alpha^{2}\,dt^{2}+\gamma_{ij}\,dx^{i}\,dx^{j}. \tag{6}\]
We note here that the proper radius, \(\mathcal{R}\left(r\right)\), corresponding to the coordinate radius, \(r>r_{s}\), is defined by
\[\mathcal{R}\left(r\right):=\int\limits_{r_{s}}^{r}\sqrt{\gamma_{11}}\,dr^{ \prime}=r-r_{s}^{2}/r+2\,r_{s}\,\log\left(r/r_{s}\right)\geq r, \tag{7}\]
where we used Eqs. (1-2) with \(\psi\) given by (4). Under the CFC, the square root of the determinant of the spatial three-metric is
\[\sqrt{\gamma}=\psi^{6}\,\sqrt{\overline{\gamma}}=\psi^{6}\,r^{2}\,\sin\theta. \tag{8}\]
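As a quick sanity check on Eqs. (3), (4), and (7), the closed form of the proper radius can be compared with direct quadrature of \(\sqrt{\gamma_{11}}=\psi^{2}\). The following is a minimal sketch, in geometrized units with the arbitrary illustrative choice \(M=1\):

```python
import numpy as np
from scipy.integrate import quad

M   = 1.0            # illustrative mass in geometrized units (c = G = 1)
r_s = M / 2.0        # Schwarzschild radius in isotropic coordinates

alpha = lambda r: (1.0 - r_s / r) / (1.0 + r_s / r)   # lapse, Eq. (3)
psi   = lambda r: 1.0 + r_s / r                       # conformal factor, Eq. (4)

def proper_radius(r):
    """Closed form of Eq. (7)."""
    return r - r_s**2 / r + 2.0 * r_s * np.log(r / r_s)

r = 10.0 * r_s
numeric, _ = quad(lambda rp: psi(rp)**2, r_s, r)      # integrate sqrt(gamma_11) = psi^2
assert abs(numeric - proper_radius(r)) < 1e-6
```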
### Relativistic Hydrodynamics
We solve the relativistic hydrodynamics equations of a perfect fluid (i.e., no viscosity or heat transfer) in the Valencia formulation (Banyuls et al., 1997; Rezzolla & Zanotti, 2013), in which they take the form of a system of hyperbolic conservation laws with sources. Under our assumption of a stationary spacetime, the equations can be written as
\[\partial_{t}\,\mathbf{U}+\frac{1}{\sqrt{\gamma}}\,\partial_{i}\left(\alpha\, \sqrt{\gamma}\,\mathbf{F}^{i}\right)=\alpha\,\mathbf{S}, \tag{9}\]
where \(\mathbf{U}:=\mathbf{U}\left(t,r,\theta,\varphi\right)\) is the vector of evolved fluid fields,
\[\mathbf{U}=\begin{pmatrix}D\\ S_{j}\\ \tau\end{pmatrix}=\begin{pmatrix}\rho\,W\\ \rho\,h\,W^{2}\,v_{j}\\ \rho\,h\,W^{2}-p-\rho\,W\end{pmatrix}, \tag{10}\]
\(\mathbf{F}^{i}:=\mathbf{F}^{i}\left(\mathbf{U}\right)\) is the vector of fluxes of those fields in the \(i\)-th spatial dimension,
\[\mathbf{F}^{i}=\begin{pmatrix}D\,v^{i}\\ P^{i}_{\ j}\\ S^{i}-D\,v^{i}\end{pmatrix}, \tag{11}\]
and \(\mathbf{S}:=\mathbf{S}\left(\mathbf{U}\right)\) is the vector of sources,
\[\mathbf{S}=\begin{pmatrix}0\\ \frac{1}{2}\,P^{ik}\,\partial_{j}\,\gamma_{ik}-\alpha^{-1}\left(\tau+D\right) \,\partial_{j}\,\alpha\\ -\alpha^{-1}S^{j}\,\partial_{j}\,\alpha\end{pmatrix}, \tag{12}\]
where \(D\) is the conserved rest-mass density, \(S_{j}\) is the component of the Eulerian momentum density in the \(j\)-th spatial dimension, and \(\tau:=E-D\), with \(E\) the Eulerian energy density. The component of the fluid three-velocity in the \(j\)-th spatial dimension is denoted by \(v^{j}\), and \(W:=\left(1-v^{i}v_{i}\right)^{-1/2}\) is the Lorentz factor of the fluid, both as measured by an Eulerian observer. The relativistic specific enthalpy as measured by an observer comoving with the fluid (i.e., a comoving observer) is \(h:=1+\left(e+p\right)/\rho\), where \(\rho\) is the baryon mass density, \(e\) is the internal energy density, and \(p\) is the thermal pressure, all measured by a comoving observer. Finally, \(P^{ij}:=\rho\,h\,W^{2}\,v^{i}\,v^{j}+p\,\gamma^{ij}\), with \(\gamma^{ij}\) the inverse of \(\gamma_{ij}\); i.e., \(\gamma^{ik}\gamma_{kj}=\delta^{i}_{\ j}\). See Rezzolla & Zanotti (2013) for more details.
We close the hydrodynamics equations with an ideal EoS,
\[p\left(e\right)=\left(\Gamma-1\right)e, \tag{13}\]
where \(\Gamma\in\left(1,2\right]\) is the ratio of specific heats. For this study, we set \(\Gamma=4/3\). We further assume the EoS is that of a polytrope; i.e.,
\[p\left(\rho\right)=K\,\rho^{\Gamma}, \tag{14}\]
where \(K>0\) is the polytropic constant, whose logarithm can be considered a proxy for the entropy, \(S\); i.e., \(S\propto\log\left(p/\rho^{\Gamma}\right)\). The constant \(K\) takes different values on either side of a shock, in accordance with physically admissible solutions. Equation (14) is consistent with (13) through the first law of thermodynamics for an isentropic fluid.
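For reference, the map from primitive to conserved variables in (10), specialized to purely radial flow in the metric of Section 2.1, can be sketched as follows (an illustrative helper of our own; the implementation in an actual solver may differ):

```python
import numpy as np

def primitives_to_conserved(rho, v, p, psi, Gamma=4.0/3.0):
    """Primitive (rho, v^r, p) to conserved (D, S_r, tau) of Eq. (10),
    for purely radial flow, where v^i v_i = psi^4 (v^r)^2."""
    e = p / (Gamma - 1.0)                    # ideal EoS, Eq. (13)
    h = 1.0 + (e + p) / rho                  # relativistic specific enthalpy
    W = 1.0 / np.sqrt(1.0 - psi**4 * v**2)   # Lorentz factor
    D   = rho * W
    S_r = rho * h * W**2 * psi**4 * v        # covariant velocity v_r = gamma_rr v^r
    tau = rho * h * W**2 - p - D
    return D, S_r, tau
```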
### Non-Relativistic Hydrodynamics
Under the 3+1 formalism of GR, the effect of gravity is encoded in the metric via the lapse function, the conformal factor, and the shift vector, whereas with NR, the metric is that of flat space and the effect of gravity is encoded in the Newtonian gravitational potential, \(\Phi\). Of course, the NRHD equations can be recovered from the GRHD equations by taking appropriate limits; i.e., \(v^{i}v_{i}\ll 1\), \(p,e\ll\rho\), and \(\Phi\ll 1\), and setting \(\alpha\approx\alpha_{\textsc{NR}}:=1+\Phi\).
In the case of NR, we solve
\[\partial_{t}\,\mathbf{U}+\frac{1}{\sqrt{\gamma}}\,\partial_{i}\left(\sqrt{\gamma} \,\mathbf{F}^{i}\right)=\mathbf{S}, \tag{15}\]
where
\[\mathbf{U}=\begin{pmatrix}D\\ S_{j}\\ E\end{pmatrix}=\begin{pmatrix}\rho\\ \rho\,v_{j}\\ e+\frac{1}{2}\,\rho\,v^{i}\,v_{i}\end{pmatrix}, \tag{16}\]
\[\mathbf{F}^{i}=\begin{pmatrix}\rho\,v^{i}\\ P^{i}_{\ j}\\ \left(E+p\right)v^{i}\end{pmatrix}, \tag{17}\]
and
\[\boldsymbol{S}=\begin{pmatrix}0\\ \frac{1}{2}\,P^{ik}\,\partial_{j}\,\overline{\gamma}_{ik}-\rho\,\partial_{j}\, \Phi\\ -S^{j}\,\partial_{j}\,\Phi\end{pmatrix}, \tag{18}\]
where \(P^{ij}:=\rho\,v^{i}\,v^{j}+p\,\overline{\gamma}^{ij}\) and we assume \(\Phi\) is due only to the point source PNS,
\[\Phi\left(r\right):=\frac{-M_{\text{\tiny PNS}}}{r}. \tag{19}\]
## 3 Steady-state accretion shocks
We take initial conditions for our simulations from steady-state solutions to (9) (GR) and (15) (NR). To determine the steady-state solutions, we assume the fluid distribution is spherically symmetric and time-independent. Following Blondin et al. (2003), we consider a stationary accretion shock located at \(r=R_{\text{\tiny S}}\) with a PNS mass \(M_{\text{\tiny PNS}}\), PNS radius \(R_{\text{\tiny PNS}}\), and a constant mass accretion rate \(\dot{M}>0\). We assume a polytropic constant ahead of the shock, \(K=2\times 10^{14}\,\left[\left(\operatorname{erg}\operatorname{cm}^{-3}\right) /\left(\operatorname{g}\operatorname{cm}^{-3}\right)^{4/3}\right]\), chosen so that the pre-shock flow is highly supersonic (all of our models have a pre-shock Mach number greater than 15). Given that our steady-state solutions have constant entropy between the PNS surface and the shock, they are convectively stable. This enables us to isolate the SASI and study its development.
### Relativistic Steady-State Solutions
Focusing on the equation for \(D\), we find (temporarily defining \(v\equiv v^{r}\)),
\[\alpha\,\psi^{6}\,W\times 4\pi\,r^{2}\rho\,v=-\dot{M}. \tag{20}\]
Manipulation of the equations for \(D\) and \(\tau\) in (9) yields the relativistic Bernoulli equation,
\[\alpha\,h\,W=\mathcal{B}, \tag{21}\]
where \(\mathcal{B}>0\) is the relativistic Bernoulli constant. At spatial infinity, the fluid is assumed to be at rest and the spacetime curvature negligible, so \(\alpha_{\infty}=W_{\infty}=1\). Further, at spatial infinity, we assume the fluid to be cold; i.e., \(\left(e+p\right)/\rho\ll 1\), so that \(h_{\infty}=1\) and \(\mathcal{B}_{\infty}=1\). Since \(\mathcal{B}\) is a constant, \(\mathcal{B}=1\) everywhere.
Given \(K\), Eqs. (14), (20), and (21) (with \(\mathcal{B}=1\)) form a system of three equations in the three unknowns, \(\rho\), \(v\), and \(p\). From initial guesses \(v_{0}=-\sqrt{2\,M_{\text{\tiny PNS}}/r}\), \(\rho_{0}=-\dot{M}/\left(4\pi\,r^{2}\,v_{0}\right)\), and \(p_{0}=K\,\rho_{0}^{\Gamma}\), the first two of which are obtained from the Newtonian approximation at a distance \(r>R_{\text{\tiny S}}\) for highly supersonic flow, we define dimensionless variables \(u_{1}:=\rho/\rho_{0}\), \(u_{2}:=v/v_{0}\), and \(u_{3}:=p/p_{0}\). These are substituted into the system of equations, which are then solved with a Newton-Raphson algorithm to determine the state of the fluid everywhere ahead of the shock. To join the pre- and post-shock states of the fluid at \(r=R_{\text{\tiny S}}\), we apply the relativistic Rankine-Hugoniot jump conditions (i.e., the Taub jump conditions, Taub, 1948) to obtain \(\rho\), \(v\), and \(p\) just below the shock. Once the state of the fluid just below the shock is found, the polytropic constant for the post-shock fluid is computed with (14) and the same system of equations is solved for the state of the fluid everywhere below the shock.
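The pre-shock root solve just described can be sketched as follows. All parameter values below (in geometrized units) and function names are illustrative choices of ours, not the paper's production setup:

```python
import numpy as np
from scipy.optimize import fsolve

Gamma, K, Mdot, M = 4.0/3.0, 1.0e-3, 1.0, 1.0   # illustrative values, c = G = 1
r_s   = M / 2.0
alpha = lambda r: (1.0 - r_s / r) / (1.0 + r_s / r)
psi   = lambda r: 1.0 + r_s / r

def residuals(u, r, rho0, v0, p0):
    """Dimensionless residuals of Eqs. (14), (20), and (21) with B = 1."""
    rho, v, p = u[0] * rho0, u[1] * v0, u[2] * p0
    W = 1.0 / np.sqrt(1.0 - psi(r)**4 * v**2)    # v^i v_i = psi^4 (v^r)^2
    h = 1.0 + Gamma / (Gamma - 1.0) * p / rho    # enthalpy for the ideal EoS
    return [alpha(r) * psi(r)**6 * W * 4.0 * np.pi * r**2 * rho * v + Mdot,  # Eq. (20)
            alpha(r) * h * W - 1.0,                                          # Eq. (21)
            p - K * rho**Gamma]                                              # Eq. (14)

r    = 10.0                              # a radius ahead of the shock
v0   = -np.sqrt(2.0 * M / r)             # Newtonian free-fall guess
rho0 = -Mdot / (4.0 * np.pi * r**2 * v0)
p0   = K * rho0**Gamma
u1, u2, u3 = fsolve(residuals, [1.0, 1.0, 1.0], args=(r, rho0, v0, p0))
rho, v, p  = u1 * rho0, u2 * v0, u3 * p0
```

Sweeping \(r\) inward to \(R_{\text{\tiny S}}\), applying the Taub conditions there, resetting \(K\) via (14), and repeating the solve below the shock then yields the full initial profile.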
### Non-Relativistic Steady-State Solutions
The steady-state solution method for the non-relativistic case (taken from Blondin et al. (2003)) follows a similar procedure as the relativistic case, except we begin from the NR equations for mass density and energy density,
\[\partial_{t}\,\rho+\frac{1}{\sqrt{\overline{\gamma}}}\,\partial_{i}\left(\sqrt{\overline{\gamma}}\,\rho\,v^{i}\right) =0, \tag{22}\] \[\partial_{t}\,E+\frac{1}{\sqrt{\overline{\gamma}}}\,\partial_{i}\left[\sqrt{\overline{\gamma}}\left(E+p\right)v^{i}\right] =-\rho\,v^{i}\,\partial_{i}\,\Phi, \tag{23}\]
where \(E:=e+\frac{1}{2}\,\rho\,\overline{\gamma}_{ij}\,v^{i}\,v^{j}\). From these, and making the same assumptions as in the relativistic case, we arrive at a system of two equations for the three unknowns, \(\rho\), \(v\equiv v^{r}\), and \(p\),
\[4\pi\,r^{2}\,\rho\,v =-\dot{M}, \tag{24}\] \[\frac{1}{2}\,v^{2}+h_{\text{\tiny NR}}+\Phi =\mathcal{B}_{\text{\tiny NR}}, \tag{25}\]
with \(h_{\rm NR}=h-1=\left(e+p\right)/\rho\) the non-relativistic specific enthalpy and \(\mathcal{B}_{\rm NR}\) the non-relativistic Bernoulli constant. Following Blondin et al. (2003), we set \(\mathcal{B}_{\rm NR}=0\). As in the GR case, we close this system with (14).
In the non-relativistic limit, \(\alpha\approx 1+\Phi\) and \(W\approx 1+\frac{1}{2}\,v^{i}\,v_{i}\); substituting these into (21) and discarding products of small quantities yields \(\alpha\,h\,W\approx 1+\Phi+h_{\rm NR}+\frac{1}{2}\,v^{i}\,v_{i}=1+\mathcal{B}_{\rm NR}\). Since \(\mathcal{B}=1\), this recovers \(\mathcal{B}_{\rm NR}=0\), in agreement with (25).
### Comparison of NR and GR Steady-State Solutions
For the low-compactness case (e.g., see Bruenn et al., 2013; Melson et al., 2015; Burrows et al., 2020), Figure 1 shows steady-state accretion shock solutions as functions of coordinate distance \(r\), with \(M_{\rm PNS}=1.4\,M_{\odot}\) (where \(M_{\odot}\) is the mass of our Sun), \(\dot{M}=0.3\,M_{\odot}\,\rm s^{-1}\), and \(R_{\rm s}/\rm km\in\{120,150,175\}\).
In general, the magnitudes of the density, velocity, and pressure just below the shock decrease as the shock radius increases (see, e.g., Eqs. (1-3) in Blondin et al., 2003). From the top-right panel of Figure 1, it can be seen that the velocities in the GR and NR cases agree well near the shock and deviate from each other for smaller radii, with the velocity being smaller in the GR case than in the NR case. The top-left and bottom-left panels show that the densities and pressures in the GR case are larger than their NR counterparts at smaller radii. The slope of the NR density profile matches expectations of \(\rho\left(r\right)\propto r^{-3}\)(Blondin et al., 2003), but the GR density profile deviates noticeably from this as the inner-boundary is approached. From the bottom-right panel, it can be seen that the lapse function and its Newtonian approximation begin to deviate from each other near \(r=40\,\rm km\), with the degree of deviation increasing for smaller radii.
For the high-compactness case (e.g., see Liebendorfer et al., 2001; Walk et al., 2020), Figure 2 shows steady-state accretion shock solutions as functions of coordinate distance \(r\), with \(M_{\rm PNS}=2.8\,M_{\odot}\), \(\dot{M}=0.3\,M_{\odot}\,\rm s^{-1}\), and \(R_{\rm s}/\rm km\in\{60,70\}\). The profiles show the same trends as those in Figure 1, although in this case the trends are more pronounced. One notable difference is that the location of the largest deviation in the velocity between the NR and GR cases occurs further in, near \(r=12\) km. Another notable feature of both Figures 1 and 2 is that the fluid velocity in the GR case is consistently slower than that in the NR case, in agreement with Kundu & Coughlin (2022).
We compare our numerical results with an analytic estimate of the oscillation period of the SASI, \(T_{\rm aa}\), provided by Muller (2020), based on the assumption that the SASI is an advective-acoustic cycle, in which a fluid parcel advects from the shock to the PNS surface in time \(\tau_{\rm ad}\), which generates acoustic waves that propagate from the PNS surface to the shock in time \(\tau_{\rm ac}\). We modify that formula to include the effects of GR by including the metric factor, which involves the conformal factor and converts the radial coordinate increment to the proper radial distance increment, and by replacing the non-relativistic signal-speeds with their relativistic counterparts,
\[T_{\rm aa}\approx\tau_{\rm ad}+\tau_{\rm ac}=\int\limits_{R_{\rm a}}^{R_{\rm PNS }}\frac{\sqrt{\gamma_{11}}\,dr}{\lambda_{0}^{r}}+\int\limits_{R_{\rm PNS}}^{R _{\rm a}}\frac{\sqrt{\gamma_{11}}\,dr}{\lambda_{+}^{r}}, \tag{26}\]
where \(\lambda_{0}^{r}\) and \(\lambda_{+}^{r}\) are the radial signal-speeds of matter and acoustic waves, respectively. Using our metric, the signal-speeds are (Rezzolla & Zanotti, 2013)
\[\lambda_{0}^{r} =\alpha\,v^{r}\stackrel{\rm NR}{=}v^{r}, \tag{27}\] \[\lambda_{+}^{r} =\frac{\alpha}{1-v^{i}v_{i}\,c_{s}^{2}}\left\{v^{r}\left(1-c_{s}^{2}\right)+c_{s}\sqrt{\left(1-v^{i}v_{i}\right)\left[\gamma^{11}\left(1-v^{i}v_{i}\,c_{s}^{2}\right)-v^{r}\,v^{r}\left(1-c_{s}^{2}\right)\right]}\right\}\stackrel{\rm NR}{=}v^{r}+c_{s}, \tag{28}\]
Figure 2: Post-shock, steady-state solutions for GR (solid) and NR (dashed) equations as functions of coordinate distance \(r\in[3\,{\rm km},R_{\rm s}]\) for two models having the same accretion rate of \(0.3\,M_{\odot}/{\rm s}\), the same PNS mass of \(2.8\,M_{\odot}\), and shock radii 60 km (blue) and 70 km (orange). For this PNS mass, the Schwarzschild radius is \(\sim\)2.1 km. Quantities defined just ahead of the shock are denoted with a subscript “1”. The top-left panel shows the comoving baryon mass density normalized to its value just ahead of the shock, the top-right panel shows the radial component of the fluid three-velocity normalized to its value just ahead of the shock, the bottom-left panel shows the comoving pressure normalized to the Newtonian ram pressure just ahead of the shock, \(\rho_{1}\,v_{1}^{2}\), and the bottom-right panel shows the lapse function (solid) and the Newtonian approximation to the lapse function (dashed), \(1+\Phi\), with \(\Phi\) the Newtonian gravitational potential, normalized to their values at the shock.
where \(c_{s}\) is the sound-speed and where the second equality in each expression is the non-relativistic limit. For 1D problems, this expression depends only on the steady-state values of \(c_{s}\) and \(v^{r}\). We also compare our results with an estimate based on the assumption that the SASI is a purely acoustic phenomenon. We define a time, \(T_{\rm ac}\), as the time taken by an acoustic perturbation to circumnavigate the PNS at a characteristic radius \(R_{\rm ac}\), assuming \(v^{\theta}\ll c_{s}\,\sqrt{\gamma^{22}}\),
\[T_{\rm ac}:=2\pi\,R_{\rm ac}/\left(h_{\theta}\,\lambda_{+}^{\theta}\right), \tag{29}\]
where
\[\lambda_{+}^{\theta}=\frac{\alpha}{1-v^{i}v_{i}\,c_{s}^{2}}\left\{v^{\theta}\left(1-c_{s}^{2}\right)+c_{s}\sqrt{\left(1-v^{i}v_{i}\right)\left[\gamma^{22}\left(1-v^{i}v_{i}\,c_{s}^{2}\right)-v^{\theta}\,v^{\theta}\left(1-c_{s}^{2}\right)\right]}\right\}\stackrel{{\rm NR}}{{=}}v^{\theta}+c_{s}/r \tag{30}\]
is the acoustic wave-speed in the \(\theta\)-dimension (Rezzolla & Zanotti, 2013), and \(h_{\theta}\) is the scale factor in the \(\theta\)-dimension.
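For a given steady-state profile, the two period estimates reduce to one-dimensional quadratures over the post-shock region. The following is a minimal sketch of how one might evaluate (26)-(30) with NumPy; it assumes zero shift, purely radial background flow (\(v^{\theta}=0\)), and profile arrays (\(r\), \(\alpha\), \(\gamma_{11}\), \(v^{r}\), \(c_{s}\)) supplied by a steady-state solver spanning \([R_{\rm PNS},R_{\rm s}]\). It is an illustration, not part of thornado.

```python
import numpy as np

def lam_r_acoustic(alpha, gamma11, v_r, c_s):
    # Eq. (28) specialized to purely radial flow, where v^i v_i = gamma_11 (v^r)^2
    v2 = gamma11 * v_r**2
    disc = (1.0 - v2) * ((1.0 - v2 * c_s**2) / gamma11
                         - v_r**2 * (1.0 - c_s**2))
    return alpha * (v_r * (1.0 - c_s**2) + c_s * np.sqrt(disc)) \
           / (1.0 - v2 * c_s**2)

def period_estimates(r, alpha, gamma11, v_r, c_s, r_ac):
    """T_aa from Eq. (26) and T_ac from Eq. (29) on a 1D post-shock
    profile spanning [R_PNS, R_s]; inflow has v_r < 0."""
    lam0 = alpha * v_r                      # Eq. (27), advective speed
    lamp = lam_r_acoustic(alpha, gamma11, v_r, c_s)
    dl = np.sqrt(gamma11)                   # proper radial distance factor
    t_aa = np.trapz(dl / np.abs(lam0), r) + np.trapz(dl / np.abs(lamp), r)
    # Eq. (29): with v^theta = 0, h_theta * lambda_+^theta reduces to
    # alpha * c_s * sqrt((1 - v^2) / (1 - v^2 c_s^2))
    v2 = gamma11 * v_r**2
    h_lam = alpha * c_s * np.sqrt((1.0 - v2) / (1.0 - v2 * c_s**2))
    t_ac = 2.0 * np.pi * r_ac / np.interp(r_ac, r, h_lam)
    return t_aa, t_ac
```

Setting \(\alpha=1\) and \(\gamma_{11}=1\) recovers the Newtonian estimates with \(\lambda_{0}^{r}=v^{r}\) and \(\lambda_{+}^{r}=v^{r}+c_{s}\).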
We plot, in Figure 3, \(\lambda_{+}^{r,\rm GR}/\lambda_{+}^{r,\rm NR}\), \(\lambda_{+}^{\theta,\rm GR}/\lambda_{+}^{\theta,\rm NR}\), and \(\lambda_{0}^{r,\rm GR}/\lambda_{0}^{r,\rm NR}\) as functions of \(\eta\), defined as
\[\eta\left(r\right):=\left(r-R_{\rm PNS}\right)/\left(R_{\rm s}-R_{\rm PNS}\right), \tag{31}\]
for all of our models. In all cases, the signal-speeds are slower in the GR case. This difference is accentuated in the high-compactness suite and for smaller radii.
The growth rate of the SASI depends on the steady-state conditions below the shock; however, obtaining an analytic estimate for it is difficult, and although there have been efforts to explain the physics governing the growth rate assuming non-relativistic models (e.g., Blondin & Mezzacappa, 2006; Foglizzo et al., 2007; Laming, 2007, 2008; Foglizzo, 2009; Guilet & Foglizzo, 2012), an analytic estimate remains elusive and none exists for a GR model. Here, we compare the NR and GR growth rates using numerical simulations.
We emphasize that our goal is to characterize the SASI in terms of its period and growth rate, and to compare their GR and NR values. Determining its physical origin, whether it be advective-acoustic or purely acoustic, is beyond the scope of this study. The rough estimates provided by (26) and (29) are merely intended as points of reference for our numerically determined values.
## 4 Simulation Code and Setup
Figure 3: Ratio of radial GR acoustic signal-speed to radial NR acoustic signal-speed (solid), ratio of angular GR acoustic signal-speed to angular NR acoustic signal-speed (dashed), and ratio of radial GR advective signal-speed to radial NR advective signal-speed (dotted), plotted as functions of \(\eta\) for all models. Low-compactness models are shown in the top panel and high-compactness models are shown in the bottom panel. All signal-speeds have been multiplied with the appropriate scale factors.
We perform our simulations with thornado1, an open-source code under development that aims to simulate CCSNe. thornado uses high-order discontinuous Galerkin (DG) methods to discretize space and strong-stability-preserving Runge-Kutta methods to evolve in time. For details on the implementation in the non-relativistic case, see Endeve et al. (2019) and Pochik et al. (2021). In the relativistic case, see Dunham et al. (2020) and Dunham et al. (2023) (in prep.). All of our simulations use the HLL Riemann solver (Harten et al., 1983), a quadratic polynomial representation (per dimension) of the solution in each element, and third-order SSP-RK methods for time integration (Gottlieb et al., 2001) with a timestep \(\Delta t:=C_{\textsc{CFL}}\frac{1}{d(2\,k+1)}\min_{i\in\{1,\cdots,d\}}\Delta x^{i}/\left|\lambda^{i}\right|\), where \(C_{\textsc{CFL}}=0.5\) is the CFL number, \(k\) the degree of the approximating polynomial (in our case, \(k=2\)), \(\Delta x^{i}\) the mesh width in the \(i\)-th dimension, \(\lambda^{i}\) the fastest signal-speed in the \(i\)-th dimension, and \(d\) the number of spatial dimensions (Cockburn and Shu, 2001).
Footnote 1: [https://github.com/endeve/thornado](https://github.com/endeve/thornado)
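For concreteness, the timestep restriction above can be written in a few lines; a minimal sketch, with per-dimension mesh widths and fastest signal-speeds as placeholder inputs.

```python
def dg_timestep(dx, lam, k=2, c_cfl=0.5):
    """CFL-limited timestep for a DG scheme of polynomial degree k:
    dt = C_CFL / (d (2k + 1)) * min_i dx_i / |lambda_i|."""
    d = len(dx)
    return c_cfl / (d * (2 * k + 1)) * min(
        dx_i / abs(lam_i) for dx_i, lam_i in zip(dx, lam))
```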
Two important aspects of successful implementations of the DG method are mitigating spurious oscillations and enforcing physical realizability of the polynomial approximation of the solution. To mitigate oscillations, thornado uses the minmod limiter and applies it to the characteristic fields (see, e.g., Shu, 1987; Pochik et al., 2021). For the interested reader, we set the \(\beta_{\textsc{tvd}}\) parameter for the limiter, defined in Pochik et al. (2021), to \(1.75\) for all runs. thornado also uses the troubled-cell indicator described in Fu and Shu (2017) to decide on which elements to apply the minmod limiter; for the threshold of that indicator we use the value \(5\times 10^{-3}\). To enforce physical realizability of the solution in the NR case, thornado uses the positivity limiter described in Zhang and Shu (2010), and in the GR case, uses the limiter described in Qin et al. (2016); for the thresholds of both limiters we use the value \(10^{-13}\).
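The three-argument minmod on which such limiters are built is standard (see, e.g., Cockburn and Shu, 2001); the sketch below is our own illustration rather than thornado's implementation. In the TVB-type variant referenced above, the forward and backward element-mean differences would be scaled by the \(\beta\) parameter before entering minmod.

```python
import numpy as np

def minmod(a, b, c):
    """Return the argument of smallest magnitude when a, b, and c share
    a sign, and zero otherwise (applied componentwise)."""
    s = np.sign(a)
    same_sign = (s == np.sign(b)) & (s == np.sign(c))
    mag = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(same_sign, s * mag, 0.0)
```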
The hydrodynamics in thornado has recently been coupled to AMReX2, an open-source software library for block-structured adaptive mesh refinement and parallel computation with MPI (Zhang et al., 2019); however, our simulations are all performed on a uni-level mesh.
Footnote 2: [https://github.com/AMReX-codes/amrex](https://github.com/AMReX-codes/amrex)
Our computational domain, \(\mathcal{D}\), is defined for all models as \(\mathcal{D}:=[R_{\textsc{PNS}},1.5\,R_{\textsc{s}}\,(t=0)]\times[0,\pi]\). The radial extent allows us to determine whether or not the SASI has become nonlinear, which we define to be when any radial coordinate of the shock exceeds \(10\%\) of the initial shock radius. All simulations are evolved sufficiently long to achieve ten full cycles of the SASI. In some cases, the shock exceeds our threshold of nonlinearity before completing ten full cycles; in those cases, we only use data from the linear regime.
The PNS is treated as a fixed, spherically symmetric mass in order to maintain a steady-state, and we ignore the self-gravity of the fluid. Because the largest accretion rate we consider is \(0.5\,M_{\odot}\,\mathrm{s}^{-1}\) and because this lasts for a maximum of \(550\,\mathrm{ms}\) in our 2D models, the most mass that would accrete onto the PNS is \(0.275\,M_{\odot}\) and therefore would provide a sub-dominant contribution to \(M_{\textsc{PNS}}\) in our simulations.
We consider models with three free parameters: the mass of the PNS, \(M_{\textsc{PNS}}\), the radius of the PNS, \(R_{\textsc{PNS}}\), and the initial radius of the shock, \(R_{\textsc{s}}\). We also varied the mass accretion rate, \(\dot{M}\), but found the oscillation periods and growth rates to be insensitive to this parameter, and we do not discuss these models further; all following discussion is for models with an accretion rate \(\dot{M}=0.3\,M_{\odot}\,\mathrm{s}^{-1}\). Our choice of parameters is motivated by the physical scale of CCSNe; the ranges of our parameter space are informed by models from Liebendorfer et al. (2001); Bruenn et al. (2013); Melson et al. (2015b); Burrows et al. (2020); Walk et al. (2020), and can be found in Table 1. We also classify our simulations by their compactness, \(\xi\), which we define as (O'Connor and Ott, 2011),
\[\xi:=\frac{M/M_{\odot}}{R_{\textsc{PNS}}/\left(20\,\mathrm{km}\right)}. \tag{32}\]
One suite, which we refer to as "low-compactness", has \(\xi=0.7\); the other suite, which we refer to as "high-compactness", has \(\xi=2.8\). We vary \(M_{\textsc{PNS}}\) and \(R_{\textsc{PNS}}\) in such a way as to ensure these two values of compactness.
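As a quick check of (32) against the model parameters (a trivial sketch):

```python
def compactness(m_pns_msun, r_pns_km):
    # Eq. (32): xi = (M / M_sun) / (R_PNS / 20 km)
    return m_pns_msun / (r_pns_km / 20.0)

print(compactness(1.4, 40.0))  # 0.7, the low-compactness suite
print(compactness(2.8, 20.0))  # 2.8, the high-compactness suite
```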
In our model naming convention, we first list whether the model used GR or NR along with the dimensionality (1D or 2D), followed by the mass of the PNS in Solar masses, the radius of the PNS in kilometers, and lastly the shock radius in kilometers; e.g., the 2D GR model with \(M_{\textsc{PNS}}=1.4\,M_{\odot}\), \(R_{\textsc{PNS}}=40\,\mathrm{km}\), and \(R_{\textsc{s}}=150\,\mathrm{km}\) is named GR2D_M1.4_Rpns040_Rs1.50e2. If we compare an NR model with a GR model having the same parameters we may drop that specification from the model name. If no confusion will arise, we may also drop the dimensionality.
The inner radial boundary corresponds to the surface of the PNS. To determine appropriate inner-boundary conditions in the GR case, we assume \(D\) and \(\tau\) follow power laws in radius; from the initial conditions, we extrapolate \(D\) and \(\tau\) in radius using a least-squares method with data from the innermost five elements on the grid to determine the
appropriate exponents. The radial momentum density interior to \(R_{\textsc{PNS}}\) is kept fixed to its initial value. We leave the outer radial boundary values fixed to their initial values for all fields. In the polar direction, we use reflecting boundary conditions at both poles. For the inner-boundary conditions in the non-relativistic case, we assume \(\rho\left(r\right)\propto r^{-3}\) and \(E\left(r\right)\propto r^{-4}\), the latter of which follows from our assumption of \(\rho\propto r^{-3}\), (13), (14), the assumption of \(\Gamma=4/3\), and the assumption of a small velocity at the PNS surface.
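The power-law extrapolation used at the inner boundary amounts to a least-squares fit in log-log space; a minimal sketch, in which the innermost five cell values of a field (e.g., \(D\) or \(\tau\)) determine the exponent and the fit is evaluated at ghost-cell radii:

```python
import numpy as np

def powerlaw_ghost_values(r_inner, q_inner, r_ghost):
    """Fit q(r) = A * r**p to the innermost five cells (log-log least
    squares) and evaluate the fit at the ghost-cell radii r_ghost."""
    p, log_a = np.polyfit(np.log(r_inner[:5]), np.log(q_inner[:5]), 1)
    return np.exp(log_a) * r_ghost**p
```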
For the low-compactness conditions we enforce a radial resolution of 0.5 km per element for all runs, which we found necessary for the shock in an unperturbed model to not deviate by more than 1% over 100 advection times. This is shown in the top panel of Figure 4, which plots the relative deviation of the shock radius from its initial position as a function of \(t/\tau_{\rm ad}\) for runs with different radial resolutions for model GR1D_M1.4_Rpns040_Rs1.20e2. These results suggest that the steady-state is not maintained if the radial resolution is too coarse; e.g., greater than about 1 km.
For the high-compactness conditions we enforce a radial resolution of 0.25 km per element for all runs in order to maintain the same radial resolution of the pressure scale height, \(p\left|dp/dr\right|^{-1}\), as in the low-compactness models, while also ensuring that the shock does not deviate from its initial location by more than 1%. This can be seen in the bottom panel of Figure 4, which plots the same quantity as the top panel, but for model GR1D_M2.8_Rpns020_Rs6.00e1.
To verify that our chosen angular resolution of 64 elements (\(\sim\)2.8\({}^{\circ}\)) is sufficient to resolve the angular variations of the fluid, we run two additional simulations of model NR2D_M2.8_Rpns040_Rs1.20e2, one with 128 angular elements and one with 256 angular elements. From those runs, we extract the best-fit values for the growth rates and oscillation periods (see SS 5) and find them to not significantly differ from those of the 64-angular-element run.
Our simulations are initialized with the steady-state solutions discussed in SS 3, and we take extra care to minimize initial transients. The initial conditions we obtain come from solving Eqs. (14), (20), and (21) in the GR case, and Eqs. (14), (24), and (25) in the NR case, which are not exact solutions of our discretized equations, so transients will be present when simulations are initialized with these solutions. To mitigate the effects of the transients, the fields are set up in 1D with the method described above and then evolved for 100 advection times, which was experimentally determined to be of sufficient duration to quell any transients. We verify that the system has achieved a steady-state by plotting, in Figure 5, the maximum, at each snapshot, of the absolute value of the normalized time derivative of the mass density versus \(t/\tau_{\rm ad}\) for low-compactness model GR1D_M1.4_Rpns040_Rs1.20e2 (top panel) and high-compactness model GR1D_M2.8_Rpns020_Rs6.00e1 (bottom panel); other models exhibit similar behavior. It can be seen that the low-compactness model settles down after approximately 35 advection times, followed by two slight increases near 45 and 50 advection times, then settles down until we end the simulation after 100 advection times. We attribute the two slight increases to limiters activating when the shock crosses an element boundary. The relaxed 1D data is mapped to 2D, a perturbation to the pressure is applied (see below), and the system is evolved. The initial relaxation in 1D removes numerical noise and allows for a smaller perturbation amplitude, leading to a longer-lasting linear regime, which makes for a cleaner signal.
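The relaxation diagnostic plotted in Figure 5 can be formed from successive snapshots; a sketch, assuming uniformly spaced snapshots of the density field and normalization by the local density:

```python
import numpy as np

def max_rel_density_rate(rho, dt):
    """Maximum over the grid of |d(rho)/dt| / rho at each snapshot,
    for rho stored as a (n_snapshots, n_cells) array."""
    drho_dt = np.gradient(rho, dt, axis=0)  # centered differences in time
    return np.max(np.abs(drho_dt) / rho, axis=1)
```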
We seed the instability by imposing a pressure perturbation onto the steady-state flow below the shock,
\[p\left(r,\theta\right)=p\left(r\right)+\delta p\left(r,\theta\right), \tag{33}\]
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Model} & \(M_{\rm PNS}\left[M_{\odot}\right]\) & \(R_{\rm PNS}\left[\rm km\right]\) & \(R_{\rm s}\left[\rm km\right]\) & \(\xi\) \\ \hline M1.4\_Rpns040\_Rs1.20e2 & 1.4 & 40 & 120 & 0.7 \\ M1.4\_Rpns040\_Rs1.50e2 & 1.4 & 40 & 150 & 0.7 \\ M1.4\_Rpns040\_Rs1.75e2 & 1.4 & 40 & 175 & 0.7 \\ \hline M2.8\_Rpns020\_Rs6.00e1 & 2.8 & 20 & 60 & 2.8 \\ M2.8\_Rpns020\_Rs7.00e1 & 2.8 & 20 & 70 & 2.8 \\ \hline \end{tabular} Note. – Model parameters chosen for the 5 models. All models were run with both GR and NR. The first three rows correspond to the low-compactness models and the last two rows correspond to the high-compactness models.
\end{table}
Table 1: Model Parameters
with \(p\left(r\right)\) the steady-state pressure at radius \(r\), and where \(\delta p\left(r,\theta\right)/p\left(r\right)\) is defined in a scale-independent manner as
\[\frac{\delta p\left(r,\theta\right)}{p\left(r\right)}:=10^{-6}\ \exp\left(\frac{-\left(\eta-\eta_{\rm c}\right)^{2}}{2\,\sigma^{2}} \right)\,\cos\theta, \tag{34}\]
with \(\eta\) defined in (31), and where \(\eta_{\rm c}=0.75\) and \(\sigma=0.05\). The perturbation is not allowed to extend into the pre-shock flow. We opted to perturb the post-shock flow as opposed to the pre-shock flow after having tried various pre-shock and post-shock perturbations and finding that pre-shock perturbations generate noise when they cross the shock front, thus creating a noisy signal and making it more difficult to extract quantities of interest. Similarly, we choose a Gaussian profile because the smoothness of the profile generates less noise than, e.g., a top-hat profile. The \(\cos\theta\) factor is meant to excite an \(\ell=1\) Legendre mode of the SASI. While this perturbation method does not exactly mimic the hydrodynamics inside a CCSN, it is sufficient to study the SASI in the linear regime.
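Applying (33)-(34) to the relaxed 1D profile might look as follows; a sketch in which the pressure is stored as p[radial, angular] and the pre-shock cut is enforced with a mask:

```python
import numpy as np

def perturb_pressure(p, r, theta, r_pns, r_sh,
                     eta_c=0.75, sigma=0.05, amp=1.0e-6):
    """Apply Eqs. (33)-(34): a Gaussian-in-eta, cos(theta) pressure
    perturbation restricted to the post-shock flow (r < R_s)."""
    eta = (r - r_pns) / (r_sh - r_pns)              # Eq. (31)
    dp = amp * np.exp(-(eta - eta_c)**2 / (2.0 * sigma**2))
    dp = np.where(r < r_sh, dp, 0.0)                # no pre-shock part
    return p * (1.0 + dp[:, None] * np.cos(theta)[None, :])
```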
## 5 Results and Discussion
Here we discuss our analysis methods and compare the SASI growth rates and oscillation periods for our 5 models in GR and NR.
### Analysis Methods
To extract SASI growth rates from our simulations, we follow Blondin & Mezzacappa (2006) and expand a quantity affected by the perturbed flow, \(A\left(t,r,\theta\right)\), in Legendre polynomials,
\[A\left(t,r,\theta\right)=\sum_{\ell=0}^{\infty}G_{\ell}\left(t,r\right)\,P_{ \ell}\left(\cos\theta\right), \tag{35}\]
Figure 4: Relative deviation of the shock radius from its initial location versus \(t/\tau_{\rm ad}\) for different radial resolutions \(dr\) for the low-compactness model GR1D_M1.4_Rpns040_Rs1.20e2 (top panel) and the high-compactness model GR1D_M2.8_Rpns020_Rs6.00e1 (bottom panel).
where we normalize the \(P_{\ell}\) such that
\[\int\limits_{-1}^{1}P_{\ell}\left(x\right)\,P_{\ell^{\prime}}\left(x\right)\,dx= \delta_{\ell\ell^{\prime}}, \tag{36}\]
with \(\delta_{\ell\ell^{\prime}}\) the Kronecker delta function. Then,
\[G_{\ell}\left(t,r\right):=\int\limits_{-1}^{1}A\left(t,r,\theta\right)\,P_{ \ell}\left(\cos\theta\right)\,d\left(\cos\theta\right)=\int\limits_{0}^{\pi} A\left(t,r,\theta\right)\,P_{\ell}\left(\cos\theta\right)\,\sin\theta\,d\theta. \tag{37}\]
After experimenting with several quantities, we decided to use the quantity proposed by Scheck et al. (2008),
\[A\left(t,r,\theta\right):=\frac{1}{\sin\theta}\,\frac{\partial}{\partial\theta }\left(v^{\theta}\left(t,r,\theta\right)\,\sin\theta\right), \tag{38}\]
where \(v^{\theta}\) is the fluid velocity in the polar direction as measured by an Eulerian observer, having units of \(\mathrm{rad\,s^{-1}}\). With \(G_{\ell}\), we compute the power in the \(\ell\)-th Legendre mode, \(H_{\ell}\left(t\right)\), by integrating over a shell below the shock, bounded from below by \(r_{a}=0.8\,R_{\mathrm{s}}\) and from above by \(r_{b}=0.9\,R_{\mathrm{s}}\),
\[H_{\ell}\left(t\right):=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi}\int\limits _{r_{a}}^{r_{b}}\left[G_{\ell}\left(t,r\right)\right]^{2}\left[\psi\left(r \right)\right]^{6}r^{2}\,\sin\theta\,dr\,d\theta\,d\varphi=4\pi\int\limits_{r_ {a}}^{r_{b}}\left[G_{\ell}\left(t,r\right)\right]^{2}\left[\psi\left(r\right) \right]^{6}r^{2}\,dr. \tag{39}\]
For the Newtonian runs, \(\psi=1\). We experimented with different values of \(r_{a}\) and \(r_{b}\) and found that a thin shell just below the shock gave the cleanest signal.
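Equations (36)-(39) amount to a normalized-Legendre projection followed by a radial shell integral; a minimal sketch using NumPy's Legendre polynomials, with \(A\) given on an \((n_{r},n_{\theta})\) grid and \(\psi=1\) recovering the Newtonian case:

```python
import numpy as np
from numpy.polynomial import legendre

def g_ell(a, theta, ell):
    """Eq. (37): project A(r, theta) onto the normalized P_ell."""
    x = np.cos(theta)
    p = legendre.Legendre.basis(ell)(x) * np.sqrt((2 * ell + 1) / 2.0)
    # x = cos(theta) decreases with theta, hence the sign flip
    return -np.trapz(a * p[None, :], x, axis=1)

def h_ell(a, r, theta, psi, ell, r_sh):
    """Eq. (39): power in mode ell within the shell [0.8, 0.9] R_s."""
    g = g_ell(a, theta, ell)
    shell = (r >= 0.8 * r_sh) & (r <= 0.9 * r_sh)
    integrand = g**2 * psi**6 * r**2
    return 4.0 * np.pi * np.trapz(integrand[shell], r[shell])
```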
Figure 5: Maximum, at each snapshot, of the absolute value of the normalized time derivative of the mass density versus \(t/\tau_{\mathrm{ad}}\) for low-compactness model GR1D_M1.4_Rpns040_Rs1.20e2 (top panel) and high-compactness model GR1D_M2.8_Rpns020_Rs6.00e1 (bottom panel).
To extract the SASI growth rate and oscillation period from \(H_{1}\), we begin by fitting the simulation data to the function (Blondin & Mezzacappa, 2006)
\[F_{1}\left(t\right)=F\,e^{2\,\omega\,t}\,\sin^{2}\left(\frac{2\pi}{T}\,t+\delta \right), \tag{40}\]
where \(\omega\) is the growth rate of the SASI, \(T\) is the oscillation period of the SASI, \(F\) is an amplitude, and \(\delta\) is a phase offset.
We fit the data to the model using the Levenberg-Marquardt nonlinear least-squares method (e.g., see More, 1978), as implemented in scipy's curve_fit function, which also returns an estimate of the uncertainty of the fit via the diagonal entries of the covariance matrix; we use this to define the uncertainty in the growth rate. The temporal extent over which we perform the fit is defined to begin after one SASI oscillation and to end after seven SASI oscillations, where, for simplicity, we use (26) to define the period of one SASI oscillation.
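A sketch of this fit; here t and h1 are the times and \(\ell=1\) powers restricted to the fitting window, and the initial-guess vector p0 is a placeholder:

```python
import numpy as np
from scipy.optimize import curve_fit

def sasi_model(t, f0, omega, period, delta):
    # Eq. (40)
    return f0 * np.exp(2.0 * omega * t) \
              * np.sin(2.0 * np.pi * t / period + delta)**2

def fit_growth_rate(t, h1, p0):
    """Least-squares fit of H_1(t); returns the growth rate and its
    1-sigma uncertainty from the diagonal of the covariance matrix."""
    popt, pcov = curve_fit(sasi_model, t, h1, p0=p0, maxfev=20000)
    return popt[1], np.sqrt(pcov[1, 1])
```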
The period we report is obtained by performing a Fourier analysis: We integrate the lateral flux in the radial direction,
\[F_{\theta}^{r}=\sqrt{\gamma}\,\rho\,v^{r}\,v_{\theta}\times\alpha\,h\,W^{2}\stackrel{{\rm NR}}{{=}}\sqrt{\gamma}\,\rho\,v^{r}\,v_{\theta}, \tag{41}\]
from \(r_{a}\) to \(r_{b}\) (defined as in the computation of the \(H_{\ell}\)), integrate the result over \(\mathbb{S}^{2}:=\left\{\left(\theta,\varphi\right)\in\mathbb{R}^{2}\big{|} \theta\in\left[0,\pi\right],\varphi\in\left[0,2\pi\right)\right\}\), and then take the Fourier transform of that result using the fft tool from scipy. From the result, \(\widetilde{F}_{\theta}^{r}(\widetilde{T})\), we define the period of the SASI as the value of \(\widetilde{T}\) corresponding to the peak of the Fourier amplitudes, and we define the uncertainty in the period as the full width at half maximum (FWHM) of the Fourier amplitudes. The FFT is computed over the same time interval as the aforementioned fit. (We do not use the period returned from curve_fit because, for some models, pollution from higher-order modes spoils the ability of our fitting function to capture the \(\ell=1\) mode.)
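The period extraction might be implemented as follows; a sketch assuming a uniformly sampled, shell- and angle-integrated lateral flux, with the uncertainty taken as a simple width-at-half-maximum proxy in period space:

```python
import numpy as np
from scipy.fft import rfft, rfftfreq

def sasi_period(signal, dt):
    """Dominant period and a FWHM-based uncertainty from the amplitude
    spectrum of the integrated lateral flux (no windowing applied)."""
    amp = np.abs(rfft(signal - np.mean(signal)))[1:]   # drop the DC bin
    freq = rfftfreq(len(signal), dt)[1:]
    k = np.argmax(amp)
    half = freq[amp >= 0.5 * amp[k]]                   # bins above half max
    return 1.0 / freq[k], 1.0 / half.min() - 1.0 / half.max()
```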
### Overall Trends
Here we discuss trends that appear across both the low-compactness models and the high-compactness models. We summarize our results in Table 2, which lists the model, the best-fit oscillation period and uncertainty, the best-fit growth rate and uncertainty, the product of the best-fit growth rate and best-fit oscillation period, the period assuming an advective-acoustic mechanism (estimated using (26)), and the period assuming a purely acoustic mechanism (estimated using (29)). Due to data uncertainties, and the similar values provided by the two estimates, we are unable to discern whether the SASI is governed by an advective-acoustic mechanism or by a purely acoustic mechanism using these simple estimates.
As a first general trend, we see that, for a given PNS radius, the oscillation period increases as the shock radius increases, as seen clearly in the second column of Table 2. This is expected, for as the shock radius increases, the waves supported by the fluid must traverse a larger region, therefore each cycle will take longer. As a second general trend, we observe that, for a given PNS radius, the growth rate decreases as the shock radius increases.
We note that the power injection rate (i.e., the growth rate) per SASI cycle is approximately constant for all low-compactness models and all high-compactness, NR models, and also approximately constant, but with a different mean, for the high-compactness, GR models. Phrased another way, \(\omega\,T\) is approximately constant for all models within one of these two groups. This can be seen from the fourth column of Table 2: The values for the low-compactness models and the high-compactness, NR models have a mean of 1.58 with a small scatter, \(1.58^{+0.20}_{-0.13}\), while the high-compactness, GR models have a mean of 1.01 with a small scatter, \(1.01^{+0.03}_{-0.03}\).
### Impact of GR
Next we discuss the impact of GR on the oscillation period and the growth rate.
#### 5.3.1 Oscillation Period
First we discuss how the oscillation period varies with PNS compactness and shock radius. In Figure 6, we plot the amplitudes of the Fourier transform of (41), \(\widetilde{F}_{\theta}^{r}\), versus \(\widetilde{T}\) for all models, where \(\widetilde{T}\) is defined as the inverse of the frequency determined by the FFT. No windowing was applied when computing the FFT of the signal. We see that the difference in the optimal period, \(T\) (defined as the \(\widetilde{T}\) associated with the largest Fourier amplitude), between NR
and GR increases with increasing compactness, with the GR period consistently longer than the NR period. This can be explained by differences in the structure of the post-shock solutions; in particular, the signal-speeds, which are shown in Figure 3. Both the radial acoustic and advective signal-speeds, as well as the angular acoustic signal-speed, are consistently smaller in GR. Because of this, the period is longer for a given model when GR is used, regardless of whether the SASI is governed by an advective-acoustic or a purely acoustic cycle.
In Figure 7, we plot the estimates for the period provided by Eqs. (26) and (29), along with the period and its uncertainty provided by the Fourier analysis, versus initial shock radius for all the models. The NR and GR models, along with their associated estimates, follow the same general trends. For the low-compactness models, we find that the ratio between \(T_{\mbox{\tiny GR}}\) and \(T_{\mbox{\tiny NR}}\) is as large as 1.14, the relative difference between \(T_{\mbox{\tiny GR}}\) and the advective-acoustic estimate, (26), is smaller than 13%, and the relative difference between \(T_{\mbox{\tiny GR}}\) and the purely acoustic estimate, (29), is smaller than 21%. For the high-compactness models, we find that the ratio between \(T_{\mbox{\tiny GR}}\) and \(T_{\mbox{\tiny NR}}\) is as large as 1.29, the relative difference between \(T_{\mbox{\tiny GR}}\) and the advective-acoustic estimate, (26), is smaller than 17%, and the relative difference between \(T_{\mbox{\tiny GR}}\) and the purely acoustic estimate, (29), is smaller than 13%.
#### 5.3.2 Growth Rate
Next we discuss how the growth rate varies with PNS compactness and shock radius. In Figure 8, we plot the power in the first Legendre mode, \(H_{1}\), during the linear regime, as a function of time in milliseconds for all our models. For all models displayed here, the shock deviates from spherical symmetry by less than 10%, and we consider this sufficient for characterizing the evolution as being in the linear regime.
We see a clear trend in the growth rate: the GR models display a slower SASI growth rate when compared to their NR counterparts. In Figure 8, it can be seen that, even for the low-compactness models, GR has a non-negligible effect on the growth rate. This effect is a function of shock radius, with a smaller shock radius leading to a larger difference in the NR and GR growth rates. This effect is drastically enhanced when going from the low-compactness models to the high-compactness models. In the high-compactness models, the power in the \(\ell=1\) mode is some five orders of magnitude lower in the GR case by the end of the simulation (see Figure 8). The effects of GR can also be seen in Figure 9, which plots the growth rate for all models as a function of \(R_{\rm g}\). Again, in all cases, the growth rate is slower
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Model} & \(T\pm\Delta T\) [ms] & \(\omega\pm\Delta\omega\) [ms\({}^{-1}\)] & \(\omega\,T\) & \(T_{\rm aa}\) [ms] & \(T_{\rm ac}\) [ms] \\ \hline NR\_M1.4\_Rpns040\_Rs1.20e2 & 23.7044 \(\pm\) 11.9819 & 0.0733 \(\pm\) 0.0011 & 1.7368 & 20.8348 & 26.2646 \\ GR\_M1.4\_Rpns040\_Rs1.20e2 & 27.0412 \(\pm\) 14.3661 & 0.0578 \(\pm\) 0.0011 & 1.5623 & 23.6236 & 26.9372 \\ NR\_M1.4\_Rpns040\_Rs1.50e2 & 37.1673 \(\pm\) 22.3333 & 0.0409 \(\pm\) 0.0005 & 1.5209 & 34.3279 & 36.6791 \\ GR\_M1.4\_Rpns040\_Rs1.50e2 & 40.2225 \(\pm\) 22.3889 & 0.0360 \(\pm\) 0.0004 & 1.4491 & 38.6747 & 37.4293 \\ NR\_M1.4\_Rpns040\_Rs1.75e2 & 51.1234 \(\pm\) 23.4707 & 0.0300 \(\pm\) 0.0003 & 1.5341 & 47.7212 & 46.0841 \\ GR\_M1.4\_Rpns040\_Rs1.75e2 & 57.6475 \(\pm\) 25.9759 & 0.0265 \(\pm\) 0.0003 & 1.5274 & 53.5372 & 46.8924 \\ \hline NR\_M2.8\_Rpns020\_Rs6.00e1 & 6.1049 \(\pm\) 4.1534 & 0.2910 \(\pm\) 0.0085 & 1.7767 & 5.2343 & 6.5656 \\ GR\_M2.8\_Rpns020\_Rs6.00e1 & 7.5868 \(\pm\) 2.4527 & 0.1365 \(\pm\) 0.0042 & 1.0360 & 8.6607 & 7.2635 \\ NR\_M2.8\_Rpns020\_Rs7.00e1 & 7.9656 \(\pm\) 3.6955 & 0.1897 \(\pm\) 0.0054 & 1.5113 & 7.4177 & 8.2693 \\ GR\_M2.8\_Rpns020\_Rs7.00e1 & 10.2503 \(\pm\) 3.2875 & 0.0954 \(\pm\) 0.0026 & 0.9781 & 12.1099 & 9.0178 \\ \hline \end{tabular} Note. – Oscillation periods, growth rates, and their uncertainties for all ten models having the same accretion rate of \(0.3\,M_{\odot}\,\mbox{s}^{-1}\). The first six are the low-compactness models; the last four are the high-compactness models. The uncertainties for the growth rates are defined as the square roots of the diagonal entries of the covariance matrix corresponding to the growth rate. The uncertainties for the oscillation period are defined as the full-width half-maximum values of the Fourier amplitudes (see Figure 6). The fourth column shows the product of the best-fit growth rate and the best-fit oscillation period. The fifth column shows the estimate for the period assuming an advective-acoustic origin of the SASI ((26)), and the sixth column shows the estimate for the period assuming a purely acoustic origin of the SASI ((29)), where we use \(R_{\rm ac}=0.85\,R_{\rm s}\), the midpoint of the shell in which we compute the power (see § 5).
\end{table}
Table 2: Results
for the GR models, and the difference between GR and NR growth rates increases with decreasing shock radius and increasing PNS mass (i.e., under more relativistic conditions), with the ratio becoming as small as 0.47.
In Figure 10, we again plot \(H_{1}\), but this time as a function of \(t/T\). (When interpreting this figure, note that \(T\) is different for each model.) This figure demonstrates that the growth rate per oscillation period is approximately constant for a given physical model (NR or GR) and compactness. Further, all the low-compactness models (both NR and GR) and the NR high-compactness models reach about the same total power of \(\sim\)\(10^{25}\) after ten oscillations, while the high-compactness GR models reach a maximum of \(\sim\)\(10^{23}\), demonstrating that GR modifies both the oscillation period and the growth rate, but modifies them in such a way as to keep \(\omega\,T\) roughly constant for each of these groupings of models.
## 6 Summary, Conclusion, and Future Work
We examined the effect of GR on 5 idealized, axisymmetric models of the standing accretion shock instability (SASI) by performing a parameter study in which we systematically varied the initial shock radius, along with the mass and radius of the proto-neutron star (PNS); i.e., the compactness of the PNS. We compared these runs by measuring the growth rate and oscillation period of the SASI for all models, which were run once with NRHD and Newtonian gravity and once with GRHD and GR gravity. We set up our simulations to excite a clean \(\ell=1\) Legendre mode, computed the power of the SASI in that mode within a thin shell just below the shock, and, following Blondin & Mezzacappa (2006), fit the resulting signal to (40). From the fits, we computed the growth rates and their uncertainties, and using the FFT, we computed the oscillation periods and their uncertainties.
Figure 6: Amplitudes of Fourier transforms of (41), normalized by their largest values, as functions of \(\widetilde{T}\) in ms, where \(\widetilde{T}\) is the inverse of the frequency returned by the FFT. From left to right, columns correspond to models with increasing shock radii and rows correspond to low-compactness (top) and high-compactness (bottom) models. NR results are shown in blue and GR results are shown in orange.
For the low-compactness (\(\xi=0.7\)) models, we found that the period of the SASI in the GR case is larger than that of the SASI in the NR case, with a ratio between the two as large as \(\sim\)1.14. For the high-compactness (\(\xi=2.8\)) models, we found that the period of the SASI in the GR case is larger than that of the SASI in the NR case, with a ratio between the two as large as \(\sim\)1.29, a significantly larger amount. We explained these differences as resulting from differences in the post-shock flow structure for the GR and NR setups. Our numerically-determined oscillation periods are consistent with simple estimates assuming both advective-acoustic and purely acoustic mechanisms; however, due to uncertainties in the numerically-determined oscillation periods, we cannot discern between the two with our data. For all models, we found that the growth rate is slower when GR is used, and significantly slower for the high-compactness models, with the ratio between the GR and NR cases as low as 0.47. We found that both the growth rate and the oscillation period are practically independent of the accretion rate for the range of parameter space we considered.
The connection between our results and the results from realistic core-collapse supernova (CCSN) simulations can be made by considering the trends _across_ all of the models considered here.
First, our results suggest that CCSN simulations based on Newtonian hydrodynamics and Newtonian gravity may fail to predict correctly the growth rate of the SASI and its period of oscillation as conditions below the shock become increasingly relativistic. Under such conditions, the growth rate in the Newtonian case may be overestimated by a factor of two, or more, relative to the GR case. Additionally, the period may be underestimated by about 20%.
Second, as the conditions we considered became increasingly relativistic, the SASI growth rate continued to increase and its period continued to decrease. Thus, our studies provide theoretical support for the conclusions reached in the study of the SASI development in higher-mass progenitors (see, e.g., Hanke et al., 2013; Matsumoto et al., 2022), where the time scales for the development of convection may be long and where the SASI, able to develop on shorter
Figure 7: SASI oscillation period and uncertainty in ms as a function of PNS mass and shock radius in km, as determined by the FFT (see text for details). NR results are shown in blue and GR results are shown in orange. Crosses correspond to estimates of the period assuming the SASI is governed by an advective-acoustic mechanism ((26)) using the GR data and the GR part of (26) (orange) and using the NR data and the NR part of (26) (blue). Filled circles correspond to estimates of the period assuming the SASI is governed by a purely acoustic mechanism ((29)).
time scales, is able to provide support to the stalled shock, potentially sustaining neutrino heating, and, in turn, the development of neutrino-driven convection, with all of their anticipated benefits for generating an explosion.
Given the substantial impact of GR we find in our simulations, we stress the importance of GR treatments in any future studies aiming to understand the SASI in a CCSN context.
Our results also suggest that CCSN simulations based on Newtonian hydrodynamics and Newtonian gravity may fail to predict correctly the emission of gravitational waves by the SASI (specifically, its frequency and amplitude), a primary target of gravitational wave astronomy given its anticipated existence in that part of frequency space where current-generation gravitational wave detectors such as LIGO and VIRGO are most sensitive. In addition, efforts to discern between contributions from different SASI modes - e.g., sloshing and spiral - may be affected, as well.
In light of the results presented here, further analysis is well motivated. Our assumption of axisymmetry will need to be lifted, since the SASI is known to have non-axisymmetric modes (Blondin & Mezzacappa, 2007; Blondin & Shaw, 2007; Fernandez, 2010, 2015). A full three-dimensional comparison of the SASI in NR and GR, such as the one performed here for axisymmetry, should be conducted. Further analyses may also include adding a third type of model that uses NRHD and a GR monopole potential, similar to what is done in several CCSN simulation codes (e.g., Rampp & Janka, 2002; Kotake et al., 2018; Skinner et al., 2019; Bruenn et al., 2020), in order to discern whether the use of an effective potential to capture the stronger gravitational fields present in GR is able to better capture the SASI growth rate and subsequent evolution. Of course, even if it does, nothing can replace a true GR implementation, as we have done here, and as must be done in future CCSN models.
Figure 8: \(H_{1}\) in cgs units versus \(t\) in ms. The layout of the panels is the same as in Figure 6: From left to right, columns correspond to models with increasing shock radii and rows correspond to low-compactness (top) and high-compactness (bottom) models. NR results are shown in blue and GR results are shown in orange. Note that the horizontal axis limits are different in each panel.
## 7 Data Availability
The data underlying this paper will be shared on reasonable request to the corresponding author.
SJD, EE, and AM acknowledge support from the National Science Foundation's Gravitational Physics Program under grant NSF PHY 1806692, and EE, AM, and JLB acknowledge support from the National Science Foundation's Gravitational Physics Program under grant 2110177. This research was supported in part by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This work was conducted in part using the resources of the Advanced Computing Center for Research and Education at Vanderbilt University, Nashville, TN.
thornado(Pochik et al., 2021), AMReX(Zhang et al., 2019), Matplotlib(Hunter, 2007), NumPy(Harris et al., 2020), SciPy(Virtanen et al., 2020), yt(Turk et al., 2011)
|
2310.04366 | Swordfish: A Framework for Evaluating Deep Neural Network-based
Basecalling using Computation-In-Memory with Non-Ideal Memristors | Basecalling, an essential step in many genome analysis studies, relies on
large Deep Neural Networks (DNNs) to achieve high accuracy. Unfortunately,
these DNNs are computationally slow and inefficient, leading to considerable
delays and resource constraints in the sequence analysis process. A
Computation-In-Memory (CIM) architecture using memristors can significantly
accelerate the performance of DNNs. However, inherent device non-idealities and
architectural limitations of such designs can greatly degrade the basecalling
accuracy, which is critical for accurate genome analysis. To facilitate the
adoption of memristor-based CIM designs for basecalling, it is important to (1)
conduct a comprehensive analysis of potential CIM architectures and (2) develop
effective strategies for mitigating the possible adverse effects of inherent
device non-idealities and architectural limitations.
This paper proposes Swordfish, a novel hardware/software co-design framework
that can effectively address the two aforementioned issues. Swordfish
incorporates seven circuit and device restrictions or non-idealities from
characterized real memristor-based chips. Swordfish leverages various
hardware/software co-design solutions to mitigate the basecalling accuracy loss
due to such non-idealities. To demonstrate the effectiveness of Swordfish, we
take Bonito, the state-of-the-art (i.e., accurate and fast), open-source
basecaller as a case study. Our experimental results using Swordfish show that
a CIM architecture can realistically accelerate Bonito for a wide range of real
datasets by an average of 25.7x, with an accuracy loss of 6.01%. | Taha Shahroodi, Gagandeep Singh, Mahdi Zahedi, Haiyu Mao, Joel Lindegger, Can Firtina, Stephan Wong, Onur Mutlu, Said Hamdioui | 2023-10-06T16:37:03Z | http://arxiv.org/abs/2310.04366v2 | # Swordfish: A Framework for Evaluating Deep Neural Network-based Basecalling using Computation-In-Memory with Non-Ideal Memristors
###### Abstract.
_Basecalling_, an essential step in many genome analysis studies, relies on large Deep Neural Networks (DNNs) to achieve high accuracy. Unfortunately, these DNNs are computationally slow and inefficient, leading to considerable delays and resource constraints in the sequence analysis process. A Computation-In-Memory (CIM) architecture using memristors can significantly accelerate the performance of DNNs. However, inherent device non-idealities and architectural limitations of such designs can greatly degrade the basecalling accuracy, which is critical for accurate genome analysis. To facilitate the adoption of memristor-based CIM designs for basecalling, it is important to (1) conduct a comprehensive analysis of potential CIM architectures and (2) develop effective strategies for mitigating the possible adverse effects of inherent device non-idealities and architectural limitations. This paper proposes Swordfish, a novel hardware/software co-design framework that can effectively address the two aforementioned issues. Swordfish incorporates seven circuit and device restrictions or non-idealities from characterized real memristor-based chips. Swordfish leverages various hardware/software co-design solutions to mitigate the basecalling accuracy loss due to such non-idealities. To demonstrate the effectiveness of Swordfish, we take Bonito, the state-of-the-art (i.e., accurate and fast), open-source basecaller as a case study. Our experimental results using Swordfish show that a CIM architecture can realistically accelerate Bonito for a wide range of real datasets by an average of 25.7\(\times\), with an accuracy loss of 6.01%.
This paper proposes Swordfish, a novel hardware/software co-design framework that can effectively address the two aforementioned issues. Swordfish incorporates seven circuit and device restrictions or non-idealities from characterized real memristor-based chips. Swordfish leverages various hardware/software co-design solutions to mitigate the basecalling accuracy loss due to such non-idealities. To demonstrate the effectiveness of Swordfish, we take Bonito, the state-of-the-art (i.e., accurate and fast), open-source basecaller as a case study. Our experimental results using Swordfish show that a CIM architecture can realistically accelerate Bonito for a wide range of real datasets by an average of 25.7\(\times\), with an accuracy loss of 6.01%.
+
Footnote †: 1}\)TU Delft
+
Footnote †: 2}\)ETH Zurich
+
Footnote †: 2}\)ETH Zurich
## 1. Introduction
_Basecalling_ is the first computational step required to translate noisy electrical signals generated by modern sequencing machines to strings of DNA nucleotide bases (i.e., [A, C, G, T)], also known as DNA reads or simply reads [6, 12, 53, 60, 98, 107, 127, 131, 133]. The accuracy of basecalling directly affects the overall accuracy and the computational effort (in terms of required algorithms and their complexity and runtimes) of subsequent genome analysis steps. The speed of basecalling also determines how fast one can run through all computational steps of a genomic study [107, 120, 134]. Therefore, accurate and fast basecalling is critical for advancing genomic studies that hold the key to unlocking the potential of precision medicine, facilitating virus surveillance, and driving advancements in healthcare and science [5, 6, 7, 13, 14, 15, 28, 29, 34, 41, 42, 62, 67, 84, 87, 103, 137, 142].
Current state-of-the-art (SotA) basecallers leverage Deep Neural Networks (DNNs) to achieve high accuracy [31, 96, 105, 120, 140, 149]. However, SotA DNN-based basecallers encounter different shortcomings when implemented using different approaches. Specifically, DNN-based basecaller designs on Central Processing Units (CPUs) and Graphics Processing Units (GPUs) face multiple major shortcomings: (1) they are computationally intensive and slow [107, 120, 134], (2) they require extensive data movement between the processor and memory [16, 17, 79], and (3) they are limited by the use of costly hardware, such as expensive SRAM memories that require 6 transistors for storing only 1 bit of information [30, 102]. When implemented on a hardware accelerator, these DNN-based basecallers face two other limitations: (1) They rely on costly floating-point (FP) computations, which place high demands on the required system's memory bandwidth and compute units with FP capability. This makes hardware acceleration difficult due to the large number and size of neural network model parameters. (2) They use costly Machine Learning (ML) techniques such as skip connections1[96, 123, 140], leading to added computation, memory, and storage overheads (e.g., to store the activation parameters that are fed to the last layers of the NN) [120]. Therefore, over the past decade, both industry and academia [27, 68, 101, 111, 115, 119] have explored the use of Computation-In-Memory (CIM)2 using memristor-based devices to accelerate DNNs.
Footnote 1: Skip connection is an ML technique that allows skipping a few neural network layers and forwarding the output to the input of a layer further ahead.
Footnote 2: Interchangeably, also referred to as Processing-In-Memory (PIM) [85].
This growing interest in using CIM for resolving the shortcomings of DNNs is driven by two main factors: (1) the potential of the CIM paradigm to process data where it resides to reduce the large performance and energy overheads of data movement and (2) the analog operational properties of these nanoscale emerging technologies (e.g., memristors) that intrinsically support efficient Vector-Matrix-Multiplication (VMM), multiple of which are used to implement a Matrix-Matrix-Multiplication (MMM) that is the most dominant operation in DNNs. However, the memristor-based CIM solutions for basecalling can greatly degrade the DNN inference accuracy due to (1) the limited quantization levels supported by memristor devices [27, 111] and (2) non-idealities of memristive devices and circuits used to adopt memristor-based memory arrays, such as sneak paths [48, 118] and the non-linearity of peripheral circuitry [58, 83, 147]. To propose viable solutions for accelerating the large-scale DNN-based basecallers, these aspects must be considered at all computing stack layers, i.e., application, architecture, and device. Such considerations are only possible with a framework capable of evaluating the impact of the non-idealities in memristor-based CIM architecture on the end-to-end basecalling accuracy.
This framework should also be able to account for the overhead that the solutions to overcome the accuracy loss may bring.
To this end, we propose _Swordfish_, a modular and extensible hardware/software co-design framework that allows us to (1) evaluate the impact of memristor non-idealities and CIM limitations on the accuracy and performance of basecalling and (2) investigate potential mitigation techniques and measure their effect on accuracy for each non-ideality (**Contribution #1**). Swordfish is used to investigate the acceleration of basecalling via emerging computing paradigms and technologies. Specifically, with Swordfish, we comprehensively investigate the potential of accurate acceleration of a SotA basecaller (Bonito) on a SotA CIM architecture (PUMA [(9)]) by accounting for the non-idealities of the underlying devices and technologies of the underlying architecture, for the first time (**Contribution #2**). Swordfish integrates real-world applications with multiple critical comparison metrics, distinct mitigation strategies to tackle the challenges of novel hardware, and comprehensive real measurements to guide the modeling of memristors. Our evaluations using Swordfish show that on a wide range of real genome datasets, PUMA accelerates Bonito, a SotA basecaller, by an average of 25.7\(\times\) realistically (i.e., the average throughput improvement is 25.7\(\times\) when we consider essential mitigation techniques to prevent huge accuracy loss). This performance still comes at the cost of a 6.01% accuracy loss (Section 5). Our evaluations also yield several key suggestions and recommendations for DNN, hardware, and system designers of future emerging accelerators with memristors for DNN-based basecallers and other applications that have two most important metrics (e.g., accuracy and performance) to consider in their evaluation (**Contribution #3**). Specifically, our investigation using Swordfish results in multiple unique insights: (1) Our results challenge the prevalent assumption that DNN-based applications will automatically succeed on memristor-based CIM due to inherent redundancy in large neural networks, (2) combining mitigation techniques at only one abstraction level (e.g., circuit or system level) does not necessarily improve the accuracy loss as they can potentially go against each other, and (3) combining multiple mitigation techniques at the circuit and system levels can offset the accuracy loss induced by non-idealities significantly.
## 2. Background and Motivation
This section briefly discusses the necessary background and motivation for this work. We refer the reader to comprehensive reviews [(85; 6; 18; 45; 98)] for more details.
### Genome Sequencing Pipeline
The genome sequencing pipeline consists of computational steps we employ to acquire genome sequences as strings of DNA characters (i.e., {A, C, G, T}) [(127; 6; 131; 60; 98; 107; 127; 133)] for subsequent analysis in bioinformatics, e.g., cell type identification, identification of marker genes, and variant detection.
Although most currently available data and tools in the genomics realm are for short reads [(20; 39)] (mainly produced by Illumina sequencers), working with highly accurate long genome sequences is generally favorable as they reduce the computational cost of reconstructing the genome. For this reason, there is large momentum towards accurate long-read sequencing [(6)]. Our work focuses on finding solutions and analysis tools that target long reads, while also not discarding tools (e.g., GenAx [(39)] and GenASM [(20)]) designed for short reads. A leading method for long-read sequencing is nanopore sequencing technology. Nanopore sequencers [(90; 93; 94)] translate raw signal squiggles into bases (A, C, G, T) using complex neural networks. Today, Oxford Nanopore Technologies (ONT) produces the most commonly used sequencers based on nanopore technology.
Fig. 1 illustrates the nanopore genome sequencing pipeline [(107)] and the placement and execution time breakdown of each of its steps. We use SotA tools for each step and run the tool on the datasets described in Section 4.
We make two main observations. First, basecalling is the first computational step in the pipeline. Second, basecalling dominates the execution time of a single run in the pipeline. This step makes up more than 40% of the entire execution time. Our empirical observation aligns with those in prior works [(107; 33; 81)].
### Basecalling
Basecalling is responsible for converting raw electrical signals produced by a nanopore sequencer to digital genome symbols, i.e., [A, C, G, T] [(127; 12; 53; 60)]. Recent works [(92; 95; 96; 134)] heavily investigate the use of DNNs for basecalling as they can provide high accuracy than Hidden Markov Model (HMM) based techniques [(91)].
There are generally two approaches for improving the accuracy and/or performance of a basecaller: 1) software-based and 2) hardware-based. Software-based methods propose new algorithms (e.g., DNNs [(95; 140; 96)] instead of HMMs [(91)]) or faster and/or smaller DNN architectures [(120; 140)]. Hardware-based approaches propose various hardware platforms for the target algorithm (i.e., DNN or HMM) to improve performance with (hopefully) small impact on accuracy [(81; 120)].
We observe four main shortcomings in SotA basecallers, which limit their execution time and/or hardware acceleration:
* SotA basecallers are slow and energy inefficient. For example, Guppy basecalls 3 Giga basepairs (Gbps) in \(\sim\)6 hours while a following step in the genomics pipeline, such as read mapping using minimap2 [(71)] takes only \(\sim\)0.11 hours [(120)].
* SotA basecallers use DNN models with costly skip connections [(123)]. For example, Bonito needs an additional \(\sim\)21% of model parameters (along with associated memory and storage overheads) for skip connections and requires additional computation on them. Note that a skip connection permits bypassing certain layers within the neural network, transmitting the output of one layer as the input to subsequent layers [(123)]. These connections are costly because they (1) typically force the network to perform additional computation, for example, to match the channel sizes, (2) incur extra memory and storage overhead, as they require storing the activation parameters that are fed to the later layers [(16; 17)], and (3) incur additional off-chip data movement overhead when these networks are run on conventional processor-centric hardware platforms, like CPUs and GPUs.
Figure 1. Overview of the nanopore genome sequencing pipeline and execution time breakdown of different steps.
* SotA basecallers exploit 32-bit floating point precision for their model parameters [96, 134, 140]. This effectively increases (1) the required bandwidth and processing units, e.g., with FP compute capability, and (2) inefficiency in the hardware realization of the underlying models.
* SotA basecallers incur expensive data movement between the computation units and the memory units [79, 81, 120].
We emphasize that 40% of execution time spent on basecalling (Section 2.1), the first and arguably most critical step in the pipeline, is significant and worth accelerating. Today's best basecallers often underperform on SotA systems, generating bottlenecks. A potentially 40% decrease in genome analysis runtime implies a proportional reduction in power and energy, which is critical considering the extensive data and computational demands of modern genome analysis systems. Therefore, optimizing basecalling contributes greatly to improving the efficiency and sustainability of the genomics pipeline.
### Memristor-based CIM and Associated Non-Idealities
Resistive memories or memristive devices, such as ReRAM, PCM, and STT-MRAM [59, 69, 119, 132], have recently been introduced as suitable candidates for both storage and computation units that can efficiently perform vector-matrix multiplication [138] and logical bulk bit-wise operations [26, 113, 114, 139, 73], as they can follow Kirchhoff's law inherently [121]. Therefore, many recent works [9, 26, 27, 111, 112, 139, 143, 144, 145] exploit these devices in their CIM architectures. Memristor devices also enjoy non-volatility, high-density, and near-zero standby power [139, 73, 11].
A typical memristor-based memory crossbar capable of VMM and other logical operations is shown in Fig. 2[9, 26, 27, 111, 139] alongside its possible non-idealities.
This memristor-based structure can suffer from at least four types of non-idealities or variations that can eventually affect the results of the enabled VMM operation, i.e., lead to errors in the VMM result: (1) The non-ideal digital to analog converter (DAC), due to the effective resistive load (known as \(R_{Load}\)) in its circuit [55], (2) Variation of synaptic conductance, which includes both imperfect programming operation (commonly known as write variations) and the process variation that exist in memristors [4, 23, 70, 148], (3) The wire resistance and sneak paths, due to imperfect wires (i.e., wires with different resistances) and the changes in the voltages of the internal nodes while performing a VMM operation [56, 148], and (4) non-ideal sensing circuit or analog to digital converters (ADCs), due to rigid or hard-to-accurately-change references used for distinguishing/sensing the end result [55, 144]. Our work focuses on these specific non-idealities inherent to memristor technologies in a CIM architecture. While we do not explicitly address other circuit challenges and non-idealities, we acknowledge their presence and the existing solutions developed to mitigate them in electronic systems. For example, crosstalk [140, 129, 130], which involves interference between adjacent circuit traces or wires, can indeed lead to data corruption and compromise information integrity. However, we focus on the specific non-idealities relevant to our hardware architecture, not crosstalk. Note that industry-standard techniques, such as shielding and layout design, decoupling components, ground and power distribution, signal timing and margins, ECC and scrubbing, isolation and shielding, and crosstalk-aware clock distribution, have been extensively studied and developed to mitigate crosstalk issues. We assume that similar techniques can be applied to address any potential crosstalk concerns in memristor-based CIM systems.
Recent works [9, 111, 19, 27, 86] report impressive performance and energy improvements for DNN models executed on memristor-based CIM architectures, mainly assuming idealized underlying hardware. Moreover, DNNs are known to be resilient to some noise [125, 44, 126, 128, 44, 66]. However, since memristor-based CIM architectures are indeed non-ideal and the resiliency of DNNs has a limit, to decide whether or not these platforms are indeed suitable for realizing our DNN-based basecaller, one needs to evaluate the impact of these non-idealities on the end-to-end application accuracy and account for the overhead that the solutions to overcome the accuracy loss may bring. Such a framework is missing among prior works and is a contribution of our work (Section 3).
### Programmable Inference Architecture
PUMA (Programmable Ultra-efficient Memristor-based Accelerator) [9, 10, 11] is a complete set of (micro)architecture, simulator, and compiler that supports the execution of many ML applications, using memristor crossbars enhanced with general-purpose execution units. PUMA uses a spatial architecture and provides the necessary programmability and generality to execute a wide range of ML-based applications on memristor-based crossbars. For evaluations in Swordfish, we assume a PUMA-based architecture for two reasons. First, PUMA supports all the necessary types of NN layers in basecallers: CNN, LSTM, and linear. This is especially handy for our main target basecaller, Bonito. Second, the architecture, simulator, and compiler are open-sourced [10, 11] and well-documented for extension, unlike many other rich architectures.
## 3. Swordfish Framework
Swordfish is a framework designed to guide the evaluation of CIM designs for DNN-based basecallers.
### Swordfish Overview
Fig. 3 presents an overview of the Swordfish framework. Swordfish consists of 4 key modules:
* _Partition & Map_ module that partitions and maps the Vector-Matrix-Multiplication (VMM) operations of the target DNN-based basecaller to the underlying CIM platform,
Figure 2. Overview of memristor-based crossbar arrays and possible non-idealities.
* _VMM Model Generator_ module that generates an end-to-end model for possible non-idealities and errors of a VMM operation considering the underlying technology in the CIM design,
* _Accuracy Enhancer_ module that implements online and offline mitigation techniques to counter accuracy loss, and
* _System Evaluator_ module that analyzes the accuracy and throughput of basecaller while also providing an area overhead.
We emphasize that the accuracy analysis in the System Evaluator module is critical and unlike evaluations of conventional platforms, e.g., Field-Programmable Gate Arrays (FPGAs) or GPUs. Its importance stems from the abundance of the underlying non-idealities, variations, limitations, and hardware perturbations of the emerging hardware paradigms (Sundhi et al., 2017). From now on, we refer to the proposed framework as _Swordfish_ and the actual implemented memristor-based CIM design for our target basecaller Bonito as _SwordfishAccel_.
### Partition & Map
To run the DNN of a basecaller on a CIM architecture, one should map each of the VMM operations in the target DNN to the analog memory arrays and the rest of the operations to the digital peripheral circuitry. The Partition & Map module takes care of this task in Swordfish by assigning individual functions of the basecaller to the analog or digital components of the underlying architecture. This process is required once per basecaller and has two steps.
In the first step, Swordfish decides which memory crossbars will perform each VMM operation of each layer. For the Bonito basecaller, Swordfish decides which memory crossbars handle the VMM of the first convolutional layer and which crossbars are responsible for the VMMs of the following LSTM and linear layers. Swordfish assumes that all the underlying crossbars have the same size and readout peripheral circuitry (e.g., ADCs).
In the second step, Swordfish decides how it maps the weights to each crossbar. Swordfish supports different programming/writing techniques for memristor devices, such as write-read-verify (WRV) and Set/Reset pulse programming.
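To make the tiling in the first step concrete, the following is a minimal Python sketch of how a layer's weight matrix can be partitioned into fixed-size crossbar blocks. The function name, the zero-padding policy, and the 64\(\times\)64 default are our own illustration, not Swordfish's released code; a real mapping must additionally handle bit-slicing of multi-bit weights across devices.

```python
import numpy as np

def partition_matrix(weights: np.ndarray, xbar: int = 64):
    """Tile a layer's weight matrix into xbar-by-xbar blocks, one per
    crossbar, zero-padding the ragged right/bottom tiles."""
    rows, cols = weights.shape
    tiles = {}
    for r in range(0, rows, xbar):
        for c in range(0, cols, xbar):
            block = np.zeros((xbar, xbar), dtype=weights.dtype)
            sub = weights[r:r + xbar, c:c + xbar]
            block[:sub.shape[0], :sub.shape[1]] = sub
            tiles[(r // xbar, c // xbar)] = block
    return tiles

# A VMM y = W @ x is then computed by accumulating, for every output
# tile row, the partial results of its column tiles -- exactly the
# partial sums the analog crossbars produce.
W = np.random.randn(128, 100).astype(np.float32)
print(len(partition_matrix(W)), "crossbars for a 128x100 layer")  # 2x2 = 4
```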
In mapping and evaluation, Swordfish makes the following widely common design choices:
* The input streams into the first layer of the DNN. Swordfish does not divide the input into chunks and leaves this task to the host. Doing so helps Swordfish evaluate the maximum throughput of a basecaller (Swordfish, 2017; Dosov et al., 2018), independently of the input size.
* The next layer starts its computation as soon as the previous layer of the basecaller produces enough values. This is also a common assumption for evaluating the maximum possible throughput of a DNN in simulation (Bou et al., 2019; Dosov et al., 2018).
* Multiple crossbar arrays can be simultaneously active and perform the necessary operations (VMM and other operations necessary for the target DNN, such as activation). This assumption ensures that full chip utilization is not limited due to power constraints. One can consider this parallelism analogous to the concurrent activation of multiple subarrays in different banks and bank groups in traditional DRAM (Swordfish, 2017; Dosov et al., 2018; Dosov et al., 2018).
* Swordfish optimizes its design decisions for the highest achievable accuracy, throughput, and memory utilization, in the stated order. This is a common priority order for optimizations in basecallers (Swordfish, 2017; Dosov et al., 2018; Dosov et al., 2018).
### VMM Model Generator
The VMM Model Generator is responsible for generating the non-ideal output for each VMM required by the basecaller. The VMM Model Generator differentiates between constraints and non-idealities. This distinction is essential in a CIM design because non-idealities or constraints do not necessarily lead to a loss in the accuracy of the application. To model the effect of these constraints and non-idealities on the accuracy of an application, Swordfish considers them at the lowest-level building block where they aggregate, i.e., where their results merge. In a memristor-based CIM architecture for a DNN-based basecaller, such an effective place to consider the effects of constraints and non-idealities is the output of the VMM operation. Therefore, the VMM Model Generator in Swordfish focuses on assessing the effects of each factor on a VMM operation, while our evaluations and analyses assess the end-to-end basecalling metrics.
This module takes three types of inputs. First, it takes the results of the previous module (i.e., Partition & Map in Fig. 3) to determine the size of the VMM. Second, it takes the circuit and device descriptions (i.e., constraints and non-idealities) that can affect accuracy. Example inputs in this category are (1) the level of quantization, (2) circuit variations (e.g., in input circuitry such as DACs, in wires, and in output circuitry such as ADCs), and (3) device variations. Third, it takes the weights of the target basecaller, which can be provided directly by the user or by the Accuracy Enhancer module that applies multiple training mechanisms (Section 3.4). The module outputs the non-ideal output vector for each input vector and weight matrix (i.e., the expected vector result of a VMM).
Swordfish supports two different approaches for modeling a VMM. The first approach is to use a pre-calculated library of measurements on actual devices. The second approach is to use an analytical model (e.g., a fast crossbar model (FCM) (Sundhi et al., 2017)). Section 5 evaluates these approaches separately.
In the first approach, Swordfish queries a library that, for a given array size and input vector, returns an output vector randomly chosen from many (\(\geq 10^{4}\)) possible outputs based on measurements on an actual crossbar with the same dimensions as the length of the active input vector. The measurements in the library already contain all the possible non-idealities in the target VMM operation, i.e., non-idealities that may arise from DACs, ADCs, circuits, and devices in the crossbar. One can build this library by measuring multiple tiles several times. For each of these measurements, one should program the initial values of memristors within a tile with the weight values of the target DNN to be evaluated on Swordfish. In this paper, the distinct initial resistance states are based on the Bonito basecaller (Dosov et al., 2018). The random choice from the library aims to account for variations and non-idealities among different memristor-based tiles, which can arise from different initial values of each memristor device and/or manufacturing differences. By integrating real measurements and accounting for tile-to-tile differences, we believe our methods accurately reflect the non-ideality distribution in practical settings. Although this approach accurately represents the VMM operation considering many possible non-idealities, it lacks the flexibility of separately studying or measuring the effects of each possible error due to different non-idealities. This approach is also limited to the crossbar configurations (for example, crossbars of 64\(\times\)64 and 256\(\times\)256) to whose measurements one has access (Section 4).

Figure 3. Overview of Swordfish framework.
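As a minimal illustration of this first approach, the sketch below draws one measured output vector at random for a given (array size, input) pair. The dictionary layout and names are hypothetical; the released measurement library is not organized exactly this way, and a practical version would key the library on digitized input patterns.

```python
import random

def vmm_from_library(library, xbar_size, x):
    """Return one measured output vector for input x on a crossbar of
    the given size, chosen uniformly from the >=10^4 stored
    measurements so that tile-to-tile variation is reflected."""
    candidates = library[(xbar_size, tuple(x))]  # all measured outputs
    return random.choice(candidates)
```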
In the second approach, Swordfish utilizes existing analytical models that are available for ADCs, DACs, and variation profiles of the underlying devices in the crossbar. Fig. 4 illustrates the steps Swordfish uses in its VMM Model Generator for this approach.
In Fig. 4, Swordfish applies the analytical model of a non-ideal DAC to the input vector of the VMM operation and obtains the non-ideal input voltages as the output vector. Swordfish then applies this new vector to a crossbar with an updated, non-ideal weight matrix, where non-idealities have been applied to the original weight matrix (from the VMM operation) based on the expected variations of each cell, which are usually obtained from a generic characterization of memristor-based crossbar arrays, i.e., without any peripheral circuitry or target weights specific to a particular DNN. The resulting output is a non-ideal output current that Swordfish applies to a model of a non-ideal ADC, obtaining the final output vector, which might contain some errors.
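A minimal sketch of this DAC-crossbar-ADC pipeline is shown below, assuming inputs normalized to [0, 1], a simple gain error standing in for the \(R_{Load}\) effect, and Gaussian conductance perturbations; all models and parameter values are illustrative placeholders, not the calibrated models Swordfish uses.

```python
import numpy as np

def dac(x, bits=8, vref=1.0, r_load_gain=0.98):
    """Non-ideal DAC: quantize the digital input, then apply a gain
    error standing in for the R_Load effect."""
    levels = 2 ** bits - 1
    v = np.round(np.clip(x, 0.0, 1.0) * levels) / levels * vref
    return v * r_load_gain

def adc(i, bits=8, i_max=1.0):
    """Non-ideal ADC: clip and quantize the column currents against
    rigid references."""
    levels = 2 ** bits - 1
    return np.round(np.clip(i / i_max, -1.0, 1.0) * levels) / levels * i_max

def noisy_vmm(g_ideal, x, sigma=0.05, seed=0):
    """End-to-end non-ideal VMM: DAC -> perturbed conductances -> ADC."""
    rng = np.random.default_rng(seed)
    v = dac(x)
    g = g_ideal * (1.0 + rng.normal(0.0, sigma, g_ideal.shape))  # write/process variation
    i = g.T @ v  # analog multiply-accumulate along the bit lines
    return adc(i)
```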
Fig. 5 presents an overview of how Swordfish models the crossbar non-idealities for the second approach (i.e., the analytical model in the VMM Model Generator module in Fig. 4). For this, Swordfish first takes the crossbar instances from the Partition & Map module. Swordfish considers these crossbar instances as separate matrices with digital weights. Then, Swordfish uses a non-linear model for the synaptic device states to map the matrices of digital weights into ideal corresponding conductance matrices. After that, Swordfish applies to these matrices the synaptic variations for the crossbar, which are determined from an analytical model based on the estimated behavior of memristor devices within a crossbar array. The output consists of the same number of matrices, but now with adjusted weights. Swordfish finally applies to those matrices the profile of all known circuit-level non-idealities by adding representative metrics for these non-idealities. The output consists of matrices accounting for all variations and non-idealities.
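The corresponding weight-side modeling of Fig. 5 can be sketched as below. We use a linear weight-to-conductance map and a lognormal synaptic variation plus a row-dependent attenuation as a stand-in for wire resistance; the real device model is non-linear and calibrated from measurements, so everything here is an illustrative assumption.

```python
import numpy as np

def weights_to_conductance(w_digital, g_min=1e-6, g_max=1e-4):
    """Map digital weights into the device conductance window
    [g_min, g_max]; a calibrated model would be non-linear."""
    w = (w_digital - w_digital.min()) / (np.ptp(w_digital) + 1e-12)
    return g_min + w * (g_max - g_min)

def apply_variations(g, synaptic_sigma=0.05, wire_drop=0.01, seed=1):
    """Lognormal synaptic variation plus a simple row-dependent
    attenuation standing in for wire resistance."""
    rng = np.random.default_rng(seed)
    g = g * rng.lognormal(0.0, synaptic_sigma, g.shape)
    rows = np.arange(g.shape[0]).reshape(-1, 1)
    return g * (1.0 - wire_drop * rows / g.shape[0])
```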
### Accuracy Enhancer
Since accuracy is a critical metric in basecalling, Swordfish applies several mitigation techniques to deal with the non-idealities and their induced errors on the VMM and/or basecalling. More specifically, Swordfish supports four different accuracy enhancement techniques: (1) analytical variation-aware training (VAT) (offline), (2) knowledge distillation (KD) training, (3) read-verify-write (R-V-W) training, and (4) random sparse adaptation (RSA) retraining (online).
#### 3.4.1. Analytical Variation-Aware Offline Training
Swordfish supports variation-aware training (VAT) [24, 63, 78, 80] during the training of a target DNN as the simplest method to mitigate the accuracy loss due to (1) quantization and (2) possible resistance variations per weight, which can be measured analytically or experimentally. Existing works randomly inject faults into the weights of the DNN [38] or model the potential errors at the end of each layer [38, 80]. Similarly, Swordfish utilizes either the crossbar characterization of the errors per VMM (i.e., the error library in the first approach of the VMM Model Generator) or an analytical crossbar model of the errors per VMM (i.e., as in the second approach of the VMM Model Generator). Swordfish injects the modeled errors during training and considers the rest of the devices unaltered. Swordfish repeats this process for each VMM and every layer and then retrains the basecaller network. This way, Swordfish ensures that its retraining accounts for a better estimate of the errors arising from non-idealities in the crossbar.
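One common way to realize such error injection is sketched below in PyTorch: perturb the weights for the forward/backward pass, then restore the clean weights before the optimizer update. The multiplicative Gaussian error model and all names are our illustration of the general VAT idea, not Swordfish's exact training code.

```python
import torch

def vat_step(model, batch, loss_fn, optimizer, sigma=0.05):
    """One variation-aware training step with injected weight errors."""
    x, y = batch
    saved = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(1.0 + sigma * torch.randn_like(p))  # modeled error
    loss = loss_fn(model(x), y)      # forward pass through noisy weights
    optimizer.zero_grad()
    loss.backward()                  # gradients w.r.t. perturbed weights
    with torch.no_grad():
        for p, w in zip(model.parameters(), saved):
            p.copy_(w)               # restore the clean weights...
    optimizer.step()                 # ...then apply the update
    return loss.item()
```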
#### 3.4.2. Knowledge Distillation-based Variation-Aware Training
In addition to the offline VAT based on injecting random errors or potential errors per layer discussed in Section 3.4.1, Swordfish also supports the knowledge distillation (KD) approach as a form of VAT, i.e., Swordfish exploits the knowledge/weights of an ideal (typically FP32-based) basecaller baseline to guide the training of SwordfishAccel, our memristor-based CIM design for Bonito. In KD, two models exist: (1) the teacher (an ideal implementation using a high-precision data format, e.g., the FP32 format) and (2) the student (SwordfishAccel, quantized to 16-bit fixed-point representation for both weights and activations). The goal is to mimic the teacher's output in the student by minimizing a loss function whose target is the teacher's softened softmax output, i.e., the softmax applied to the teacher's logits [47]. We refer the reader to previous works on KD [22, 47] for further detail on how a loss function can be implemented in such a system to minimize the difference between SwordfishAccel's output and the teacher model's softmax output.
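A minimal sketch of such a distillation loss is shown below. We show the generic classification form with a temperature-softened KL term; Bonito's actual training objective is CTC-style, so the hard-label term and all hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, temp=4.0, alpha=0.9):
    """Distillation loss: KL divergence between softened teacher and
    student distributions plus the ordinary task loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temp, dim=-1),
        F.softmax(teacher_logits / temp, dim=-1),
        reduction="batchmean",
    ) * (temp * temp)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```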
Figure 4. An overview of the VMM Model Generator’s second approach: using analytical models.
Figure 5. An overview of modeling crossbar non-idealities in Swordfish.
#### 3.4.3. Read-Verify-Write (R-V-W) Training
Read-Verify-Write (R-V-W) is a conventional error mitigation technique for non-ideal memristor-based memories that provides cell-by-cell error compensation. R-V-W is used in open-loop off-device (OLD) tuning [77], where the R-V-W programming and sensing loop helps the actual resistance of the device converge to the expected target resistance. This method involves many read and write operations and feedback control for memristors, making R-V-W a slow technique for mitigating accuracy loss. Note that to improve accuracy with R-V-W, we need to increase the fraction of retrained weights (memristor devices in our case), which increases the cost of the mitigation technique.
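The per-cell loop can be sketched as follows; the `cell` object with `read()`/`write()` methods is a hypothetical interface of ours, and the tolerance and iteration budget are illustrative.

```python
def read_verify_write(cell, target_g, tol=0.02, max_iters=50):
    """Program one memristor by iterating write -> read -> verify until
    the read conductance is within tol of the target."""
    for _ in range(max_iters):
        g = cell.read()
        if abs(g - target_g) <= tol * target_g:
            return True          # converged to the target conductance
        cell.write(target_g)     # re-program and verify again
    return False                 # flag the cell as unreliable
```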
#### 3.4.4. Random Sparse Adaptation Online Retraining
Swordfish uses random sparse adaptation (RSA) [22] to map the learned DNN model to SwordfishAccel. RSA mitigates the performance overhead of the R-V-W technique [49, 77]. By itself, RSA only prevents some of the non-idealities from materializing as inaccuracies and can be an offline mechanism. However, SwordfishAccel combines it with an online training mechanism.
For its online retraining using RSA, Swordfish places a small on-chip SRAM-based memory next to the memristor-based crossbars and distributes the learned DNN model (i.e., weights) between this SRAM and the memristor-based crossbars. The key idea Swordfish uses is to map the weights that would otherwise map to error-prone memristor devices to reliable SRAM cells. If one has access to the exact profile of the underlying memristor-based memory crossbars, one can exploit the knowledge of which memristors and columns are more error-prone and use this knowledge to decide which weights to map into the crossbar and which ones to the SRAM. In our evaluations of Swordfish, we use this knowledge whenever we use the chip measurements already used in the first approach of the VMM Model Generator. However, Swordfish can also randomly choose memristor devices in the crossbar and map (i.e., hardwire) them to the SRAM. A random choice is the next best option without knowledge of the exact error pattern of a memristor-based crossbar. We use this method whenever we use the second approach (i.e., the analytical model) in the VMM Model Generator (Section 3.3).
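The sketch below illustrates the weight split and the dual-path VMM. Zeroing the SRAM-mapped weights in the crossbar copy has the same mathematical effect as zeroing the corresponding crossbar inputs described above; the 5% budget and function names are our own illustration.

```python
import numpy as np

def rsa_split(w, error_prob=None, sram_frac=0.05, seed=2):
    """Move the sram_frac most error-prone weights (or random ones when
    no error profile exists) to reliable SRAM."""
    rng = np.random.default_rng(seed)
    k = int(sram_frac * w.size)
    if error_prob is not None:                    # chip-measured profile
        idx = np.argsort(error_prob, axis=None)[-k:]
    else:                                         # analytical model: random
        idx = rng.choice(w.size, size=k, replace=False)
    mask = np.zeros(w.size, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(w.shape)
    w_xbar = np.where(mask, 0.0, w)               # programmed to memristors
    w_sram = np.where(mask, w, 0.0)               # held in on-chip SRAM
    return w_xbar, w_sram

def rsa_vmm(x, w_xbar_nonideal, w_sram):
    """Sum the analog crossbar path and the digital SRAM path."""
    return w_xbar_nonideal.T @ x + w_sram.T @ x
```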
Fig. 6 presents how SwordfishAccel adopts RSA with an online retraining mechanism (e.g., KD) in a three-step approach that is then iterated:

1. In the first step, SwordfishAccel trains the original Bonito and loads the initial weights from the Bonito DNN model into the assigned memristor crossbars and the SRAM. SwordfishAccel considers this model the initial student model for KD.
2. In the second step, SwordfishAccel performs a VMM operation as usual. However, whenever one or more of the weights assigned to SRAM (i.e., weights on error-prone or randomly chosen memristors) is involved, SwordfishAccel reads the value from the SRAM instead of the memristor device. Swordfish does this by passing the inputs of the corresponding devices through the SRAM path instead of the crossbar, zeroing the input for those particular memristors in the crossbar, and then summing the values of both paths.
3. In the third step, SwordfishAccel returns the results of the VMM operation of each crossbar to the retraining component (KD in our example in Fig. 6) and performs online training on only the weights that are mapped to the SRAM to recover the accuracy loss due to non-idealities. Note that SwordfishAccel applies the non-ideality models of the crossbars, ADCs, and DACs to the student model for every training batch and trains the student; this holds for both the initial training in Step 1 and the retraining in Step 3.
4. SwordfishAccel then loads the new weights into the SRAM near the crossbars and repeats Steps 2 and 3. SwordfishAccel uses KD-based variation-aware training for the online retraining in Step 3, but any other retraining method could replace KD in our example. Note that all parameters are already quantized to 16-bit fixed-point precision to represent the model in SwordfishAccel accurately. Swordfish leverages the weights from the converged teacher model to improve the convergence of the student model; a sketch of this partial retraining loop follows this list.
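Step 3 above can be sketched as the partial retraining loop below, reusing the `kd_loss` sketch from Section 3.4.2 and assuming the SRAM-resident weights are factored out as separate PyTorch parameters; in real hardware this separation is exactly what the RSA mapping provides.

```python
import torch

def online_rsa_retrain(student, sram_params, teacher, batches, kd_loss, lr=1e-4):
    """Retrain only the SRAM-mapped parameters of the non-ideal student
    against the converged teacher (Step 3 of Fig. 6)."""
    for p in student.parameters():
        p.requires_grad_(False)       # crossbar weights stay frozen
    for p in sram_params:
        p.requires_grad_(True)        # small, reliable subset is trained
    opt = torch.optim.Adam(sram_params, lr=lr)
    for x, y in batches:
        with torch.no_grad():
            t_logits = teacher(x)     # the teacher is fixed
        loss = kd_loss(student(x), t_logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```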
RSA in Swordfish comes at the price of extra area overhead for the considered on-chip SRAM memory, storage in the memory controller for mapping metadata, summation of the crossbar output with the on-chip memory output, and some additional control logic, all evaluated in Section 5.
### System Evaluator
The System Evaluator module puts the results of all previous modules of Swordfish together to evaluate the target DNN.
As inputs, this module takes the execution time for each VMM operation, the accuracy of each VMM operation for the last layer of the DNN (as it determines the final accuracy of the DNN), the number of active crossbars in each step of Swordfish, and information in peripheral circuitry.
The System Evaluator module has 3 outputs:
1. **Accuracy:** The System Evaluator module outputs an accuracy number for the evaluated DNN. In SwordfishAccel, this number is the accuracy of the basecaller, commonly known as _read accuracy_: the fraction of exactly matching bases between a read and a reference over the length of their alignment (including insertions and deletions); see the sketch after this list.
2. **Basecalling throughput:** The System Evaluator module outputs a number for the inference throughput of the target DNN. In SwordfishAccel, this number is the basecalling throughput, defined as kilo-basepairs generated by the basecaller per second (\(\frac{Kbp}{s}\)). The higher the basecalling throughput, the better. This is the most important metric for evaluating a basecalling accelerator's performance. Our throughput evaluations in SwordfishAccel include the read and write time for the inputs and outputs, respectively.3 Footnote 3: We use the Linux command line /usr/bin/time -v.
Figure 6. Swordfish’s online error mitigation via RSA.

3. **Area overhead:** The System Evaluator module of Swordfish also reports area overhead based on the underlying architecture to account for the overheads of a dedicated accelerator, e.g., SwordfishAccel.
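As a small illustration of the first metric, read accuracy is computed from an alignment as below; the numbers in the usage line are made up.

```python
def read_accuracy(matching_bases: int, alignment_len: int) -> float:
    """Read accuracy = exactly matching bases / alignment length, where
    the alignment length counts insertions and deletions."""
    return matching_bases / alignment_len

print(f"{read_accuracy(9620, 10_000):.1%}")  # e.g., 96.2%
```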
### Swordfish Evaluation Challenges
Comprehensive, fair, and practical evaluation of Swordfish is challenging for two main reasons. First, most of the SotA basecallers are either not open-source (Swordfish, 2017; SwordfishAccel, 2018; SwordfishAccel, 2018) or support only specific reads (SwordfishAccel, 2018). Second, current simulators and frameworks mimicking memristor-based CIM designs are either not open-source, do not consider the underlying non-idealities of the devices, or support only a very limited number of non-idealities, emerging technologies, or neural networks (SwordfishAccel, 2018; SwordfishAccel, 2018).
To evaluate Swordfish despite these challenges, we take two representative examples. Specifically, for the first challenge, we primarily compare our method with Bonito (Bonito, 2018), an open-sourced, universally applicable tool currently under active development and maintenance by ONT (Section 2.1). Bonito stands out for its exceptional accuracy and performance over predecessors like Guppy (SwordfishAccel, 2018) and does not suffer from limited read support (e.g., Dorado (2018)) or the lack of an open-source implementation and training code (e.g., Helix (Helix, 2018), Halcyon (Halcyon, 2018), Guppy (SwordfishAccel, 2018), and SACall (SwordfishAccel, 2018)). For the second challenge, we consider the PUMA architecture as the baseline architecture for the two reasons mentioned in Section 2.4.
## 4. Evaluation Methodology
### Implementations and Models
For the performance and area studies, we significantly extended the PUMA simulator and PUMA compiler to account for (1) Bonito's DNN architecture, (2) updated configurations in Core Architecture of PUMA (Bonito, 2018) based on our memory models and the TSMC 40 nm (SwordfishAccel, 2018) technology node used for peripheries, and (3) performance and area overheads introduced by non-idealities of memristors and their mitigation techniques. Note that we use Synopsys Design Compiler (SwordfishAccel, 2018) and synthesize the additional components of our design in the target technology to obtain their execution time, power, and area. We apply the prominent technology scaling rules (SwordfishAccel, 2018) to the configuration numbers of the PUMA architecture to ensure all of our design components are based on the same technology node.
For accuracy analysis (in both training and inference phases), we also extensively modified Bonito's open-source implementation (Bonito, 2018) to consider the device characteristics and limitations of the architecture. Unfortunately, PUMA does not allow for such analysis, as it considers the effects of only quantization and write variations on accuracy.
We utilize prototyped crossbar-array memristors as our memory arrays and capture the variations in their spatiotemporal conductivity, as well as the execution time and area overhead of the necessary operations. We project our characterization results of real memories onto our DNN evaluations. We also build a statistical model from our measurements to capture the full picture of a larger memory model for large-scale variations, timing, and area parameters. This model contains four types of variations: (1) input DACs, (2) synaptic variations, (3) wire resistance, and (4) output ADCs. The memory prototypes and models used for our evaluations and simulations are based on the results of the EU project MNEMOSENE (Mexico, 2018), concluded in 2020, generously provided by the involved parties. The results have been tested heavily during the project and against various metrics found in the related literature. Table 1 shows the main parameters of our memristor-based crossbars.
Our study specifically evaluates Swordfish on ReRAM memristors for three reasons. First, the availability of actual chip measurements is essential for our non-ideality-centered study. Second, ReRAM has lower energy costs for writing/programming than alternatives like PCM. Third, ReRAM's established status within the memristor family provides reliable baselines and intuitions for device-level features, enhancing the credibility of our proposal.
### Simulation Infrastructure
We ran our baseline Bonito basecaller and software implementation of Swordfish on a 128-core server with AMD EPYC 7742 CPUs (Bouil, 2018), 500GB of DDR4 DRAM, and 8 NVIDIA V100 (Vaswani, 2017) cards. We train and evaluate Swordfish accuracy and software results on our NVIDIA cards (with 32-bit floating-point precision). We use the nvprof profiler (SwordfishAccel, 2018) for the profiling experiments on GPU.
### Evaluation Metrics
We use metrics output by the System Evaluator module for our comparisons. Section 3.5 clarifies these metrics.
### Datasets and Workloads
Table 2 provides datasets from a MinION R9.4.1 flowcell (SwordfishAccel, 2018; SwordfishAccel, 2018) we use in our evaluations.
## 5. Swordfish Evaluation
We first use Swordfish to investigate the impact of constraints and non-idealities of a PUMA-based architecture (Section 2.3) on the accuracy of the Bonito basecaller (Bonito, 2018). We call this design the Ideal-SwordfishAccel, as it achieves the highest performance for our memristor-based hardware accelerator without any accuracy enhancement technique. We then explore the effect of the accuracy enhancement mechanisms in Swordfish applied to deal with the inaccuracies of the memristor-based accelerator as it affects the Bonito basecaller's accuracy. The results of this design are presented under Realistic-SwordfishAccel.
### Effect of Quantization on Accuracy without Accuracy Enhancement
Since both the weights and activations in the original DNN are in FP32 format, Swordfish can opt for quantizing one or both of them. The degree of the quantization can differ depending on how much
Table 1. Our array and device configurations.

| Parameter | Value |
| --- | --- |
| Technology and device | ReRAM HfO\(_2\)/TiO\(_2\) (SwordfishAccel, 2018) |
| Cell configuration | 1T1R (NMOS, 1.46 µm; 40 nm) |
| HRS/LRS | 1 MΩ / 10 kΩ |
| V\(_{min}\)/V\(_{max}\) | 0.6 / 3.3 V |
| Array sizes | 64×64 (256×256) |
| SA V\(_{min}\) | 40 mV |

Table 2. Read and reference datasets for our basecalling evaluation.

| Dataset | Organism | # Reads | Reference Genome Size (bp) |
| --- | --- | --- | --- |
| D1 | Acinetobacter pittii 16-377-0801 | 4,467 | 3,814,719 |
| D2 | Haemophilus haemolyticus M1C132_1 | 8,669 | 2,042,591 |
| D3 | Klebsiella pneumoniae NUH29 | 11,047 | 5,134,281 |
| D4 | Klebsiella pneumoniae KSB2_1B | 11,278 | 5,337,691 |
each parameter impacts the overall accuracy. Swordfish considers seven different configurations: the default configuration (DFP 32-32), where weights and activations use the FP32 format,4 and six FXP X-Y formats, where X and Y denote the fixed-point precision of weights and activations, respectively. Swordfish currently only supports power-of-two precision levels for its quantized configurations.
Footnote 4: FP stands for floating point.
We make two major observations. First, Bonito's architecture can tolerate some quantization level without accuracy loss. More specifically, across all evaluated datasets, quantization down to 16 bits does not affect the accuracy at all, and quantization down to 8 bits reduces the accuracy by less than 9% even in extreme cases. We conclude that Ideal-SwordfishAccel can still reduce the precision of its network from a 32-bit FP format to 16-bit-width fixed point precision without accuracy loss. This way, Ideal-SwordfishAccel can (1) accelerate the network on a platform limited to fixed point format representation and (2) improve the energy efficiency of the network via lower data precision. This observation is on par with similar studies [54, 111, 120] exploiting quantization as a technique to improve the performance and energy efficiency of a DNN with a negligible accuracy loss.
Second, tolerance to quantization varies depending on the input dataset. This makes the effect of quantization on accuracy workload-dependent. However, the accuracy drop for different quantization configurations follows a more-or-less similar trend irrespective of the dataset, i.e., all datasets show a decreasing trend with reduced data representation. We conclude that Swordfish's underlying network (Bonito) tolerates some quantization but offers very low accuracy for extreme quantization (i.e., lower than 4-bit precision) irrespective of the dataset. We note that an accuracy drop of \(\sim\)5% or higher is considered unacceptable for a future basecaller, as accuracy is the most critical metric in SotA basecallers. This observation is consistent with prior works on smaller [55] or different types of networks [120].
We conclude that quantization is a viable solution to tackle data representation constraints in hardware accelerators and, therefore, can be used in a framework such as Swordfish. However, the accuracy loss due to quantization (compounded by the expected accuracy loss due to variations and non-idealities) leads us to consider only down to 16 (or possibly 8) bits of precision for both weights and activations before a significant accuracy drop occurs. Therefore, the following studies consider only 16-bit integers as the quantization level.
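For reference, the sketch below shows the kind of symmetric fixed-point quantizer such a configuration implies. The split between integer and fractional bits is our own illustrative choice; Swordfish's evaluated configurations only fix the total width.

```python
import numpy as np

def to_fixed_point(x, bits=16, frac_bits=8):
    """Quantize to a signed fixed-point grid with `bits` total bits,
    `frac_bits` of them fractional."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale
```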
### Effect of Non-idealities on Accuracy without Accuracy Enhancement
We examine the effect of four non-idealities on basecalling accuracy. The results presented in this section belong to the second approach of modeling non-idealities in the VMM Model Generator module, i.e., using analytical modeling (see Section 3.3).
#### 5.2.1. Effect of Write Variation on Accuracy
Write variation can single-handedly impact the accuracy results of a VMM operation [22, 54]. Therefore, we analyze it separately.
Fig. 7 presents the effects of write variations on accuracy. The x-axis sweeps the write variation rate. The error bars account for the accuracy variation at each write variation rate over 1000 runs of the model. Since the models for write variation are circuit-dependent and have varying probabilities of affecting the stored/programmed data, this methodology provides better insight into the effect of this non-ideality on accuracy.
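The Monte Carlo procedure behind these error bars can be sketched as below; `eval_accuracy` is a hypothetical callback that runs one full non-ideal basecalling evaluation at the sampled variation rate.

```python
import numpy as np

def accuracy_distribution(eval_accuracy, rates, runs=1000, seed=3):
    """Collect the accuracy spread per write-variation rate that the
    error bars in Fig. 7 summarize."""
    rng = np.random.default_rng(seed)
    stats = {}
    for rate in rates:
        accs = np.array([eval_accuracy(rate, rng) for _ in range(runs)])
        stats[rate] = (accs.mean(), accs.std())
    return stats
```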
We make two main observations. First, slight write variation can lead to a significant drop in the accuracy of end-to-end basecalling. To a great extent, this is on par with previous works' observation of the write variation impact on VMM accuracy [22, 54]. For example, the accuracy drops vary from 3.30% to 87.34% for D1 and from 3.24% to 85.76% for D4.
Second, the exact accuracy loss depends on the input dataset, i.e., the accuracy is workload-dependent and varies for the same write variation among different subfigures in Fig. 7. For example, for the same write variation rate of 25%, the accuracy on our two datasets (i.e., D2 and D4) can vary by 0.93%.
We conclude that write variation in Ideal-SwordfishAccel can debilitate the basecalling process significantly. In other words, write variation can eliminate all the potential performance and energy efficiency benefits of such a memristor-based design if not mitigated correctly. Therefore, unlike the quantization constraint, we should closely control the write variations in any future design for an acceptable basecaller. Fortunately, some previous works [22, 37, 100] propose mitigation techniques that, when combined, can provide a reasonable (e.g., \(\leq\) 10%) write variation. From now on, we consider only up to 10% write variation (as defined in Section 2.3) in our evaluations.
#### 5.2.2. Effect of Combined Non-idealities on Accuracy
Fig. 8 and Fig. 9 show the accuracy after considering all other sources of non-idealities (see Section 2.3) for our four datasets on two different crossbar sizes of 64\(\times\)64 and 256\(\times\)256, respectively. The error bars show the distribution when considering 10% write variation over 1000 runs. For each dataset, Fig. 8 and Fig. 9 present the accuracy results for five configurations, shown as individual bars in the figures. The first three bars from the left present the results for individual non-idealities, i.e., synaptic+wire resistances (_Synaptic+Wires_), sensing+ADC circuitry (_Sense+ADC_), and DAC+driver circuitry (_DAC+Driver_), respectively, which Swordfish accounts for in its second approach of modeling non-idealities in the VMM Model Generator module, i.e., using analytical modeling (Section 3.3). The fourth bar, _Combined_, accounts for all the non-idealities from the same analytical model simultaneously. The fifth and last bar, _Measured_, considers all the non-idealities from the library of real chip measurements in the first approach of modeling non-idealities in the VMM Model Generator (see Section 3.3).6 We make six main observations.

Table 3. Accuracy evaluation after quantization.

| Dataset | DFP 32-32 | FXP 16-16 | FXP 8-8 | FXP 8-4 | FXP 4-8 | FXP 4-4 | FXP 4-2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| D1 | 97.32% | 97.32% | 97.12% | 97.12% | 96.42% | 96.42% | 93.62% |
| D2 | 97.32% | 97.32% | 96.72% | 96.72% | 96.72% | 94.62% | 94.24% |
| D3 | 97.32% | 97.32% | 96.02% | 96.32% | 96.42% | 94.12% | 93.25% |
| D4 | 97.32% | 97.32% | 96.42% | 96.42% | 94.22% | 93.32% | 93.62% |

Figure 7. Accuracy after taking into account write variation.
Footnote 6: We leave the exploration of every possible combination of individual non-idealities to future work.
(1) A combination of non-idealities (i.e., the bars labeled "Combined" or "Measured", the 4th and 5th bars per dataset in Fig. 8 and Fig. 9) leads to a significant accuracy loss irrespective of the dataset or crossbar size. For example, observe the accuracy loss when considering all the non-idealities analytically (bars labeled "Combined"): the accuracy loss varies from 18.32% to 31.32% (Fig. 8) across the different datasets (i.e., D1 to D4). The same trend can be observed in Fig. 9.

(2) The impact of individual non-idealities (i.e., _Synaptic+Wires_, _Sense+ADC_, or _DAC+Driver_) on the accuracy (loss) differs. For example, observe the accuracy loss of _DAC+Driver_ versus _Synaptic+Wires_ in D1 (Fig. 8). For the same dataset, the accuracy loss varies from 13.32% for _DAC+Driver_ to 15.34% for _Synaptic+Wires_. A similar difference also exists in crossbars of size 256\(\times\)256 in Fig. 9.

(3) The accuracy loss for combined non-idealities is non-additive. For example, in D1, the total accuracy loss of _Measured_ is 35.96% (Fig. 8), yet simply adding the accuracy losses of _Synaptic+Wires_, _Sense+ADC_, and _DAC+Driver_ totals 20.32%. We conclude that certain errors mask others.

(4) Accuracy loss values follow a similar trend irrespective of the dataset; see the trendlines in Fig. 8 for D2 and D3. However, absolute accuracy loss values vary from one dataset to another.

(5) The smaller the crossbar, the lower the accuracy loss. For example, for D1, we observe a lower accuracy loss (20.32% versus 26.33%) when using a 64\(\times\)64 crossbar compared to a 256\(\times\)256 crossbar (Fig. 8 vs. Fig. 9 for the _Measured_ configuration). This is because a smaller crossbar accumulates less noise in its wires.

(6) Different non-idealities affect the same dataset differently for different crossbar sizes. For example, the accuracy loss due to non-idealities in _DAC+Driver_ is more dominant than that in _Sense+ADC_ on a 64\(\times\)64 crossbar, while the opposite holds for a 256\(\times\)256 crossbar (Fig. 8 and Fig. 9).

Even for small yet practical crossbars of size 64\(\times\)64, the accuracy loss observed in this section under both the _Combined_ and _Measured_ configurations in Fig. 8 and Fig. 9 is still significant (e.g., from 22.19% to 24.32%) and unacceptable for a basecalling step that affects many other steps of a genome sequencing pipeline. We conclude that non-idealities in memristor-based CIM designs, especially when combined, can be detrimental to basecalling accuracy and must be accounted for and mitigated before considering such a design useful in any other aspect.
### Effect of Accuracy Enhancement on Quantized Basecallers
Fig. 10 shows the results of applying Swordfish's accuracy enhancement techniques to a quantized Bonito basecaller. The x-axis presents six configurations for quantization as defined in Section 5.1. For each quantization configuration, we evaluate five accuracy enhancement techniques, namely _VAT_, _KD_, _R-V-W_, _RSA+KD_ (see Section 3.4), and a combination of all techniques labeled as _All_. The y-axis shows the accuracy of each technique for the corresponding quantization configuration. The horizontal line marked as Baseline (DFP 32-32) is the baseline accuracy as defined in Section 5.1.
We observe that retraining with quantization is an effective way to mitigate the accuracy loss induced by quantization. Our results show that with only 150 extra retraining epochs, accuracy improves by 5% on average, for a basecaller quantized down to 8-bit. By applying all quantization-aware retraining methods that we discuss in Section 5.1, Swordfish can retain the same accuracy as the Bonito basecaller with 32-bit floating point precision. This result is in agreement with the prior work on different types of
Figure 8: Accuracy after taking into account non-idealities on 64\(\times\)64 crossbars for the 4 datasets.
Figure 9: Accuracy after taking into account non-idealities on 256\(\times\)256 crossbars for the 4 datasets.
neural networks (Wang et al., 2017). However, Swordfish is the first work to show this result for genomic basecalling. From now on, we use 16-bit precision quantization for all evaluations in the remainder of this paper. We conclude that the proposed mitigation mechanisms effectively mitigate the accuracy loss due to a reasonable amount of quantization, e.g., from 32-bit to 16-bit, in the Bonito basecaller.
### Effect of Accuracy Enhancement on Non-idealities
#### 5.4.1. Effect of Accuracy Enhancement on Write Variation
Fig. 11 presents the effects of our accuracy enhancement techniques (see Section 3.4) considering different write variation rates across our four datasets (D1-D4). The horizontal dotted line shows the baseline accuracy using DFP 32-32 (see Section 5.1) for the Bonito basecaller in all figures in Fig. 11. Fig. 11-(a)-(d) evaluate the effect of _VAT_, _KD_, _R-V-W_, _RSA+KD_ separately. Fig. 11-(e) considers all of our accuracy enhancement mechanisms together (_Combined_), and Fig. 11-(f) averages the results of each accuracy enhancement technique over all the datasets (_Averaged_).7 We make four major observations from Fig. 11.
Footnote 7: The results in Fig. 11 consider the cases in which Swordfish maps only 5% of weights to the SRAM in our RSA-based online retraining approach (see Section 3.4.4). We will revisit this number in Section 5.5.
First, individual accuracy enhancement mechanisms evaluated in Fig. 11-(a)-(d) all improve the accuracy. However, their effectiveness reduces as the write variation rate increases.
Second, the online mechanism (_RSA+KD_) in Fig. 11-(d) outperforms all the offline techniques in Fig. 11-(a)-(c). _R-V-W_ in Fig. 11-(c) comes second in terms of accuracy. However, the difference between _RSA+KD_ and _R-V-W_ widens as the write variation rate increases.
Third, combining all the accuracy enhancement mechanisms (_Combined_ in Fig. 11-(e)) outperforms any individual technique over every single dataset and write variation rate.
Fourth, averaged over all the datasets (_Averaged_ in Fig. 11-(f)), the _Combined_ mitigation configuration always produces the highest accuracy on average as well. However, on average, our online _RSA+KD_ technique achieves a close accuracy (less than 0.001% difference) for low write variation rates (i.e., write variation rates below 10%).

These results suggest that even with multiple accuracy enhancement techniques, only minor write variations (e.g., less than 10%) can be tolerated. We conclude that a memristor-based CIM-enabled accelerator for basecalling can be effective even with write variations, but such variations must be kept low (e.g., up to 10%). Fortunately, the projected write variation rates for memristor-based devices (Sandel et al., 2017; Wang et al., 2017) suggest that this rate is likely achievable. For the rest of this manuscript, we assume a write variation of 10%.
#### 5.4.2. Effect of Accuracy Enhancement for Combined Non-idealities
Fig. 12 presents the accuracy of basecalling with different accuracy enhancement techniques in crossbars of 64\(\times\)64 for the modeled non-idealities. For the non-idealities, we consider the five variations of _Synaptic+Wires_, _Sense+ADC_, _DAC+Driver_, _Combined_, and _Measured_ defined in Section 5.2.2. In Fig. 12, we evaluate five accuracy enhancement techniques of _VAT_, _KD_, _R-V-W_, _RSA+KD_, and _All_ (as defined in Section 5.4.1) per non-ideality. Fig. 13 presents the same experiments for crossbars of 256\(\times\)256. As we conclude in Section 5.4, we assume 10% write variation and 5% of the weights are mapped to the SRAM in the online retraining approach (see Section 3.4.4). We present our accuracy results averaged across all the evaluated datasets. We make four main observations from Fig. 12.
1. Combining individual accuracy enhancement techniques does not improve the accuracy in an additive manner. For example, _VAT_, _R-V-W_, and _RSA+KD_ in Fig. 12 improve the accuracy under _Synaptic+Wires_ by 6.85%, 10.64%, and 10.85%, respectively. However, when we consider all techniques together in the _All_ configuration, accuracy improves by only 11.84% (Fig. 12).
2. The effectiveness of an individual accuracy enhancement technique depends on the underlying error and non-ideality it targets. For example, _VAT_ is as effective as _RSA+KD_ for non-idealities due to _DAC+Driver_ (94.22% vs. 94.32%). However, the gap between the two approaches widens for non-idealities due to _Synaptic+Wires_ (87.32% vs. 91.32%); see Fig. 12.
Figure 11. Accuracy after combining enhancement techniques over different write variations.
Figure 10. Accuracy enhancement after quantization.
3. Accuracy enhancement techniques improve accuracy with a similar trend across different crossbar sizes (Fig. 12 and Fig. 13). Although these results are averaged over our datasets, one can make the same observation on each dataset as well.
4. Accuracy enhancement techniques are more effective for larger crossbars (e.g., 256\(\times\)256) than for smaller ones (e.g., 64\(\times\)64). This is expected because there is more room for accuracy improvement in larger crossbars, as their inaccuracies are higher. For example, we observe a 22.07% improvement in accuracy for 256\(\times\)256 crossbars (Fig. 13) compared to 16.24% for 64\(\times\)64 crossbars (Fig. 12) after all of the accuracy enhancement techniques are applied (_All_) over all existing non-idealities (i.e., the _Measured_ configuration).
We conclude that the basecalling accuracy of SwordfishAccel can approach SotA levels by combining complementary mitigation techniques, employing reasonable crossbar sizes (e.g., 64\(\times\)64), and carefully accounting for substantial circuit variations such as write variations.
### Throughput Analysis of SwordfishAccel
Fig. 14 shows the inference throughput for Bonito on a GPU (Bonito-GPU) card discussed in Section 4.2, Ideal-SwordfishAccel, Realistic-SwordfishAccel-RVW, Realistic-SwordfishAccel-RSA, and Realistic-SwordfishAccel-RSA+KD. We show the results for each of the four datasets and the average results over all datasets. The results are for a crossbar of size 64x64 and a write variation rate of 10%, and assuming 5% of weights are placed in SRAM for Realistic-SwordfishAccel-RSA and Realistic-SwordfishAccel-RSA+KD.
We make four key observations. First, Ideal-SwordfishAccel improves the basecalling throughput over Bonito-GPU for all datasets, by 413.6\(\times\) on average (Fig. 14). We expect such a large improvement in throughput because SwordfishAccel is highly optimized for the dominant kernel in the underlying DNN of Bonito, namely VMM, and avoids unnecessary data movement while harvesting the maximum parallelism.
Second, all versions of Realistic-SwordfishAccel (i.e., Realistic-SwordfishAccel-RVW, Realistic-SwordfishAccel-RSA, and Realistic-SwordfishAccel-RSA+KD) have lower performance than Ideal-SwordfishAccel, irrespective of the dataset. Performance loss with a realistic Swordfish accelerator is expected because each realistic version adds overheads to mitigate accuracy loss due to realistically-modeled non-idealities, which directly affect the performance of a VMM operation. For example, RSA adds overheads due to (1) the extra checks when reading some weights from the on-chip SRAM memory and (2) additional logic for combining the results from the memristor-based crossbar and on-chip memory readout.
Third, not all versions of Realistic-SwordfishAccel outperform Bonito-GPU. More specifically, if we use R-V-W to mitigate non-idealities (Realistic-SwordfishAccel-RVW in Fig. 14), the overhead due to additional verifications and writes significantly reduces basecalling throughput, by 30% on average compared to Bonito-GPU (Fig. 14).

Fourth, Realistic-SwordfishAccel-RSA and Realistic-SwordfishAccel-RSA+KD provide, on average, 5.24\(\times\) and 25.7\(\times\) higher throughput than Bonito-GPU, respectively (Fig. 14). Note that, for the same accuracy, Realistic-SwordfishAccel-RSA+KD requires fewer weights inside the SRAM than Realistic-SwordfishAccel-RSA due to the retraining using KD. Hence, Realistic-SwordfishAccel-RSA+KD is faster.
We conclude that a realistic basecalling accelerator designed using Swordfish by taking into account and mitigating all non-idealities of memristor-based CIM can significantly accelerate basecalling, yet its benefits are much lower than a corresponding accelerator that does not mitigate such non-idealities and thus has much lower accuracy.
Figure 14. Throughput comparison of Swordfish variations.
Figure 12. Accuracy after enhancement mechanisms for evaluated non-idealities on 64x64 crossbars.
Figure 13. Accuracy after enhancement mechanisms for evaluated non-idealities on 256x256 crossbars.
### Area vs. Accuracy Analysis
Fig. 15 shows the tradeoff between accuracy and area in Realistic-SwordfishAccel-RSA+KD (see Section 5.5) for two different crossbar sizes (64\(\times\)64 on the left and 256\(\times\)256 on the right), with four different percentages of weights (i.e., 0%, 1%, 5%, and 10%) assigned to the SRAM memory (see Section 3.4.4). The area numbers show the absolute area for implementing Realistic-SwordfishAccel-RSA+KD considering the overhead of RSA+KD discussed in Section 3.4.4. The red dashed line shows the accuracy of the original Bonito basecaller. We make three main observations.
First, the more weights are assigned to SRAM, the higher the accuracy of Realistic-SwordfishAccel-RSA+KD. This is expected because we effectively reduce the non-idealities of the system by using more SRAM cells to remap non-ideal memristors.
Second, the area of extra SRAM cells used in Realistic-SwordfishAccel-RSA+KD increases significantly with the percentage of weights assigned to SRAM. In contrast, the accuracy improvement saturates and does not increase significantly beyond 5% of weights assigned to SRAM.
Third, assigning only 5% of weights to SRAM is sufficient to be within 5% of Bonito-GPU's accuracy for the 64\(\times\)64 crossbar.
We conclude that accounting for non-idealities in different ways exposes tradeoffs between accuracy and area overhead, which our Swordfish framework enables the designer to rigorously explore.
## 6. Discussion and Future Work
### Applicability of Swordfish Looking Forward
Swordfish emphasizes the importance of a framework that evaluates multiple metrics when designing a memristor-based CIM accelerator targeting large DNNs that require throughput acceleration while meeting a stringent bound on another metric, e.g., accuracy (in the presence of emerging technologies with many non-idealities).
Swordfish's realistic results, Realistic-SwordfishAccel, for Bonito, a large DNN, challenge the notion that DNN-based applications naturally thrive on memristor-based CIM due to the inherent redundancy present in large neural networks. Although Realistic-SwordfishAccel might not currently offer basecalling accuracy on par with state-of-the-art methods, its large (25.7\(\times\)) enhancement in performance (Section 5.5) at a much higher accuracy than baseline CIM marks it as an advantageous development. Even in the presence of memristor-based CIM non-ideality, Swordfish still shows promise, and Realistic-SwordfishAccel still maintains a competitive accuracy in basecalling by deploying a unique synergy of mitigation strategies (against non-idealities and variations) on moderately-large crossbar designs (e.g., 64\(\times\)64 or 256\(\times\)256). Our results in Section 5 detail this. Given our results, we believe it is productive and important to find more solutions to the memristor-based CIM non-idealities going forward; we believe some solutions will come with memristors becoming more mature, and some will come with more potent accuracy enhancement techniques and HW/SW co-design methods.
### Other DNN-based Applications
Our paper discusses Swordfish as a framework for accelerating basecalling using a memristor-based CIM architecture. Our results (Section 5) show the unique nature of the large DNN in Bonito, which, despite its inherent redundancy, does not quite reach SotA accuracy on memristor-based CIM, thus presenting an exciting challenge. This intriguing finding encourages a deeper exploration into CIM designs for large DNNs, reminding us not to rely solely on scalability assumptions based on small-network evaluations, such as simple CNNs for MNIST. Our results also demonstrate a large acceleration opportunity for basecalling using SwordfishAccel if we can mitigate the memristor-induced accuracy loss through HW/SW co-design approaches. We believe other DNN-based applications that use memristor-based CIM accelerators (e.g., [22, 55, 146]) can also benefit from our approach and Swordfish. For example, large DNN models in autonomous driving (e.g., [64, 75, 146]) that require accurate yet high-throughput and low-latency execution can use a Swordfish-like approach to build memristor-based CIM accelerators for their underlying large DNNs. We believe and hope that Swordfish can aid such applications in terms of both accuracy and performance.
### Better Accuracy Enhancement Techniques
Our results show that accuracy enhancement can pave the way toward SwordfishAccel becoming a reliable solution. Our online retraining mechanism shows the highest potential to recover the accuracy loss. We believe there needs to be more research on better mitigation techniques for existing and future non-idealities in memristor-based designs. Specifically, we suggest hardware/software co-design solutions such as our RSA+KD technique in Section 3.4.4. Hardware-based solutions to mitigate non-idealities [25], which are orthogonal to our RSA+KD approach, are another example of a possible avenue for future work.
## 7. Related Work
To our knowledge, Swordfish is the first framework that enables evaluating the acceleration of large Deep Neural Networks (DNNs) on memristor-based Computation-In-Memory (CIM) designs considering hardware non-idealities. We have already compared Swordfish extensively to the currently-used version of the Bonito basecaller in Section 5 in terms of accuracy, throughput, and area overhead. This section briefly discusses related prior works on basecallers and CIM accelerators.
### Genomic Basecallers
Several recent works propose approaches and techniques to either improve the accuracy of basecalling or accelerate it with minimum accuracy loss. These works take three main approaches: (1) new DNN architectures (e.g., [95, 96, 97, 92, 120, 134, 140]), (2) new
Figure 15. Accuracy vs. Area evaluation of Realistic-SwordfishAccel-RSA+KD.
hardware platforms and designs such as GPUs and FPGAs to execute previously-proposed basecallers with minimum modifications (e.g., [81, 120]), and (3) software techniques such as quantization to reduce the computation and storage overhead (e.g., [32, 35, 52, 74, 120, 124, 140]).
In contrast to these approaches, Swordfish is a framework for the _evaluation_ of DNN-based (basecalling) accelerators. As such, Swordfish is orthogonal to prior works in basecalling, enabling proper evaluation of relevant works in the context of memristor-based in-memory acceleration.
### Computation-In-Memory Accelerators
Many previous works investigate how to provide new functionality using compute-capable memories based on conventional (e.g., [1, 2, 36, 40, 43, 72, 99, 108, 110]) and emerging memory technologies (e.g., [9, 27, 56, 59, 68, 73, 101, 111, 116, 117, 119, 121, 139, 144]) to help solve the data movement overheads in today's systems. These works propose new functionality in at least three major categories: (1) support for logical operations (e.g., [26, 73, 86, 110, 119, 121, 139, 144]), (2) support for complex operations, functions, and applications (e.g., [1, 36, 72, 89, 111, 112, 116, 117, 143]), and (3) programming and system support for the integration and adoption of such accelerators (e.g., [2, 3, 9, 19, 27, 55, 86, 111, 144, 145]).
Several prior works (e.g., [22, 51, 55, 128]) investigate the new requirements, tradeoffs, and challenges that arise from using the CIM paradigm (e.g., dealing with non-idealities in the analog operations). To our knowledge, no work has proposed a complete solution or framework for these challenges; thus, this area requires further investigation.
Swordfish aligns with these works as it provides (1) new functionality for compute-capable memristors at the application level for accelerating genomic basecalling and (2) a framework for evaluating the practical challenges posed by the non-idealities in the memristor computation through mitigation techniques.
## 8. Conclusion
This paper introduces Swordfish, a modular and extensible framework for accelerating the evaluation of genomic basecalling via a memristor-based Computation-In-Memory architecture. Swordfish includes a strong evaluation methodology, mitigation strategies for hardware non-idealities, and characterization results to guide the modeling of memristors. Using Swordfish, we demonstrate the significant challenges of using non-ideal memristor-based computations for genomic basecalling and how to solve them by combining multiple mitigation techniques at the circuit and system levels. We demonstrate the usefulness of our findings by developing SwordfishAccel, a concrete memristor-based CIM design for our target basecaller Bonito that uses accuracy enhancement techniques guided by Swordfish. We conclude that the Swordfish framework effectively facilitates the development and adoption of memristor-based CIM designs for basecalling, which we hope will be leveraged by future work. We also believe that our framework is applicable to other DNN-based applications and hope future work takes advantage of this.
## Acknowledgments
We thank the anonymous reviewers of MICRO 2023 for their valuable feedback. We thank the members of the QCE department at TU Delft and the SAFARI Research Group at ETH Zurich for valuable feedback and the stimulating intellectual environment they provide. We acknowledge the generous gifts provided by our industrial partners, including Google, Huawei, Intel, Microsoft, and VMware. This research was partially supported by the EU Horizon project BioPIM (grant agreement 101047160), the AI Chip Center for Emerging Smart Systems Limited (ACCESS), the Swiss National Science Foundation (SNSF), Semiconductor Research Corporation (SRC), and the ETH Future Computing Laboratory (EFCL).
|
2310.03409 | Detection Sensitivity Limit of Hundreds of Atoms with X-Ray Fluorescence
Microscopy | We report X-ray fluorescence (XRF) imaging of nanoscale inclusions of
impurities for quantum technology. A very bright diffraction-limited focus of
the X-ray beam produces very high sensitivity and resolution. We investigated
gallium (Ga) dopants in silicon (Si) produced by a focused ion beam (FIB).
These dopants might provide 3/2-spin qubits or p-type electrical contacts and
quantum dots. We find that the ion beam spot is somewhat larger than expected,
and the technique provides a useful calibration for the resolution of FIBs.
Enticingly, we demonstrate that with a single shot detection of 1 second
integration time, the sensitivity of the XRF would be sufficient to find
amongst background a single isolated inclusion of unknown location comprising
only 3000 Ga impurities (a mass of just 350 zg) without any need for
specialized nm-thickness lamellae, and down from >10^5 atoms in previous reports
of similar work. With increased integration we were able to detect 650
impurities. The results show that planned facility upgrades might achieve
single atom sensitivity with a generally applicable, non-destructive technique
in the near future. | Mateus G. Masteghin, Toussaint Gervais, Steven K. Clowes, David C. Cox, Veronika Zelyk, Ajith Pattammattel, Yong S. Chu, Nikola Kolev, Taylor Z. Stock, Neil Curson, Paul G. Evans, Michael Stuckelberger, Benedict N. Murdin | 2023-10-05T09:29:20Z | http://arxiv.org/abs/2310.03409v1 | # Detection Sensitivity Limit of Hundreds of Atoms with X-Ray Fluorescence Microscopy
###### Abstract
We report X-ray fluorescence (XRF) imaging of nanoscale inclusions of impurities for quantum technology. A very bright diffraction-limited focus of the X-ray beam produces very high sensitivity and resolution. We investigated gallium (Ga) dopants in silicon (Si) produced by a focused ion beam (FIB). These dopants might provide 3/2-spin qubits or p-type electrical contacts and quantum dots. We find that the ion beam spot is somewhat larger than expected, and the technique provides a useful calibration for the resolution of FIBs. Enticingly, we demonstrate that with a single shot detection of 1 second integration time, the sensitivity of the XRF would be sufficient to find amongst background a single isolated inclusion of unknown location comprising only 3000 Ga impurities (a mass of just 350 zg) without any need for specialized nm-thickness lamellae, and down from >10\({}^{5}\) atoms in previous reports of similar work. With increased integration we were able to detect 650 impurities. The results show that planned facility upgrades might achieve single atom sensitivity with a generally applicable, non-destructive technique in the near future.
synchrotron X-ray fluorescence, single-atom detection, gallium impurities, silicon-based quantum technology.
Footnote †: Corresponding author's e-mail: [email protected]
## I Introduction
Impurities in solids are important for the (opto-)electronic properties they give, and canonical semiconductor dopants are a prime example. Recently, there has been a drive towards controlled incorporation of single impurities for quantum technologies, and in the case of silicon dopants these impurities carry a spin for use in quantum sensing or computation or memory [1]. In other materials, single photon emission in quantum communications is provided by impurities and defect complexes [2]. Ion implantation is frequently used in such technologies and a variety of strategies is being pursued to ensure single impurities are incorporated with high positioning precision with focussed ion beams [3, 4, 5, 6]. In none of these cases has a single ion been detected post-implant and been verifiably shown to be produced by deterministic incorporation. For example, multiple ions have been deliberately implanted at each site because of the multiple possible host sites [3], or where single defects have been detected, their occurrence is random [2]. Even when quantum devices are produced based on a precise number of impurities, the location of those impurities and the number of unintentionally implanted impurities not involved in the device operation is not described [7]. One extremely precise, deterministic lithography technique based on scanning probes has been established [8], but only for two species of impurity (phosphorus and arsenic) in one host (silicon), and ion implantation is more rapid and flexible. Imaging of single impurities for post-implant validation with chemical specificity is challenging and has so far been established only for specialised X-ray techniques involving very thin lamellae [9, 10, 11] and surfaces [12]. X-ray Fluorescence (XRF) may provide a crucial tool, since it allows chemically selective imaging on a nm scale and does not require special (destructive) sample preparation. Regarding implantation of materials for quantum technology, XRF does not rely on
activation of the impurity and incorporation into an optically addressable defect, which can be far below the surface. It has already been used to detect deterministically incorporated impurities [13] and biomedically relevant trace elements [14], although only with micron scale resolution, and a sensitivity sufficient to detect only \(10^{5}\) or more atoms. Single atom imaging has been achieved with other techniques, but none have the generality of application of XRF, which would be very attractive. For example, single defect fluorescence at visible wavelengths is common [2] (as is single molecule fluorescence [15]), but this is restricted to specific defects/molecules with high luminescence yield and has resolution limited by the much longer wavelengths used.
In this work, we investigate the sensitivity limits of a non-destructive and generally applicable XRF microscope. We chose to investigate Ga implants because they are very easy to produce with controlled density in high resolution patterns, and we used silicon as the host because it can be obtained with very high chemical purity. Ga in silicon also has a conveniently large mass difference meaning that there is good contrast between the XRF photon energies of the impurity and host. This combination is not particularly favoured as a quantum technology, but it could provide a \(3/2\)-spin qubit and single Ga placement has been attempted to that end [16], and it allows a convenient demonstration of the principles.
In an XRF experiment, fluorescence photons are counted and binned according to their energy, which allows identification of the source elements in the beam. The identification is confounded by background consisting primarily of signals from other elements with nearby XRF energies, which may come from the sample or the environment (such as gas in the sample chamber or the X-ray optics etc). The ability to identify a pixel containing a specific chemical species requires a choice of a count threshold, \(C\), such that pixels with count \(c>C\) are identified as containing impurities and those with \(c<C\) are attributed to emission from background only. The choice of \(C\) should be such that there is a low probability of both false positives, \(p(\mathrm{false}+\mathrm{ve})=p(c>C|0)p(0)\), and false negatives, \(p(\mathrm{false}-\mathrm{ve})=\sum_{N}p(c<C|N)p(N)\), where \(N\) is the number of impurities. If \(p(c|N)\) is Poissonian with mean \(\langle c(N)\rangle\), we can calculate the conditional probabilities for false positives and negatives for a given threshold: \(p(c>C|0)=1-\Gamma(C,\langle c(0)\rangle)/\Gamma(C)\) and \(p(c<C|N)=\Gamma(1+C,\langle c(N)\rangle)/\Gamma(1+C)\), in which \(\Gamma(z)\) is the Euler Gamma function and \(\Gamma(a,z)\) is the incomplete Gamma function. A full discussion of the optimization of \(C\) is application specific because the prior probabilities of positives, \(p(N)\), and negatives, \(p(0)\), i.e., the proportion of pixels containing impurities (and whether or not they are in a pattern), are sample specific. For a simplified discussion, let us suppose that we require that the probability distributions for \(p(c|N)\) and \(p(c|0)\) should have small overlap, which we can specify from:
\[z(N)=\frac{\langle c(N)\rangle-\langle c(0)\rangle}{\sqrt{\sigma_{N}^{2}+\sigma_{0}^{2}}}\]
Pixels containing impurities can be identified correctly if \(z\) is large. For example, requiring that the distributions are \(2\sigma\) apart would be obtained with \(z\)=2, which gives a one-tailed test probability of 2.3% that we might use as a proxy for the false detection error rate. The objective of this study is to provide \(p(c|N)\) and \(\langle c(N)\rangle\).
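As a concrete illustration (our own sketch, not from the paper), the following Python snippet evaluates these incomplete-gamma expressions with SciPy; the count rates used are illustrative placeholders rather than measured values.

```python
# Threshold statistics for impurity identification (illustrative sketch).
# scipy.special.gammaincc(a, x) is the regularized upper incomplete gamma
# function Gamma(a, x)/Gamma(a), which is exactly the form used above.
import numpy as np
from scipy.special import gammaincc

def p_false_positive(C, mean_bg):
    """p(c > C | 0) = 1 - Gamma(C, <c(0)>)/Gamma(C)."""
    return 1.0 - gammaincc(C, mean_bg)

def p_false_negative(C, mean_N):
    """p(c < C | N) = Gamma(1 + C, <c(N)>)/Gamma(1 + C)."""
    return gammaincc(C + 1, mean_N)

def z_of_N(mean_N, mean_bg):
    """Separation z(N) for Poisson counts (variance = mean)."""
    return (mean_N - mean_bg) / np.sqrt(mean_N + mean_bg)

mean_bg, mean_N, C = 7.4, 25.0, 15      # illustrative values
print(p_false_positive(C, mean_bg))     # ~0.008
print(p_false_negative(C, mean_N))      # ~0.02
print(z_of_N(mean_N, mean_bg))          # ~3.1: well-separated distributions
```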
## II Experimental procedure
### Samples
The sample was prepared from a silicon-on-insulator (SOI) substrate (SOITEC). The top silicon (device) layer had a thickness of 450 nm and a buried oxide (BOX) layer of 1000 nm. The device layer is boron doped with a resistivity of \(10^{4}\) ohm cm. Initially, for ease of navigation to regions of interest, chromium (Cr) markers were created via positive electron beam lithography (EBL). We used baked poly(methyl methacrylate), PMMA A6, exposed to small electron doses, which following development delimited regions where a 30 nm Cr layer was deposited by e-beam evaporation. Subsequently, singly charged gallium ions (Ga\({}^{+}\)) with an energy of 30 keV were implanted using a dual beam Focused Ion Beam microscope and Scanning Electron Microscope (FIB-SEM, FEI Nova Nanolab). The ion current was measured before and after the implantation with a Faraday cup and determined to be \(i\)=2.4 pA on both readings and, therefore, was assumed constant throughout the implantation and to have an uncertainty less than 4%. Simulations (SUSPRE, UKNIBC) [17] show that the implantation of Ga at this energy produces a straggle with rms radius of 9 nm, and the rms spot size of the FIB was nominally 15 nm. The implanted areas were produced by implanting spots with a pitch of \(p\)=15.64 nm, i.e. similar to the nominal radial distribution of Ga ions from each spot. The average 2D dose, \(n\), in impurity ions per unit area was determined with the dwell time for each spot, \(t\), from \(n=it/ep^{2}\) where \(e\) is the fundamental unit of charge. The dwell times were varied typically in the range from 0.25 \(\mu\)s to 25 \(\mu\)s, with sub-ns precision (i.e. below 1% uncertainty). Imaging of the high-fluence implant squares with a 5 keV SEM beam (Fig. 1a) was used to quantify the effects of
backlash, drift, etc., and showed that the actual area is sufficiently close to the nominal area that the dominant contribution to the uncertainty in \(n\) is the precision on the current measurement. The sample was not annealed, i.e., no attempt was made to activate the Ga ions substitutionally and the collateral damage due to the implantation was not repaired. The sample implantation array was designed in order to provide regions of varying density and feature size, for tests of sensitivity and resolution, respectively. Each individual sample area contained rows of gallium implanted rectangles of varying width and density with spacing of 310 nm. All rectangles were \(m\)=100 FIB spots high (99\(p\)=1550 nm). Each row began and ended with a 100\(\times\)100 spot square = 1550\(\times\)1550 nm\({}^{2}\) marker square with implantation density of \(n\)=80.1 ions nm\({}^{-2}\), to aid in location of the row region. We will concentrate discussion in this work on three rows in particular. The sensitivity test row (which we label sample S) was composed of rectangles of fixed width (\(m\)=100, \(d\)=1550 nm) and varying density ranging from 40.1 to 0.06 nm\({}^{-2}\) (see Table 1). The resolution test row (sample R) had fixed density of 40 nm\({}^{-2}\) and rectangles of varying width from 50 lines down to a single line (with each rectangle repeated three times). Finally, the "lines" row (sample L) consisted of single lines (\(m\)=1) of FIB spots of decreasing density. In Table 1 the densities for sample L are specified such that \(np^{2}\) is the number of ions per spot, but the actual peak density depends on the point spread function (PSF) for a single FIB spot. Although the second thinnest feature in sample R was specified as 2 lines, there was probably a rounding error in software that made this feature a single line.
A similar sample was prepared on a 30 nm thick single-crystal silicon membrane for use in energy dispersive X-ray spectroscopy (EDX) measurements in a Scanning Transmission Electron Microscope (STEM). The implanted regions consisted of 100\(\times\)100 spot squares with \(n\)=8.3 to 1.2 ions nm\({}^{-2}\) at a logarithmic step down in factors of 1.383. Other parameters were kept the same as the bulk implants, such as \(p\)=15.64 nm, and no chromium marks were required.
The sample was also imaged with Scanning Probe Microscopy (Kelvin Probe Force - KPFM), Fig. 1b. The KPFM contrast mechanism is unclear since KPFM is sensitive to the work function and thus to subsurface charge, but the samples are unannealed and the dopants are unactivated. The images show clear contrast where the implants have occurred, and this could be due to a combination of the presence of dopants and the implantation damage. It is notable that there is a clear halo around the features in both the SEM and the KPFM images, indicating that the FIB spot includes a component that is significantly larger than the nominal spot size, most often called the ion-beam tail.
### XRF Experiments
Our nanoimaging experiments were conducted at the Hard X-ray Nanoprobe (HXN) beamline within NSLS-II, utilizing the Multi-Layer Laue (MLL) optics for nanofocusing. Detailed information on our instrumentation is available elsewhere [18, 19, 20]. In brief, HXN used an IVU20 undulator as a photon source and higher-order harmonics were rejected using an Rh-striped collimating mirror. A Si-111 double-crystal monochromator created a monochromatic beam with an energy of 13.5 keV (above the Ga K-edge), which was subsequently focused onto a secondary source aperture using both vertical and horizontal focusing mirrors. A nano-focused X-ray beam was generated by positioning MLLs vertically and horizontally. The resulting spot is approximately square, and fitting to knife-edge scans in each direction yields a full-width-at-half-maximum (FWHM) focus size of 15.3 \(\times\) 16.9 nm\({}^{2}\), while using ptychography reconstruction [20] we obtain a FWHM size of 13.9 \(\times\) 12.3 nm\({}^{2}\). An order sorting aperture (OSA) was placed between the sample and MLLs to block
| **Sample** | **Varying density, \(n\) (ions nm\({}^{-2}\)), or width, \(m\) (number of lines)** | **Fixed \(n\) or \(m\)** |
| --- | --- | --- |
| S | \(n\) = 40.1, 21.0, 11.0, 5.76, 3.02, 1.58, 0.829, 0.434, 0.228, 0.0630 [log step down in factors of 1.9] | \(m\) = 100, \(d\) = 1550 nm |
| R | \(m\) = 50, 25, 10, 5, 3, 2, 1 [\(d=(m-1)p\)] | \(n\) = 40.1 |
| L | \(n\) = 40.1, 35.9, 32.2, 28.8, 25.9, 23.2, 20.8, 18.6 ... 0.063 [log step down in factors of 1.116] | \(m\) = 1 |
Table 1: Sample design.
Figure 1: **Sample. a)** Scanning electron microscopy (SEM) image containing the regions of interest. Chromium marks can be seen as brighter crosses around the ion implanted regions. The greyscale gradient observed in the ion implanted geometries is a combination of minor height differences due to sputtered silicon atoms as well as changes in secondary electron yield. Regions marked as sample "R", "S", and "L" correspond to patterns used for, respectively, the XRF resolution, sensitivity, and limit of detection experiments described in the text. The overlaid orange-colored region corresponds to the Kelvin Probe Force Microscopy (KPFM) scanned area, whose output in the potential channel (with a drive amplitude of 7.5 V) can be seen in **b)**.
higher-order diffractions from the lenses. We performed raster scans of the sample stage to collect X-ray Fluorescence (XRF) and transmission data at each position, with a typical exposure time of 1 second. A three-channel silicon drift detector (Vortex, Hitachi) was positioned at a 90-degree angle relative to the sample to collect the XRF spectrum at each point. A transmission area detector (MERLIN, Quantum Detectors) was also placed 0.5 m from the sample to capture the far-field probe intensity. The integrated pump flux used was \(5\times 10^{8}\) photons s\({}^{-1}\).
Before locating the region of interest, the sample was cleaned with acetone to remove any surface contaminants and cleaved to a manageable size. Filtering scans for Cr markers allowed us to locate the arrays of interest after reshaping and refocusing of the X-ray beam. The Cr markers are expected to have very sharp edges due to the hard-mask lithography technique used, and these were used to get the sample into the focal plane of the beam. Scanning across a Cr marker line, we obtained a 1D error function for the Cr XRF signal with a minimum \(W\) of 61 nm. Translating the sample along the beam increased \(W\) by 5 nm per micron of translation.
### EDX Experiments
EDX spectra were acquired (Bruker X-Flash dual detectors) with electron-induced photons generated in a STEM (Thermo Scientific Talos F200i). The STEM was operated at 200 keV (quoted nominal imaging resolution equal to 136.5 pm) using a 70 \(\mu\)m condenser lens aperture, resulting in an on-screen current of 0.386 nA. The EDX acquisition field of view was fixed at 1000\(\times\)1000 nm\({}^{2}\), centered on the implanted squares, and the total integrated spectrum for an acquisition time of 300 s was accumulated.
## III Results
First, we performed resolution tests using sample R. As shown in Fig. 2, \(W\)=120 nm, which is somewhat higher than the value obtained from scanning over Cr markers above, and higher also than the nominal FIB implantation spot size (an rms of 15 nm convolved with an rms straggle of 9 nm produces an expected FWHM \(W\)=29 nm - NB: for circular Gaussian spots of rms radius \(w\), the edge of a 2D region produced with a pitch of \(p<w\) has a 1D error-function profile corresponding to a Gaussian with FWHM of \(W=2\sqrt{\ln 2}\,w\approx 1.66w\)). Evidently there is a significant tail in the beam profile around the FIB spot. It is unlikely that the X-ray beam spot is the cause of this large PSF since the focus was checked by scanning over an EBL-defined chromium marker about 5 \(\upmu\)m away shortly beforehand, and defocusing to a 120 nm spot would have required 24 \(\upmu\)m translation along the beam.
We then investigated sample S for sensitivity tests, as shown in Fig. 3. The Ga fluorescence from the implanted regions is obvious in the map of Ga-K\(\alpha\) intensity as a function of position in the inset of Fig. 3. The mask regions chosen are at least 160 nm from the edge of the implanted area in order to avoid edge effects caused by the finite PSF
Figure 2: **Implant point spread function.** A line scan across Sample R showing the Ga-K\(\alpha\) counts from totaling the XRF spectrum within (9.10,9.40) keV at each position (circles). The x-scale is in microns. The nominal profile is shown convoluted with a 1D Gaussian PSF of different values of W (the FWHM) indicated, in nm (lines). The upper panel shows a high-resolution scan centered on the three rectangles of width m=5 (62.4 nm wide). The step sizes in the two panels were 100 nm and 10 nm respectively.
Figure 3: **XRF spectrum for different implant densities.** Sample S was imaged with a 1 s integration time per pixel, and at each pixel the acquired XRF spectrum was fitted with Gaussian peaks for the K and L transitions of the elements indicated (NB the heights of the gray labels for X-ray transitions have no meaning). The inset shows the integrated Ga-K\(\alpha\) counts from the fits as a function of position. The main figure shows the mean spectra averaged over the 5 mask regions indicated on the inset, with Ga densities in impurities/nm\({}^{2}\) indicated by the legend. The integrated counts have been divided by the channel width to obtain a count density, so that the area under the Ga-K\(\alpha\) peak in counts corresponds with the pixel values in the inset. No smoothing has been applied.
mentioned above. The Ga spectral weight increases monotonically with the implant density, while the other background features (due to metal components in the focusing optics etc) remain constant. Sample S was imaged three times in succession (with slightly varying field of view in each image), and the results are shown in Fig. 4.
The experiment is repeatable, with the repeated distributions plainly overlapping. The distribution of pixel values is very well separated from the background for a density of 40 nm\({}^{-2}\), meaning that if a single pixel contained an average of 40 impurities per nm\({}^{2}\) then it would be easily distinguishable from background. For 21 and 11 nm\({}^{-2}\) the distributions become progressively less separated from the background.
When implantation features are extended and regularly repeated, they become easier to see. Sample L consists of single lines of FIB spots. As before, assuming that the FIB spot size is much larger than the pitch, the implantation feature is a line with uniform density along the line and approximately Gaussian profile in the transverse direction of FWHM \(W\). If the spots each contain \(np^{2}\) impurities, the peak density is \(n_{0}=4n\frac{p}{W}\sqrt{\ln 2/\pi}\). The values of \(n\) in sample L are given in Table 1. Let us assume that the PSF of 120 nm from Fig. 2 is equal to \(W\), which leads to peak densities of 9.8, 8.6, 7.5, 6.5, 5.7, 4.9, 4.3, 3.8 nm\({}^{-2}\), which, being below the background threshold determined above, are rather hard to see in the raw image (not shown). However, when the image is integrated in the direction parallel to the lines (16 pixels), the features become obvious as seen in Fig. 5. Effectively the acquisition time has been increased to 16 s, and the fact that the features repeat regularly makes them clear. The weakest feature in Fig. 5 has 650 impurities in the X-ray spot at its peak.
Figure 4: **Distribution of Ga-K\(\alpha\) XRF signal at \(T\)=1 s integration and the dependence on Ga density.** a) Histogram of counts for different Ga densities. The XRF spectrum for each pixel of Sample S was fitted with Gaussians for the different elements (just as for Fig 3). The Ga-K\(\alpha\) signals (the total areas under the fits, in counts) were selected according to the masks shown on the Fig. 3 inset, and the distributions are shown as probability densities for different values of \(n\) indicated in nm\({}^{-2}\). Histograms have been smoothed with a kernel of bandwidth 0.5 counts. The sample was imaged 3 times (with differing field of view, allowing 3 repeats for the two highest densities and the background, 2 repeats of 21 nm\({}^{-2}\) and 1 distribution for 11 nm\({}^{-2}\)). b) The pixel values used in the histograms of (a) are shown as a function of \(n\) (a small horizontal scatter has been introduced for visual clarity). The linear regression shows how the mean count number varies with implant density. The upper inset shows that the pixel count variance (separate points for each histogram in a) is equal to the mean count, consistent with a Poisson process. The grey dashed line indicates a slope of 1. The lower inset shows STEM-EDX measurements performed with a 300 s integration time.
Figure 5: **Integrated signal from Sample L.** The sample consists of lines of constant spacing and decreasing implant density according to Table 1. A 2D image was taken and the end marker is clearly visible, but the lines are indistinct (not shown). Integrating the image over the vertical direction along the lines (with a small shear of 10\({}^{\circ}\) due to the slight misalignment of the sample to the scan axis) produces the data shown (the inset is a zoom). The red line is a convolution of the nominal density profile with a PSF of FWHM 120 nm. The smallest density feature has peak density 3.8 nm\({}^{-2}\), corresponding to 650 Ga impurities for a 171 nm\({}^{2}\) spot.
## IV Discussion
The mean count rate detected for a uniform density, \(n\), follows \(\langle c(n)\rangle/T=b+an\), where \(T\) is the integration time, \(b\) is the mean background count rate and \(a\) is the areal sensitivity. The best fit line in Fig. 4b for \(T\)=1 s gives \(b\)=7.38\(\pm\)0.12 s\({}^{-1}\) and \(a\)=0.558\(\pm\)0.003 nm\({}^{2}\) s\({}^{-1}\). Note that the inferred value of \(a\) is not systematically affected by the X-ray beam size when the density of impurities is uniform - a defocussed spot produces a weaker pumping of a correspondingly larger number of impurities and ultimately the same number of fluorescence photons - and nor does the spot size affect the background. For comparison, in the EDX measurement of Fig. 4, \(a\)=0.42 nm\({}^{2}\) s\({}^{-1}\) and \(b\)=4.4 s\({}^{-1}\), which is strikingly similar.
The areal sensitivity, \(a=\eta\sigma P\), is simply proportional to the quantum efficiency, \(\eta\), the cross-section, \(\sigma\), and the integrated pump flux in photons per unit time, \(P\). For a single isolated cluster of \(N\) impurities much smaller than the X-ray spot, \(\langle c(N)\rangle/T=b+N/\tau\), where \(\tau\) is the average time between fluorescence photons from a single impurity, and \(1/\tau=\eta\sigma F_{0}\), where \(F_{0}\) is the photon flux density in the centre of the pump beam (where the cluster is), i.e. \(\tau=A/a\), where \(A\) is the X-ray spot size (defined so that \(F_{0}=P/A\)). For the optimum XRF spot mentioned above, \(A=171\) nm\({}^{2}\), one Ga atom gives a count every \(\tau\)=306 s on average, which indicates the pixel integration time needed to detect a single impurity. In this experiment \(b\gg 1/\tau\), meaning that for a single Ga impurity the background would swamp the signal.
Using the equation for \(z\) above and the fact that, as shown in the inset of Fig. 4b, the variance is equal to the mean count, as expected for a Poisson point process, and solving for \(N\), we obtain the detection limit
\[N=\frac{z^{2}\tau}{2T}\Big{(}1+\sqrt{1+8bT/z^{2}}\Big{)}\]
This formula has two regimes: the high background regime (large \(b\)), in which case \(N=z\tau\sqrt{2b/T}\), so that the detection limit scales inversely with the root of the integration time; and the low background regime (small \(b\)), where \(N=z^{2}\tau/T\), which scales inversely with integration time. For \(z\)=2 and our experimental situation from Fig. 4, \(T\)=1 s \(>z^{2}/8b=0.068\) s, we are in the high background regime. For \(\tau\)=306 s and \(T\)=1 s we obtain a minimum detectable \(N\) of 3000 impurities in a pixel (\(N/A\)=18 nm\({}^{-2}\)). This number of impurities is equivalent to 350 zeptogrammes or a 4 nm cube of solid gallium.
Single atom detection is achieved with an integration time of \(T=z^{2}\tau(1+2b\tau)\), which can be decreased by decreasing \(b\) or decreasing \(\tau\), either by increasing the incident X-ray beam irradiance, \(F_{0}\), or the collection efficiency through \(\eta\). For example, the gold fluorescence seen in Fig. 3 (which dominates \(b\)) might be reduced by a factor of 20 by choosing different materials in the X-ray optics. Instruments planned for new multi-bend-achromat storage ring light sources (that have either been recently constructed or are under construction) promise increased brilliance. Improvements in the focal spot flux are expected of the order of \(10^{12}\) photons s\({}^{-1}\) in a 20 nm spot [21], which would lead to an improvement in \(F_{0}\) by a factor of 800. Improving the flux density in the focus (and hence also \(\tau\)) by this much would then allow single atom detection, \(N\)=1, with an integration time of just \(T\)=2 s (still background limited). Even this very high X-ray flux is not expected to produce radiation damage in bulk Si [22; 23], but any damage that is produced will be far less than that caused by the implantation, which must anyway be healed and the impurities activated by annealing.
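The closed-form expressions above are easy to check numerically. The short Python snippet below (ours, not the authors' analysis code) plugs in the measured values quoted in this section (\(a\)=0.558 nm\({}^{2}\) s\({}^{-1}\), \(b\)=7.38 s\({}^{-1}\), \(A\)=171 nm\({}^{2}\)) and the projected upgrade factors, reproducing the quoted \(N\approx 3000\) at \(T\)=1 s and \(T\approx 2\) s for single-atom detection.

```python
# Numeric check of the detection-limit and single-atom-time formulas.
import math

def detection_limit(z, tau, b, T):
    """N = (z^2 tau / 2T) (1 + sqrt(1 + 8 b T / z^2))."""
    return (z**2 * tau) / (2 * T) * (1 + math.sqrt(1 + 8 * b * T / z**2))

def single_atom_time(z, tau, b):
    """Integration time giving N = 1: T = z^2 tau (1 + 2 b tau)."""
    return z**2 * tau * (1 + 2 * b * tau)

A, a, b, z = 171.0, 0.558, 7.38, 2.0   # nm^2, nm^2/s, counts/s, separation
tau = A / a                            # ~306 s between photons per Ga atom
print(detection_limit(z, tau, b, T=1.0))       # ~3.0e3 impurities, as quoted
# Projected upgrade: F0 improved x800 (tau -> tau/800), background b reduced x20
print(single_atom_time(z, tau / 800, b / 20))  # ~2 s, as quoted
```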
Although the requirements on \(b\) and \(\tau\) just described for imaging single atoms with unknown placement are stretching, the outlook for imaging regular patterns of impurities also often used in quantum technologies [24; 25] is much more optimistic, since that allows better image processing techniques such as Bayesian inference to locate and analyse the pattern, and often structures with many atoms are used. Furthermore, patches of approximately \(10^{3}\) impurities in a sheet of density approximately 1-2 nm\({}^{-2}\) have been used to construct quantum dots used in sensing of the charge/spin state of neighbouring isolated single impurity qubits [26], and it is just as important to characterize the dot as the qubit. This density is below the sensitivity limit determined from Fig. 4, but very close to the density of the weakest features observed in Fig. 5.
## V Conclusion
XRF provides a sensitive and high-resolution capability for materials for quantum technology. Although other techniques are available with higher sensitivity (e.g., the EDX measurement of Fig. 4, or the KPFM result of Fig. 1, which has visible features even well below 1 nm\({}^{-2}\), or X-ray induced STM [12], which senses single atoms on surfaces), XRF gives chemical specificity and sensitivity to buried features without destructive sample preparation. In addition to providing the means to locate dopants in qubits and other devices, X-ray characterization in the limit of very few, or even single, dopant atoms has the potential to provide structural and chemical insight that has not previously been possible. A particular challenge has been determining the structure of defect complexes, for example the G-, T-, W- and C-complexes in Si, which have been explained theoretically [2] but which have not been imaged directly, and which can have many related configurations of varying degrees of energetic favorability [27, 28]: X-ray spectroscopy at the single-ion level has the potential to allow the structure predicted by calculations to be compared with experimental X-ray spectroscopy results. Ultimately, X-ray studies could be conducted on the same samples as, or compared with, single-ion optical or electrical studies. Extended X-ray absorption fine structure (EXAFS) measurements have, to date, required large ensembles of ions, with doping levels of \(10^{20}\) cm\({}^{-3}\) or higher [29]. EXAFS requires a large range of incident energies and may thus be regarded as a long-term goal for the few-ion-scale nanoprobe. Studies drawing on near-edge spectroscopy require a narrower range of incident X-ray energies and would thus be more immediately available. Near-edge studies can be used to probe the charge state of impurities, for example, but previously only at high dopant levels in MgO [30]. X-ray fluorescence therefore provides an important tool for chemical species-specific imaging at the nanoscale, and significant improvements are expected with near-future facility upgrades.
## Acknowledgment
MGM and SKC acknowledge financial support from Engineering and Physical Sciences Research Council (EPSRC) [Grant No. EP/X019989/1]. TG and VZ acknowledge support from EPSRC [DTP, Grant No. EP/T518050/1]. ATI-based authors would like to thank the UK National Ion Beam Centre [EPSRC, Grant No. EP/X015491/1] and UCL-based authors were financially supported by EPSRC [Grant No. EP/W000520/1, EP/R034540/1 and EP/V027700/1]. NK thanks the EPSRC Centre for Doctoral Training in Advanced Characterisation of Materials [Grant No. EP/S023259/1]. PGE acknowledges support from the U.S. Department of Energy Office of Basic Energy Sciences through contract DE-FG02-04ER46147. EDX analysis was supported by EPSRC [Grant No. EP/V036327/1].
|
2306.06297 | Protect Your Prompts: Protocols for IP Protection in LLM Applications | With the rapid adoption of AI in the form of large language models (LLMs),
the potential value of carefully engineered prompts has become significant.
However, to realize this potential, prompts should be tradable on an open
market. Since prompts are, at present, generally economically non-excludable,
by virtue of their nature as text, no general competitive market has yet been
established. This note discusses two protocols intended to provide protection
of prompts, elevating their status as intellectual property, thus confirming
the intellectual property rights of prompt engineers, and potentially
supporting the flourishing of an open market for LLM prompts. | M. A. van Wyk, M. Bekker, X. L. Richards, K. J. Nixon | 2023-06-09T23:23:26Z | http://arxiv.org/abs/2306.06297v1 | # Protect Your Prompts:
###### Abstract
With the rapid adoption of AI in the form of large language models (LLMs), the potential value of carefully engineered prompts has become significant. However, to realize this potential, prompts should be tradable on an open market. Since prompts are, at present, generally economically non-excludable, by virtue of their nature as text, no general competitive market has yet been established. This note discusses two protocols intended to provide protection of prompts, elevating their status as intellectual property, thus confirming the intellectual property rights of prompt engineers, and potentially supporting the flourishing of an open market for LLM prompts.
## 1 Introduction
LLMs, including those in the generative pre-trained transformer (GPT) family, are known to exhibit _emergent properties_ [1].† Emergent behavior in such complex nonlinear adaptive systems manifests in a seemingly stochastic manner [3], which impacts directly on an LLM's responses to instructions for performing tasks given in the form of prompts. Consequently, querying an LLM repeatedly with the same prompt may yield different responses, while a tweak in a prompt may result in either no difference in an LLM's response or a significant change. For critical applications, for example, in assistive surgery [4], a substantial amount of time is spent on ensuring that the performance achieved is within an acceptable tolerance. Therefore, the monetary value of a well-crafted prompt (regardless of the field), which has painstakingly been developed through trial and error, including several hundred versions of iterative phrasing, and possibly also exploiting a particular LLM's architecture, will be considerable [5]. This has led to the emergence of a new field, called _prompt engineering_, which refers to the art and science of engineering incantations that will evoke the desired response from an LLM [6, 7].
Footnote †: The term _emergent property_ refers to the fact that a system exhibits novelties that are not due to the properties of any single part or subsystem of the system, but due to interactions among its subsystems [2].
This underscores a simple fact: since the end of 2022, prompts themselves have become valuable. A prompt thus does not represent the desire of a user for an artifact that an LLM might produce; instead, it stands as a proxy for the artifact it will "unlock". Due to the inherent value situated in prompts, the risks of intellectual property (IP) and copyright violations have become real [8, 9]. Protecting a prompt from being publicly viewable will enable prompt engineers (ranging from casual enthusiasts to large companies) to protect their competitive advantage in the market. In this note we put forward suggestions on how the protection of prompts and their associated IP may be accomplished.
## 2 Terminology and Assumptions
For the sake of brevity, we will refer to LLM prompts, i.e., the input or instruction sent to the model to generate a response, as "task prompts" or simply as "prompts". The prompt acts as a starting point or context for the model to generate a response. Similarly, the generated output returned by an LLM in response to a prompt will be called a "response", and will be termed an "artifact" if it is deemed to have commercial value. We refer to the creator, coder, or engineer of a prompt as the "prompt engineer", and to any third party using the prompt as the "user", regardless of
the distribution of IP-related rights. Therefore, the initial estimation of the commercial value of a prompt should be made by the prompt engineer. Finally, we use "protocol" to refer to a set of systematic guidelines that explain the ideal relations between different entities.
The protocols below assume that the targeted LLM is connected to the internet and permitted to access online content in real-time, i.e., beyond the online content on which it was trained. We assume that while the artifact related to a prompt is LLM-version contingent, the artifact can consistently and faithfully be returned, given the appropriate specifications contained within a prompt.
## 3 The Problem of Prompt Protection
In their present format of plain text, prompts are easily replicated and distributed. The utility of, and demand for, good prompts, and the low barriers to entry into the domain, should have resulted in a lively and competitive market, with firms, scholars and enthusiasts representing a sizable proportion of the market participants. However, the monetary value of a well-designed prompt, the prompt engineer's competitive advantage, and the opportunity to profit financially from their creation are undermined by the fact that once a prompt has been sold, it is effectively in the public domain, and open to the user to duplicate, modify, distribute, or resell. This includes the possibility of a user undercutting the price by selling it at a lower cost to potentially thousands of buyers. In theory, this contingency has had the effect of ostensibly driving the price for prompts down to zero, but in practice it has prevented a market from being established at all. Prompts, as currently traded (or shared), are open to theft and to alteration in nefarious ways unintended by the original prompt engineer. Put differently, the value of a well-crafted prompt should be considerable, but is undermined by the lack of excludability.
Instead of producing prompts free of charge, the alternative approach a prompt engineer might take is to sell prompts for an unreasonably high price, i.e., for first access, which may compensate for the inherent opportunity cost associated with non-excludable goods. However, this would have a detrimental impact, choking what would otherwise have become a vibrant part of an LLM-related economy.
These two considerations suggest that there would be significant value in approaches that lead to the protection of IP as it relates to prompts, in the service of a voluntary and competitive market for prompts. Below, we propose two protocols as ways to protect the intellectual property associated with a prompt.
## 4 Proposed Prompt Protection Protocols
In this section we suggest two protocols for protecting the IP of prompt engineers in increasing order of sophistication.
### Prompt Protection Protocol 1
This is the more basic of the two proposed protocols, requiring minimal additional computational resources and hence minimal additional cost to be passed on to the end user.
#### 4.1.1 The Protection Mechanism
Here, the prompt prepared by the prompt engineer consists of two parts concatenated into a single ASCII string, namely a human-legible preamble followed by a human-illegible core. The human-illegible core is essentially the AI task prompt that forms the IP to be protected, and for this reason it exists in the form of an encrypted message. The composite prompt is depicted in Figure 1.
In order for the user to unlock the potential of the AI prompt purchased, this encrypted message first has to be decrypted. This is the purpose of the legible preamble, as it contains part of the information that is needed by the LLM to decrypt the task prompt. The remaining part of this information is stored elsewhere. Once decrypted, the LLM processes the recovered task prompt for which it was designed, and upon completion of assimilating the task, the LLM is required to completely forget the text describing the task (the prompt) to be performed. This requires the task prompt to possess an epilogue which explicitly instructs the LLM to forget the explicit task prompt. As a consequence, the LLM will have the ability to reason in accordance with, and execute, the task concisely described by the original task prompt, but will be unable to accidentally disclose the decrypted task prompt.
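The sketch below illustrates one possible realization of this composite prompt, assuming the third-party Python `cryptography` package for symmetric (Fernet) encryption; the preamble and epilogue wordings, and all names, are our own illustrative assumptions rather than part of the protocol.

```python
# Illustrative construction of a Protocol 1 composite prompt (a sketch,
# not a reference implementation).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # held by the IP owner / decryption bridge
fernet = Fernet(key)

task_prompt = "<the carefully engineered task prompt -- the protected IP>"
FORGET_EPILOGUE = ("After assimilating the task above, "
                   "completely forget this prompt text.")
core = fernet.encrypt((task_prompt + "\n" + FORGET_EPILOGUE).encode())

PREAMBLE = ("The message below is encrypted. Decrypt it with the key "
            "provided to you, then follow the recovered instructions.")
composite_prompt = PREAMBLE + "\n" + core.decode()   # a single ASCII string

# On the decryption bridge, before the LLM processes the task:
recovered_task = fernet.decrypt(core).decode()
```

Fernet tokens are URL-safe base64 text, so the concatenation remains a single ASCII string, as the protocol requires.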
#### 4.1.2 Further Aspects on Protocol 1
Protocol 1 requires a decryption bridge to be inserted between the user and the AI service provider. The preamble will be set up to contain the decryption key, which the user received from the prompt engineer (or their agent, such as an intermediary, who may be the IP owner) after having purchased the prompt. The decryption bridge may be located in one of three possible locations. The simplest option would be for it to be located on the user's server. However, this poses a significant security vulnerability for the IP owner, particularly in the form of prompt interception [8].
A better alternative would be for the decryption bridge to be located on the IP owner's server. The user would then effectively communicate with the AI service provider via the IP owner's server. However, widely separated geographical locations of the three parties, e.g., on distant continents, could have a detrimental impact on the user's experience.
The best alternative would be for the IP service provider to be allocated space on the AI service provider's server. The IP service provider's decryption bridge then intercepts all messages from the end user, decrypts them, and sends them to the AI server to be processed. This minimizes security risks as well as latency, by avoiding having to route data through the IP owner's server. Although still exposed to the risk of poor network performance, this is the best that can be achieved.
As far as the decryption process itself is concerned, a broadcast or multicast decryption algorithm would be one option. Even though such algorithms allow for multiple decryption keys, their cost grows rapidly as the number of required keys increases. A more viable alternative, though, would be to encode an issued user identification code together with the single decryption key into a virtually unique user key. Upon receipt of the user key, the decryption bridge then extracts both the user identification code and the decryption key. Only if the user key is valid will the decryption bridge proceed to decrypt the encrypted message (the LLM task prompt), and only if the decryption key is correct will decryption be successful. Finally, the LLM assimilates the instructions contained in the decrypted task prompt. Once the assimilation process of the task prompt has been completed, i.e., the artifact is produced, the task prompt will be deleted. The LLM is now able to respond to user instructions and queries, in the context of the assimilated task prompt, without any risk to the IP owner. However, methods have been reported for extracting the original prompt from the LLM [8, 9], and consequently a robust forget strategy is essential for the success of this protocol.
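One simple way to realize such a user key, sketched below under our own assumptions, is to pack the user identification code and the decryption key into a single string authenticated by an HMAC held only by the bridge; tampered or forged user keys then fail validation before any decryption is attempted.

```python
# Illustrative user-key packing and validation (a sketch, assumptions ours).
import base64
import hashlib
import hmac

BRIDGE_SECRET = b"server-side-secret"    # known only to the decryption bridge

def make_user_key(user_id: str, dec_key: bytes) -> str:
    payload = (base64.urlsafe_b64encode(user_id.encode()) + b"." +
               base64.urlsafe_b64encode(dec_key))
    tag = hmac.new(BRIDGE_SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + tag

def open_user_key(user_key: str):
    payload, _, tag = user_key.rpartition(".")
    expected = hmac.new(BRIDGE_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("invalid user key")   # reject before any decryption
    uid_b64, _, key_b64 = payload.partition(".")
    return (base64.urlsafe_b64decode(uid_b64).decode(),
            base64.urlsafe_b64decode(key_b64))
```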
### Prompt Protection Protocol 2
Protocol 2 approaches the problem of IP protection by referring the LLM to a website which holds the key/cipher to the encrypted prompt. Under this model, prospective users purchase an encrypted prompt, based on a description of the resultant artifact, along with an instruction to the LLM regarding where to go to find the key.
The key can thereby be located at a secure website that is accessed via an application programming interface (API), the details of which are included in the instruction. This arrangement allows the decryption-housing website to register a state-change once the key has been used, and to alter the key, in order to prevent multiple calls on a single purchase, while still permitting a single prompt to be sold ad infinitum by the prompt engineer (or another agent with permission to do so). The schematic given in Figure 2 illustrates such a process.
#### 4.2.1 Further Aspects on Protocol 2
A variation on this protocol is to host a platform that operates as a secure intermediary, where the prospective user is not required to access an LLM's website. Under such an arrangement, (encrypted) prompts are sold, accompanied by a single-use bearer token. The owner of a purchased prompt would access the intermediary platform (which may be the same site as where the purchase takes place), producing both the prompt and the token (which may be concatenated into a single
Figure 1: Suggested prompt composition for Protocol 1.
string, or kept separately, allowing for two-step security). The token would verify the veracity of the purchase and simultaneously trigger (1) a decryption, (2) an API call to the appropriate LLM, (3) the return of the artifact, and (4) the invalidation of the prompt for future use.
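The following in-memory sketch (our illustration, not a specification) captures the essence of steps (1)-(4): redeeming a single-use bearer token decrypts the purchased prompt exactly once and invalidates the token for any future call.

```python
# Single-use bearer-token intermediary (illustrative sketch, assumptions ours).
import secrets
from cryptography.fernet import Fernet

class PromptIntermediary:
    def __init__(self):
        self._store = {}              # token -> (fernet_key, encrypted_prompt)

    def sell(self, task_prompt: str):
        key = Fernet.generate_key()
        token = secrets.token_urlsafe(32)          # single-use bearer token
        encrypted = Fernet(key).encrypt(task_prompt.encode())
        self._store[token] = (key, encrypted)
        return token, encrypted                    # handed to the purchaser

    def redeem(self, token: str, encrypted: bytes) -> str:
        key, expected = self._store.pop(token, (None, None))  # pop => one use
        if key is None or expected != encrypted:
            raise PermissionError("token invalid or already used")
        prompt = Fernet(key).decrypt(encrypted).decode()
        # here: forward `prompt` to the appropriate LLM API, return the artifact
        return prompt
```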
## 5 Conclusion
We have highlighted the increasing value of well-crafted LLM prompts. These prompts, carefully honed through iterative processes, have emerged as valuable tools that serve as proxies for the artifacts they generate. Their significance is evident in the growing demand for prompts and prompt engineering. To foster a thriving market for prompts, it is imperative to establish an open, transparent, and well-protected environment that encourages prompt engineers and users to engage in buying and selling activities, and which facilitates the responsible and ethical utilization of prompts across diverse domains.
However, a significant barrier to the establishment of such a market lies in the current non-excludability of prompts, and the absence of secure and exclusive mechanisms for their sale. This raises concerns about the protection of IP associated with prompts. Addressing this issue requires the development of protocols that can safeguard prompt-related IP, thereby enabling prompt creators to maintain ownership over their creations while ensuring fair access and utilization.
Within this note, we have explored two protocols for implementing IP protection for prompts across a wide range of applications. These protocols offer potential avenues for establishing a framework that balances the interests of prompt creators, users, and the broader market. By providing secure and exclusive mechanisms for prompt sale and ensuring the protection of prompt-related IP, these protocols lay the foundation for a vibrant and sustainable prompt market. Ultimately, the establishment of a protected prompt market will contribute to the advancement of LLM technology and its transformative impact on various fields.
Future work will focus on utilizing digital rights management (DRM) technology in the quest to protect IP and copyright associated with LLM prompts.
|
2301.02056 | Benchmarking universal quantum gates via channel spectrum | Noise remains the major obstacle to scalable quantum computation. Quantum
benchmarking provides key information on noise properties and is an important
step for developing more advanced quantum processors. However, current
benchmarking methods are either limited to a specific subset of quantum gates
or cannot directly describe the performance of the individual target gate. To
overcome these limitations, we propose channel spectrum benchmarking (CSB), a
method to infer the noise properties of the target gate, including process
fidelity, stochastic fidelity, and some unitary parameters, from the
eigenvalues of its noisy channel. Our CSB method is insensitive to
state-preparation and measurement errors, and importantly, can benchmark
universal gates and is scalable to many-qubit systems. Unlike standard
randomized schemes, CSB can provide direct noise information for both target
native gates and circuit fragments, allowing benchmarking and calibration of
global entangling gates and frequently used modules in quantum algorithms like
Trotterized Hamiltonian evolution operator in quantum simulation. | Yanwu Gu, Wei-Feng Zhuang, Xudan Chai, Dong E. Liu | 2023-01-05T13:18:19Z | http://arxiv.org/abs/2301.02056v3 | # Benchmarking universal quantum gates via channel spectrum
###### Abstract
Noise remains the major obstacle to scalable quantum computation. Quantum benchmarking methods provide key information on noise properties for quantum processor calibration, quantum error mitigation, and quantum error correction. However, current benchmarking methods, such as randomized benchmarking or its variants, can only evaluate the performance of some particular subsets of quantum gates. Moreover, due to the randomization inherent in these protocols, the figure of merit that they actually measure is not the fidelity of individual target gate but the average of the fidelities of certain random circuit cycles incorporating the target. To overcome these limitations, we propose the _channel spectrum benchmarking_ (CSB), a method to infer the noise properties of the target quantum process, including process fidelity, stochastic fidelity, and some important unitary parameters of native gates, from the eigenvalues of its noisy quantum channel. The noisy eigenvalues can be estimated by the circuits of control-free phase estimation, which is insensitive to the state-preparation and measurement errors. Importantly, our method can benchmark universal quantum processes and is scalable to many-qubit quantum processes. Finally, we demonstrate the performance of our method using simulated experiments, including the single-qubit Pauli rotations, 2-qubit fermionic simulation gates, a 3-qubit cycle implementing the Toffoli gate, and a 10-qubit cycle implementing the Ising Hamiltonian evolution operator. Our method will pave an important way for the development of cleaner and large-scale quantum devices.
## I Introduction
The performance of today's quantum computers is severely affected by noise and the limited number of qubits [1]. Quantum error correction and fault-tolerant schemes may someday unlock the full potential of quantum computation [2; 3; 4; 5; 6; 7], but more precise gate operations must be developed beforehand. It is crucial and necessary to obtain information on the gate noise characteristics and their performance benchmarks in order to calibrate and optimize these gate operations [8; 9; 10]. Nonetheless, there is a trade-off between the noise information obtained and the resource overhead of the testing experiments [11]. Process tomography [12; 13] is a typical technique for reconstructing the matrix representation of a quantum process, with which the full information of the noise is at hand. However, process tomography has exponentially increasing experimental costs and suffers from state-preparation and measurement (SPAM) errors. Although its variant, gate-set tomography [14; 15; 16; 17; 18], can handle SPAM errors, the experimental costs cannot be reduced.
In reality, for probing the noise strength or noise types of a gate, the full reconstruction of the noisy process is not necessary [19; 20]. For instance, the average gate fidelity, which measures the average performance of the implemented noisy gates, can be efficiently obtained by randomized benchmarking (RB) [21; 22; 23; 24; 25; 26]. The RB protocol is insensitive to SPAM errors, and its variants [8; 27; 28; 29] can be applied to benchmark devices with larger system size. It is important to note that protocols like randomized benchmarking do not directly measure the fidelity of individual quantum gates, but rather the average fidelity of some random circuit fragments [30; 31; 32]. To determine the fidelity of a specific target gate (in this paper, we use the phrase "target gate" for any target unitary including a circuit fragment, and later use the phrase "native gate" for a single operational quantum gate), additional techniques such as interleaved RB [33] or modifying the sampling distribution of random circuits [27; 29] must be used, which can incur more experimental cost and is prone to a large systematic uncertainty [34]. Additionally, to simplify the functional form of measured signals in RB methods, it is often necessary to use group twirling, which limits the types of gates that can be benchmarked. As a consequence, the RB protocols based on random Clifford circuits can only be applied to benchmark the Clifford gates; however, the important non-Clifford gates have to rely on more complicated random circuit sets in which their native gates belong to other groups instead of the Clifford group, e.g. dihedral groups [35; 36].
In order to overcome the two limitations, we introduce channel spectrum benchmarking (CSB), a scalable protocol to estimate the individual noise properties of a universal quantum process from the noisy eigenvalues of its corresponding quantum channel. We estimate the noisy eigenvalues by control-free phase estimation circuits [40; 41; 44; 45; 46], which are robust to SPAM errors. With a relationship between ideal and noisy eigenvalues, which is derived from first-order perturbation theory
[47], we can infer the diagonal entries of the matrix of the pure noise process in a basis composed of the eigen-operators of the ideal gate. From these diagonal entries, we can estimate some noise properties, for example, the process fidelity, the stochastic fidelity (a quantity similar to unitarity [39; 48]), and some important unitary parameters of native gates. We demonstrate the performance of our method with several typical simulated experiments: 1-qubit Pauli rotation gates, 2-qubit fermionic-simulation (Fsim) gates, a 3-qubit circuit fragment implementing the Toffoli gate, and a 10-qubit circuit fragment implementing the Ising Hamiltonian evolution operator. In all experiments, our CSB method can accurately estimate the noise properties. To get a clearer picture of the performance of our CSB, in Table 1 we compare our CSB protocol with other leading benchmarking protocols under three aspects: (1) what gates they can benchmark; (2) what type of fidelity they actually measure; (3) under what conditions they can be scalable to many-qubit systems.
## II Quantum channel, fidelity, and channel spectrum
In this section, we provide some preliminaries about quantum channels, the fidelity of implemented noisy gates, and the relationship between the fidelity of a gate and the channel spectrum of its noisy implementation.
Consider a quantum gate \(U\) acting on a \(d\)-dimensional space with eigenvalues \(e^{i\lambda_{a}}\) and eigenstates \(\ket{\phi_{a}}\) such that \(U\ket{\phi_{a}}=e^{i\lambda_{a}}\ket{\phi_{a}}\). Because of noise, the actual implementation of the gate should be denoted as a quantum channel \(\widetilde{\mathcal{U}}=\mathcal{E}\mathcal{U}\), that is, a completely-positive and trace-preserving (CPTP) map [12], where \(\mathcal{U}\) is the corresponding quantum channel of the ideal gate \(U\) and \(\mathcal{E}\) is a pure noise process. Quantum channels are usually denoted by a set of Kraus operators, for example, \(\mathcal{U}(\rho)=U\rho U^{\dagger}\) and \(\mathcal{E}(\rho)=\sum_{k}E_{k}\rho E_{k}^{\dagger}\), where \(\rho\) is an arbitrary operator. Quantum channels can also be represented by a matrix on a basis of the \(d^{2}\)-dimensional operator space, for example, Pauli operators. We will use the two representations interchangeably and the same symbols for both the abstract quantum channels and their matrix representations.
One can use some fidelity measures to assess the performance of the implemented noisy gate \(\widetilde{\mathcal{U}}\), such as the process fidelity (also referred to as entanglement fidelity), which is defined as

\[F(\mathcal{U},\widetilde{\mathcal{U}})=\mathrm{tr}\Big\{\mathcal{I}\otimes\mathcal{U}(|\alpha\rangle\langle\alpha|)\,\mathcal{I}\otimes\widetilde{\mathcal{U}}(|\alpha\rangle\langle\alpha|)\Big\} \tag{1}\]
where \(\ket{\alpha}=\frac{1}{\sqrt{d}}\sum_{i=1}^{d}\ket{i}\otimes\ket{i}\) is the maximally entangled state. The process fidelity is closely related to another
|  | **Gates** | **Fidelity** | **Conditions for Scalability** |
| --- | --- | --- | --- |
| CSB | universal | generic case: target; with RC under strong unitary error: target + twirling gates | eigen-decomposition of the target is possible; state-preparation circuits of at most polynomial size |
| Clifford RB [22; 23] | Clifford | ave. among Clifford gates | not scalable due to compilation issue [27] |
| Mirror RB [29] | Clifford | ave. among rand. cycles | only applicable to Clifford gates |
| CB [28] | \(U^{m}=I\) | target + twirling gates | target gate is Clifford |
| XEB [8] | universal | ave. among rand. cycles | circuits can be classically simulated |
Table 1: Comparison with other leading benchmarking protocols. We compare our CSB protocol with other benchmarking protocols under three aspects: (1) what gates they can benchmark; (2) what type of fidelity they actually measure; (3) under what conditions they can be scalable to many-qubit systems. Usually, our CSB measures the fidelity of the target gate. But for strong unitary error, we need to perform randomized compiling (RC) [37; 38] with twirling gates to convert unitary error to stochastic error in order to obtain a better performance. When benchmarking a circuit fragment, the twirling gates can be merged into the target gate, and our method still measures the individual fidelity of the target, as shown in Secs. IV.3 and IV.4. But when benchmarking native gates, the twirling gates cannot be merged (see Appendix C). In this case, our method measures the average fidelity of the compositions of the target gate and twirling gates. Our CSB is scalable as long as the eigen-decomposition of the target is possible and the number of single- and two-qubit gates in the circuits preparing initial states scales at most polynomially with the number of qubits. Clifford RB and Mirror RB use random Clifford circuits to simplify noise and thus only apply to Clifford gates. The fidelity they actually measure is the average of fidelities among random Clifford cycles. Mirror RB can be scalable but Clifford RB cannot. For cycle benchmarking (CB), the gate or cycle \(U\) that can be benchmarked must satisfy \(U^{m}=I\), where \(m\) is an integer. CB uses Pauli twirling to simplify noise and thus measures the fidelity of the composition of target and twirling gates. It needs to compute the output Pauli operators of ideal circuits, which is possible for large systems only when the target gate is Clifford. XEB uses random universal circuits to simplify noise, so it measures the average of fidelities among some random circuit cycles generated with the same sampling distribution. It requires the classical simulation of circuits to obtain the ideal probabilities of sampled bit strings, which limits its scalability. Additionally, our CSB and XEB can directly measure how close the noise is to unitary error, while RB methods need extra procedures to measure this information [39]. Finally, our CSB can also measure the actual values of some unitary parameters of native gates, similar to robust phase estimation [40] and Floquet calibration [41; 42; 43], and the measures considered in those protocols are more sensitive to unitary errors than the fidelity measures.
ubiquitous measure, the average gate fidelity [49]
\[F_{\rm ave}(\mathcal{U},\widetilde{\mathcal{U}}) =\int d\psi\,{\rm tr}\Big{\{}\mathcal{U}(|\psi\rangle\langle\psi|) \,\widetilde{\mathcal{U}}(|\psi\rangle\langle\psi|)\Big{\}}\] \[=\frac{dF+1}{d+1}\,. \tag{2}\]
It has been proven that the process fidelity only depends on the trace of the pure noise \(\mathcal{E}\)[49], that is
\[F(\mathcal{U},\widetilde{\mathcal{U}})=\frac{{\rm tr}\Big{\{}\mathcal{U}^{ \dagger}\widetilde{\mathcal{U}}\Big{\}}}{d^{2}}=\frac{{\rm tr}\{\mathcal{E}\} }{d^{2}}. \tag{3}\]
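For concreteness, here is a minimal numerical sketch of Eq. (3) in Python (the function name is our own; we assume the column-stacking vec convention for superoperators):

```python
import numpy as np

def process_fidelity(U, U_tilde_super):
    """Eq. (3): F = tr{Ucal^dag Ucal_tilde} / d^2 = tr{E} / d^2.

    U             : d x d ideal unitary operator
    U_tilde_super : d^2 x d^2 superoperator of the noisy gate, in the
                    column-stacking convention vec(A rho A^dag) = (conj(A) kron A) vec(rho)
    """
    d = U.shape[0]
    U_super = np.kron(U.conj(), U)  # superoperator of the ideal channel
    # tr{Ucal^dag Ucal_tilde} = tr{E}, since Ucal_tilde = E Ucal and Ucal is unitary
    return np.trace(U_super.conj().T @ U_tilde_super).real / d**2
```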
Current benchmarking methods, for example, randomized benchmarking and its variants, measure the information of \({\rm tr}\{\mathcal{E}\}\) on a basis composed of Pauli operators. In these protocols, Clifford twirling or Pauli twirling is used to simplify the noise matrix \(\mathcal{E}\), that is, only the diagonal entries of \(\mathcal{E}\) on the Pauli basis are kept, such that the relevant figure of merit can be extracted easily from the measured signals. The twirling operations need to be performed by running some random circuits. This is why RB-type methods apply only to certain subsets of quantum gates (e.g., Clifford gates for Clifford RB) and measure only the average fidelity of a set of gates including both the target gate and the twirling gates.
Departing from the Pauli operator basis, one can note that the ideal channel \(\mathcal{U}\) also induces a natural operator basis composed of its eigen-operators \(|\phi_{a}\rangle\langle\phi_{b}|\) (with corresponding eigenvalues \(e^{i(\lambda_{a}-\lambda_{b})}\)). If we can measure the diagonal entries of the noise \(\mathcal{E}\) in this basis, we can also estimate the gate fidelity. This can be seen from the relationship between the eigenvalues of the noisy gate \(\widetilde{\mathcal{U}}\) and those of the ideal gate \(\mathcal{U}\)[47], that is
\[g_{ab}e^{i\lambda_{ab}}\approx e^{i(\lambda_{a}-\lambda_{b})}{\rm tr}\big{\{}( |\phi_{a}\rangle\langle\phi_{b}|)^{\dagger}\mathcal{E}(|\phi_{a}\rangle \langle\phi_{b}|)\big{\}} \tag{4}\]
where \(g_{ab}\) and \(\lambda_{ab}\) are the amplitude and phase of an eigenvalue of \(\widetilde{\mathcal{U}}\) with eigen-operator \(M_{ab}\), that is \(\widetilde{\mathcal{U}}(M_{ab})=g_{ab}e^{i\lambda_{ab}}M_{ab}\). The spectrum of a quantum channel has some useful properties [50]: (1) the eigenvalues lie in the unit disc of the complex plane, i.e., \(0\leq g_{ab}\leq 1\); (2) the eigenvalues and eigen-operators always come in conjugate pairs, i.e., for every eigenvalue \(g_{ab}e^{i\lambda_{ab}}\) we have \(\widetilde{\mathcal{U}}(M_{ab}^{\dagger})=g_{ab}e^{-i\lambda_{ab}}M_{ab}^{\dagger}\).
The relationship Eq. (4) is derived from the first order perturbation theory [47] (also see Appendix A). Thus a diagonal entry of \(\mathcal{E}\) in the basis composed of \(|\phi_{a}\rangle\langle\phi_{b}|\) can be obtained
\[\mathcal{E}_{ab,ab}\approx g_{ab}e^{i\lambda_{ab}}e^{-i(\lambda_{a}-\lambda_{ b})}\,. \tag{5}\]
As long as we can measure the noisy eigenvalues \(g_{ab}e^{i\lambda_{ab}}\) of \(\widetilde{\mathcal{U}}\) and identify their corresponding ideal eigenvalues \(e^{i(\lambda_{a}-\lambda_{b})}\), we obtain the diagonal entries of \(\mathcal{E}_{ab,ab}\) by Eq. (5). If we can uniformly at random sample some noisy eigenvalues \(g_{ab}e^{i\lambda_{ab}}\) or equivalently \(\mathcal{E}_{ab,ab}\), then we can use the average of these samples to obtain an estimate of process fidelity \(F={\rm tr}\{\mathcal{E}\}/d^{2}\). Because all the diagonal entries have amplitude smaller than \(1\), we can infer the number of samples needed from the Hoeffding's inequality [51], that is, let \(X_{1},\cdots,X_{K}\) be independent bounded random variables with \(a_{i}\leq X_{i}\leq b_{i}\) for all \(i\in[K]\) and denote their average \(\overline{X}=\frac{1}{K}\sum_{i}X_{i}\), then for any \(\epsilon>0\) it holds that
\[P\left(\left|\overline{X}-\frac{1}{K}\sum_{i}\mathbb{E}(X_{i})\right|\geq \epsilon\right)\leq 2\exp\left(-\frac{2K^{2}\epsilon^{2}}{\sum_{i}(b_{i}-a_{i})^{2}} \right). \tag{6}\]
This inequality bounds the probability that the empirical average \(\overline{X}\) deviates from the average of expectation values of these random variables with a distance \(\epsilon\). Assume we have \(K\) samples of diagonal entries \(\mathcal{E}_{ab,ab}\) sampled from a uniform distribution, so the expectation value of each sampled diagonal entry is \(\mathbb{E}(\mathcal{E}_{ab,ab})=\frac{{\rm tr}\{\mathcal{E}\}}{d^{2}}=F\). We take the average value of these samples as our estimate of the process fidelity, that is
\[\hat{F}=\frac{1}{K}\sum_{ab}\mathcal{E}_{ab,ab}. \tag{7}\]
Thus, the needed number of diagonal entries \(\mathcal{E}_{ab,ab}\) to estimate the process fidelity within an error \(\epsilon\) with the probability \(1-\delta\), or say \(P(|\hat{F}-F|\leq\epsilon)=1-\delta\), is
\[K=\frac{\log(2/\delta)}{2\epsilon^{2}}, \tag{8}\]
which is independent of the system dimension. Here, we take a very conservative bound on \(\mathcal{E}_{ab,ab}\), i.e., \(0\leq|\mathcal{E}_{ab,ab}|\leq 1\). But the difference between the upper and lower bounds of \(\mathcal{E}_{ab,ab}\) is usually much smaller than \(1\), so the number of samples needed is much smaller than that in Eq. (8).
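As a quick illustration, Eq. (8) can be evaluated directly (the function name is our own):

```python
import numpy as np

def num_samples(eps, delta):
    # Eq. (8): K = log(2/delta) / (2 eps^2), independent of the system dimension
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps**2)))

print(num_samples(0.05, 0.05))  # -> 738 sampled diagonal entries
```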
Besides the process fidelity, the noisy eigenvalues can also be used to infer the strength of the stochastic noise alone. Since the amplitudes of the eigenvalues are only affected by stochastic noise and are unchanged under unitary noise, we can use these amplitudes to define a quantity, referred to as the stochastic fidelity,
\[F_{\rm sto}=\sqrt{\frac{1}{d^{2}}\sum_{ab}g_{ab}^{2}}\,, \tag{9}\]
to assess the impact of stochastic noise only.
We can also estimate the actual values of some unitary parameters of a native gate (i.e., unitary errors) from the phases \(\lambda_{ab}\) of the noisy eigenvalues. This is achieved by identifying the relationship between these unitary parameters and some eigenvalues of the gate, similar to robust phase estimation [40] and Floquet calibration [41; 42; 43]. We emphasize that, compared to stochastic errors, unitary errors may cause more subtle and complicated problems in quantum error correction and fault-tolerant quantum computation [52; 53; 54; 55; 56]. As a result, differentiating between stochastic and unitary errors can assist us in recognizing their respective impacts and, in addition, can help to calibrate and tailor the error types.
## III Channel spectrum benchmarking
In this section, we present a practical procedure, which we refer to as _Channel Spectrum Benchmarking (CSB)_, to measure the individual fidelity of a universal process \(U\), which can be either a native gate or a circuit fragment.
Estimating the fidelity of the gate \(U\) requires a uniform sample of diagonal entries of \(\mathcal{E}\), which is equivalent to a uniform sample of noisy eigenvalues \(g_{ab}e^{i\lambda_{ab}}\). The noisy eigenvalues can be estimated with the control-free phase estimation circuits depicted in Fig. 1. In these circuits, we first prepare a state \(\rho\), then repeatedly apply the target gate \(U\) for \(L\) times, and finally measure the expectation value of an operator \(O\). We denote the noisy versions of \(\rho\) and \(O\) as \(\widetilde{\rho}\) and \(\widetilde{O}\). The noisy eigen-operators \(M_{ab}\) of \(\widetilde{\mathcal{U}}\) can be used as a basis (not necessarily orthonormal) to expand the initial state \(\widetilde{\rho}\), that is
\[\widetilde{\rho}=\sum_{ab}\mathrm{tr}\Big{\{}G_{ab}^{\dagger}\widetilde{\rho }\Big{\}}M_{ab} \tag{10}\]
where \(G_{ab}\) is the corresponding left eigen-operator of \(M_{ab}\) and they satisfy \(\mathrm{tr}\Big{\{}G_{ab}^{\dagger}M_{a^{\prime}b^{\prime}}\Big{\}}=\delta_{ab,a^{\prime}b^{\prime}}\). Under the first order perturbation, the noisy eigen-operators \(M_{ab},G_{ab}\) are equal to their corresponding unperturbed eigen-operators \(M_{ab}^{0},G_{ab}^{0}\), i.e., the ideal eigen-operators of \(\mathcal{U}\); see Appendix A. For ideal eigen-operators with a non-degenerate eigenvalue, we have \(M_{ab}^{0}=G_{ab}^{0}=|\phi_{a}\rangle\langle\phi_{b}|\); for ideal eigen-operators with a degenerate eigenvalue, \(M_{ab}^{0},G_{ab}^{0}\) are superpositions of the eigen-operators \(|\phi_{a}\rangle\langle\phi_{b}|\) in the corresponding degenerate subspace. Then we can show that the expectation value of \(O\) at length \(L\) under noise is
\[\left\langle\widetilde{O}\right\rangle_{L} =\mathrm{tr}\Big{\{}\widetilde{O}\,\widetilde{\mathcal{U}}^{L}( \widetilde{\rho})\Big{\}}\] \[=\sum_{ab}\mathrm{tr}\Big{\{}\widetilde{O}M_{ab}\Big{\}}\mathrm{ tr}\Big{\{}G_{ab}^{\dagger}\widetilde{\rho}\Big{\}}\,(g_{ab}e^{i\lambda_{ab}} )^{L} \tag{11}\]
This is a damping oscillating function. From the time series data \(\left\langle\widetilde{O}\right\rangle_{L}\) at different depth \(L\), we can extract the noisy eigenvalues via signal processing methods, such as matrix pencil method [57; 58; 59].
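As an illustration, the following is a minimal sketch of the matrix pencil extraction in Python. The pencil-parameter choice and the SVD-truncated signal subspace follow the standard MP recipe; the exact variant used in the paper's released code may differ:

```python
import numpy as np

def matrix_pencil(signal, n_modes):
    """Extract poles z_k = g_k exp(i lambda_k) from s_L = sum_k c_k z_k^L (Eq. (11))."""
    N = len(signal)
    Lp = N // 2  # pencil parameter
    # Hankel data matrix: row i holds (s_i, ..., s_{i+Lp})
    Y = np.array([signal[i:i + Lp + 1] for i in range(N - Lp)])
    # SVD truncation keeps only the n_modes dominant components, suppressing
    # sampling noise and tiny spurious modes from SPAM errors.
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:n_modes].T          # columns span the Vandermonde (signal) subspace
    V1, V2 = V[:-1], V[1:]      # shift invariance: V2 = V1 * diag(z_k) up to basis
    poles = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)
    return poles                # estimated noisy eigenvalues g_ab exp(i lambda_ab)
```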
By selecting an appropriate initial state \(\rho\) and measurement operator \(O\), we can control the number of eigenvalues presented in the resulting signals. The presence of too many different eigenvalues in the signals can pose some difficulties. These include: (1) the requirement for a large amount of data or equivalently a larger depth \(L\) (which is limited by the damping rate \(g_{ab}\)), and the difficulties to extract the eigenvalues from the limited measured signals, (2) the difficulties to identify the corresponding ideal eigenvalue for a given noisy counterpart, (3) the difficulties to maintain a uniform sample of the diagonal entries of \(\mathcal{E}\). To address these issues, we prepare the initial state and measurement operator as follows:
\[|\psi\rangle=c_{a}|\phi_{a}\rangle+c_{b}|\phi_{b}\rangle\quad\rho=O=|\psi \rangle\langle\psi| \tag{12}\]
which is a superposition of two eigenvectors only. For this type of initial state and measurement operator, there are only a few non-trivial damping-oscillating modes, i.e., modes with large coefficients \(\mathrm{tr}\Big{\{}\widetilde{O}M_{ab}\Big{\}}\mathrm{tr}\Big{\{}G_{ab}^{\dagger}\widetilde{\rho}\Big{\}}\), in the measured signals \(\left\langle\widetilde{O}\right\rangle_{L}\). These non-trivial modes come from the eigen-operators \(\{|\phi_{a}\rangle\langle\phi_{b}|,|\phi_{b}\rangle\langle\phi_{a}|,|\phi_{a}\rangle\langle\phi_{a}|,|\phi_{b}\rangle\langle\phi_{b}|\}\) appearing in the selected initial state and measurement operator.
Thus, as illustrated in Fig. 1, we propose the procedures of channel spectrum benchmarking below.
1. Uniformly at random sample \(K\) pairs of eigenstates \(\{|\phi_{a}\rangle,|\phi_{b}\rangle\}\) of target unitary operator \(U\).
2. For each pair of eigenstates, do step 3, i.e., running phase estimation circuits.
3. In the phase estimation circuits, first prepare the initial state \(|\psi\rangle=c_{a}|\phi_{a}\rangle+c_{b}|\phi_{b}\rangle\), then repeatedly apply the target gate \(U\) for \(L\) times, where \(L\) takes successive integers in \([0,L_{\mathrm{max}}]\), and finally measure the probability \(\left\langle O\right\rangle_{L}\) of obtaining \(O=|\psi\rangle\langle\psi|\). Then, process the measured data as follows: (a) estimate the noisy eigenvalues \(g_{ab}e^{i\lambda_{ab}}\) (amplitudes and phases) from the time series data \(\left\langle\widetilde{O}\right\rangle_{L}\) by the matrix pencil method; (b) identify the ideal counterparts of the measured noisy eigenvalues; (c) compute the diagonal entries of \(\mathcal{E}\) by Eq. (5).
4. Compute the process fidelity by Eq. (14) and stochastic fidelity by Eq. (15).
Step 1 ensures that the estimated diagonal entries are uniform samples. We require that the amplitudes of the two coefficients \(c_{a},c_{b}\) be comparable and that the initial state \(|\psi\rangle\) can be efficiently prepared. In the simulated experiments, we always choose \(c_{a}=c_{b}=\frac{1}{\sqrt{2}}\). The number of initial states \(K\) is independent of the system dimension \(d\) and depends only on the desired precision, as guaranteed by Hoeffding's inequality in Eq. (8). Therefore, our method is applicable to multi-qubit systems.
In the phase estimation circuits of step 3, we choose the length \(L\) from \([0,L_{\mathrm{max}}]\). The maximum length \(L_{\mathrm{max}}\) and the number of initial states \(K\) determine the total number of benchmarking circuits \(N_{c}=K(L_{\mathrm{max}}+1)\). In order to collect enough statistics, we need to run each circuit for \(N_{s}\) shots, and therefore, the total experimental cost is \(N_{c}N_{s}=K(L_{\mathrm{max}}+1)N_{s}\). The choice of \(L_{\mathrm{max}}\) and \(N_{s}\) also depends only on the desired precision and not on the system dimension. Previous work has shown that the uncertainty of estimated eigenvalues is inversely proportional to the length \(L\), the so-called Heisenberg scaling [40; 41]. Therefore, if higher precision is desired, it is generally better to increase \(L_{\mathrm{max}}\) rather than the
number of shots \(N_{s}\) per circuit, before the signals are completely degraded.
In step 3a, the noisy eigenvalues are estimated using the matrix pencil (MP) method [57, 58, 59]. The MP method is well-suited for our task because it involves a singular value decomposition (SVD) of the data Hankel matrix. This SVD procedure allows us to keep only the components with non-trivial singular values, i.e., the damping-oscillating modes caused by the noisy eigenvalues of the ideal eigen-operators present in the selected initial state. The MP method can reduce some sampling errors and eliminate unwanted eigenvalues (with small coefficients) due to SPAM errors or noisy eigen-operators with a degenerate ideal eigenvalue. In our simulated experiments, when using an initial state with unequal phases \(\lambda_{a},\lambda_{b}\), the number of obtained noisy eigenvalues is at most four.
In step 3b, our goal is to match the noisy eigenvalues obtained from the matrix pencil method to their corresponding ideal counterparts, such that we can compute the diagonal entries of \(\mathcal{E}\) by Eq. (5). For an initial state whose two component eigenstates \(|\phi_{a}\rangle,|\phi_{b}\rangle\) have equal eigenvalues, step 3b is not needed because all the ideal channel eigenvalues are 1. On the other hand, if an initial state consists of two eigenstates with unequal eigenvalues, there are three ideal channel eigenvalues \(\{e^{i(\lambda_{a}-\lambda_{b})},e^{-i(\lambda_{a}-\lambda_{b})},1\}\) for the estimated noisy eigenvalues to match with. To match the obtained noisy eigenvalues to the three ideal ones, we calculate the distance between the phases of the estimated noisy eigenvalues and the ideal eigen-phase \(\lambda_{a}-\lambda_{b}\) of the corresponding eigen-operator \(|\phi_{a}\rangle\langle\phi_{b}|\). The noisy eigenvalue with the smallest distance is chosen as the noisy counterpart of the ideal eigenvalue \(e^{i(\lambda_{a}-\lambda_{b})}\). Similarly, the noisy counterpart of \(e^{-i(\lambda_{a}-\lambda_{b})}\) is also determined. The remaining noisy eigenvalues are considered the counterparts of the ideal eigenvalue 1. This criterion assumes that the magnitude of the actual phase error \(\delta\lambda=\lambda_{ab}-(\lambda_{a}-\lambda_{b})\) is small; more precisely, we require
\[|\delta\lambda|\ll|\lambda_{a}-\lambda_{b}|. \tag{13}\]
If this criterion is not met, possibly due to a very large unitary error, we may mismatch the noisy eigenvalues with the ideal ones. This issue can be fixed by combining our method with the error mitigation technique for phase estimation in Ref. [47], where randomized compiling is introduced to reduce the phase error (the unitary error is transformed to stochastic error while the total noise strength is unchanged).
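A minimal sketch of this matching step (step 3b) in Python, assuming the phase-error criterion of Eq. (13) holds (the function name and return convention are our own):

```python
import numpy as np

def match_to_ideal(noisy_eigs, lam_a, lam_b):
    """Assign noisy eigenvalues to the ideal ones {e^{i(la-lb)}, e^{-i(la-lb)}, 1}."""
    remaining = list(noisy_eigs)
    matched = {}
    for sign in (+1, -1):
        ideal_phase = sign * (lam_a - lam_b)
        # circular distance between each noisy phase and the ideal eigen-phase
        dists = [abs(np.angle(z * np.exp(-1j * ideal_phase))) for z in remaining]
        k = int(np.argmin(dists))
        matched[sign] = remaining.pop(k)
    matched[0] = remaining  # leftovers are counterparts of ideal eigenvalue 1
    return matched          # keys: +1 -> e^{i(la-lb)}, -1 -> e^{-i(la-lb)}, 0 -> 1
```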
After calculating the diagonal entries using Eq. (5), we divide them into two categories based on the ideal eigenvalue of the associated basis \(|\phi_{a}\rangle\langle\phi_{b}|\): one is the trivial operator subspace (dimension \(d_{\rm us}\)) with \(\lambda_{a}=\lambda_{b}\) (or say the operator subspace spanned by the eigen-operators with eigenvalue 1), the other is the non-trivial operator
Figure 1: The procedures of channel spectrum benchmarking. The benchmarking circuits are composed of three parts: the first part \(U_{s}\) prepares the initial state \(|\psi\rangle=c_{a}|\phi_{a}\rangle+c_{b}|\phi_{b}\rangle\), which is a superposition of two eigenstates of target gate \(U\); then the target gate \(U\) is repeated \(L\) times, where \(L\) is an integer in \([0,L_{\rm max}]\); finally, the operator \(O=|\psi\rangle\langle\psi|\) is measured. The choice of coefficients \(c_{a},c_{b}\) in initial state is flexible as long as they are comparable and admit an efficient preparation of the initial state. Throughout this work, we choose \(c_{a}=c_{b}=\frac{1}{\sqrt{2}}\). For each initial state, we estimate several noisy eigenvalues from the time series data \(\langle O\rangle_{L}\) at different depth \(L\) using matrix pencil method.
subspace (dimension \(d_{\rm ns}\)) with \(\lambda_{a}\neq\lambda_{b}\). We should separately compute the average values of the diagonal entries in the two subspaces and then combine the two averages to get the estimator of the process fidelity. This is because, to get a uniform sample of the diagonal entries of \(\mathcal{E}\), we should assign the sampling probability \(\frac{d_{\rm ts}}{d^{2}}\) to the trivial subspace and probability \(\frac{d_{\rm ns}}{d^{2}}\) to the non-trivial subspace. However, in step 1, we assign the same probability \(\frac{1}{2}\) to each subspace. Since the dimension of the trivial subspace \(d_{\rm ts}\) is usually very different from the dimension of the non-trivial subspace \(d_{\rm ns}\), the probabilities of sampling an entry in the two subspaces are very different. For example, for a many-qubit gate \(U\) with a non-degenerate operator spectrum, the trivial subspace is spanned by all the eigen-operators of the form \(|\phi_{a}\rangle\langle\phi_{a}|\), whose dimension \(d_{\rm ts}=d\) is much smaller than \(d_{\rm ns}=d^{2}-d\). If there is some degeneracy in the spectrum of the operator \(U\), that is \(\lambda_{a}=\lambda_{b}\) for two different eigenstates \(|\phi_{a}\rangle,|\phi_{b}\rangle\), the trivial subspace can include eigen-operators of the form \(|\phi_{a}\rangle\langle\phi_{b}|\). The average value in each subspace can be used to estimate the sum of the diagonal entries in the corresponding subspace. Finally, the estimator of the process fidelity is obtained by combining these two averages, that is
\[\hat{F}=\frac{d_{\rm ts}\,\overline{\mathcal{E}_{ab,ab}}|_{\lambda_{a}= \lambda_{b}}+d_{\rm ns}\,\overline{\mathcal{E}_{ab,ab}}|_{\lambda_{a}\neq \lambda_{b}}}{d^{2}} \tag{14}\]
where \(\overline{\mathcal{E}_{ab,ab}}\) is the average value of sampled entries. Similarly, the estimator for stochastic fidelity is
\[\hat{F}_{\rm sto}=\sqrt{\frac{d_{\rm ts}\,\overline{g_{ab}^{2}}|_{\lambda_{a}=\lambda_{b}}+d_{\rm ns}\,\overline{g_{ab}^{2}}|_{\lambda_{a}\neq\lambda_{b}}}{d^{2}}}\,. \tag{15}\]
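As a sketch, Eqs. (14) and (15) can be combined as follows (assuming a non-degenerate operator spectrum, where \(d_{\rm ts}=d\) and \(d_{\rm ns}=d^{2}-d\); the function name is our own):

```python
import numpy as np

def combine_estimates(diag_ts, diag_ns, g_ts, g_ns, d):
    """diag_*: sampled diagonal entries of E; g_*: sampled amplitudes g_ab."""
    d_ts, d_ns = d, d**2 - d   # subspace dimensions for a non-degenerate spectrum
    F_hat = (d_ts * np.mean(diag_ts) + d_ns * np.mean(diag_ns)).real / d**2  # Eq. (14)
    F_sto = np.sqrt((d_ts * np.mean(np.abs(g_ts)**2)
                     + d_ns * np.mean(np.abs(g_ns)**2)) / d**2)              # Eq. (15)
    return F_hat, F_sto
```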
## IV Simulated experiments
In this section, we perform simulated experiments to show the performance of our CSB protocol, including single-qubit Pauli rotation gates, two-qubit fermionic-simulation (Fsim) gates, the three-qubit Toffoli gate, and an Ising Hamiltonian evolution operator with 10 qubits. Throughout this work, each benchmarking circuit is repeated \(N_{s}=10^{4}\) times to collect enough statistics. We will report infidelity (\(1-\text{fidelity}\)) instead of fidelity, because infidelity is more intuitive for interpreting the presented results.
### Single-qubit Pauli rotation gates
Here we measure the infidelity of single-qubit rotation gates, that is
\[R_{\sigma}(\theta)=e^{-i\frac{\theta}{2}\sigma} \tag{16}\]
where \(\theta\) is the rotational angle and \(\sigma\) is a Pauli matrix describing the direction of the rotational axis. This type of unitary operator has two eigenvalues, \(e^{-i\frac{\theta}{2}}\) and \(e^{i\frac{\theta}{2}}\). The dimension of the trivial eigen-operator subspace is 2, the same as the dimension of the non-trivial eigen-operator subspace. The operator (i.e., \(\frac{1}{2}(|\phi_{a}\rangle\langle\phi_{a}|+|\phi_{b}\rangle\langle\phi_{b}|)\)) associated with the trivial part of our initial state could happen to be very close to one of the noisy eigen-operators of \(\mathcal{U}\). This means that we may only obtain one noisy eigenvalue in this subspace, potentially leading to an inaccurate estimation of the process fidelity. To address this issue, we also prepare another initial state, namely one of the eigenstates of \(R_{\sigma}(\theta)\), in addition to the superposition state, and then run the phase estimation circuits again for this initial state. Therefore, we have \(K=2\) here. At the same circuit length, we sum the measured probabilities of the two types of circuits (with the two initial states), allowing us to extract all the noisy eigenvalues simultaneously.
Fig. 2 shows the results for benchmarking \(R_{Z}(\frac{\pi}{4})\) gate (also known as \(T\) gate). In this simulation, the noise model consists of a combination of stochastic errors (including \(T_{1}\) and \(T_{2}\) errors with equal probabilities \(\delta p\)) and over/under-rotation errors with angle \(\delta\theta\). In Fig. 2(a), we fix the unitary error (\(\delta\theta=-0.01\)) and vary the probability of stochastic error. In Fig. 2(b), we fix the stochastic error (\(\delta p=0.001\)) and vary the angle of unitary error. In
Figure 2: Benchmarking of \(T\) gate. In (a), we fix the unitary error (\(\delta\theta=-0.01\)) and vary the probability of stochastic error. In (b), we fix the stochastic error (\(\delta p=0.001\)) and vary the angle of unitary error. The actual process infidelity and stochastic infidelity is obtained by first computing the channel of noisy gate and then using Eq. (3) and Eq. (9). In both cases, we accurately estimate process infidelity, stochastic infidelity and the angle of unitary error. The accuracy of estimation can be further improved by increasing the circuit length or shots for each circuit.
both cases, we are able to accurately estimate the process and stochastic fidelity of the gate. As a byproduct, we can also estimate the angle of the unitary error by comparing the phases of noisy eigenvalues to their corresponding ideal values. This scheme for unitary error estimation is a more sensitive probe than infidelity measures, as shown in Fig. 2(b), where the process infidelity remains almost unchanged when \(\delta\theta\) is varied from \(10^{-3}\) to \(10^{-2}\).
In this simulation, we set \(L_{\max}=50\), except when the stochastic probability \(\delta p=10^{-3}\), where \(L_{\max}=100\). It is worth noting that the accuracy of the estimation can be further improved by increasing the length of the benchmarking circuits. However, increasing \(L_{\max}\) directly also increases the number of circuits used, which leads to higher costs. Instead, we can repeat the target gate \(U\) a certain number of times (\(N_{\text{rep}}\) times) to create a new target gate, \(U^{\prime}=U^{N_{\text{rep}}}\). Correspondingly, the noisy eigenvalue we estimate becomes \((g_{ab}e^{i\lambda_{ab}})^{N_{\text{rep}}}\). But recall that we need to identify the ideal eigenvalue from the phase difference; thus, as a result of Eq. (13), we require
\[N_{\text{rep}}|\delta\lambda|\ll|(N_{\text{rep}}\lambda_{a})\,\text{mod}\,2 \pi-(N_{\text{rep}}\lambda_{b})\,\text{mod}\,2\pi|\,. \tag{17}\]
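For instance, a candidate repetition number can be screened against Eq. (17) with a simple check; the function name and the safety margin below are our own choices:

```python
import numpy as np

def nrep_valid(n_rep, lam_a, lam_b, delta_lam_bound, margin=10.0):
    """Accept N_rep only if Eq. (17) holds with the given margin."""
    lhs = n_rep * abs(delta_lam_bound)
    rhs = abs((n_rep * lam_a) % (2 * np.pi) - (n_rep * lam_b) % (2 * np.pi))
    return margin * lhs < rhs
```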
### Two-qubit Fsim gates
Here, we benchmark the two-qubit fermionic-simulation (Fsim) gates [8], i.e.,
\[\text{Fsim}(\theta,\phi)=\begin{bmatrix}1&0&0&0\\ 0&\cos\theta&-i\sin\theta&0\\ 0&-i\sin\theta&\cos\theta&0\\ 0&0&0&e^{i\phi}\end{bmatrix} \tag{18}\]
where \(\theta\) is the iswap angle and \(\phi\) is the control phase angle. We omit some phase parameters that can be freely adjusted by \(Z\) rotations.
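For reference, the ideal Fsim unitary of Eq. (18) can be written directly in Python (the function name is our own):

```python
import numpy as np

def fsim(theta, phi):
    """Fsim(theta, phi) of Eq. (18) on the basis |00>, |01>, |10>, |11>."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,       0,       0],
                     [0, c,       -1j * s, 0],
                     [0, -1j * s, c,       0],
                     [0, 0,       0,       np.exp(1j * phi)]], dtype=complex)
```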
For the preparation of initial states, we consider all pairs of eigenstates (\(K=6\)). The choice of \(L_{\max}\) is 50 or 100 (for \(\delta p=10^{-3}\)). In this simulation, the noise model includes \(T_{1},T_{2}\) noise with equal probabilities \(\delta p\) for all single-qubit gates. For two-qubit gates, each qubit experiences the same errors as single-qubit gates, as well as an over-rotation unitary error with angle errors \(\delta\theta\) and \(\delta\phi\).
We benchmark a specific Fsim gate with \(\theta=\frac{\pi}{4},\phi=\frac{\pi}{2}\), as shown in Fig. 3. In Fig. 3(a), we fix the unitary error with \(\delta\theta=-0.01,\delta\phi=-0.02\) and vary the probability of stochastic error \(\delta p\). We accurately estimate all infidelities in this case. However, the estimation of the angles of the unitary errors becomes less accurate when the stochastic error is too strong, as the signal decays too quickly to accumulate enough information to estimate the angles. In Fig. 3(b), we fix the probability of stochastic error with \(\delta p=0.001\) and vary the angles of the unitary error with \(\delta\theta=0.5\delta\phi=10^{-3}\sim 10^{-1}\). Again, we accurately estimate all infidelities and angles of the unitary error.
### Three-qubit Toffoli gate
In this study, we evaluate the performance of the three-qubit Toffoli gate, which is not a native gate but rather a circuit fragment composed of 1-qubit and 2-qubit gates as shown in Fig. 4(c). We randomly select \(K=10\) pairs of eigenstates as the initial state and set \(L_{\max}=50\). In the simulated noise model, all single-qubit gates are subject to \(T_{1},T_{2}\) noise with equal probability \(\delta p\). For the two-qubit gates, each qubit experiences the same type of stochastic error as the single-qubit gates, followed by a unitary error of the Fsim type with error angles \(\delta\theta=\delta\phi\).
The Toffoli operator has a highly degenerate spectrum, which creates two challenges for our method. First, when sampling noisy eigen-operators, we need them to be uniformly distributed, but for degenerate ideal eigenvalues, the corresponding noisy eigen-operators are superpositions of ideal ones in the degenerate subspace, which are determined by the details of the noise, see Appendix A. This makes it difficult to generate a uniform sample of noisy eigen-operators. Second, the degenerate eigenvalue may be split by noise into many eigenvalues in the signal, making it harder to extract the noisy eigenvalues and each eigenvalue may only occupy a small portion of the
signal, making them more susceptible to errors. The impact of the highly degenerate spectrum on the estimate of gate noise is demonstrated by the simulated results in Fig. 4(a),(b).
Usually, some of the degeneracy can be removed by appending a layer of single-qubit gates to the target gate or circuit fragment. For the Toffoli circuit, we append \(R_{Z}(\frac{\pi}{2})\otimes R_{Z}(\frac{2\pi}{3})\otimes R_{X}(\frac{4\pi}{5})\) to the Toffoli circuit and combine this layer with the last layer of the Toffoli circuit. The choice of the appended layer should keep the state preparation for the new target gate efficient. In the current example, our choice does not change the eigenstates. For the angle parameters in the appended gates, one can design an optimization algorithm to choose the parameters that maximize the distance between eigenvalues. The appended layer of gates results in a varied circuit with a structure similar to the original Toffoli circuit (only the last layer is changed), and the two should possess similar noise properties. In the case of strong stochastic error and weak unitary error (\(\delta\theta=0.01\)) in Fig. 4(a), benchmarking the varied circuit provides a very accurate estimate of the process infidelity and the stochastic infidelity of the original Toffoli circuit.
However, there is a significant difference between the estimated and actual process infidelity when the unitary error is very strong, as shown in Fig. 4(b) (with fixed stochastic error \(\delta p=0.001\)). In the Appendix B, we show that our method may under-estimate the process infidelity in the presence of certain strong unitary errors.
One way to address this issue is to introduce random gates into the benchmarking circuits to convert the unitary errors to stochastic errors [37, 38, 60]. Appendix C describes a procedure for transforming the noise in native gates to stochastic errors using random gates from the symmetry group of the target \(U\). For benchmarking circuit fragments, we use a technique called randomized compiling [37, 38] to achieve this. Randomized compiling (RC) is a method that transforms the noise in the circuit into stochastic Pauli errors while maintaining the circuit structure and depth. After RC, the noise type of a circuit cycle is changed, but the fidelity of the cycle and the circuit structure remain unchanged. As long as there is no repeated structure in \(U\) where unitary error can coherently build up and increase the infidelity quadratically with the circuit depth [61] (a case where RC should be introduced to suppress the unitary noise), we expect the fidelity of the circuit \(U\) to remain unchanged after RC. For each original circuit, we generate \(N_{r}=10\) random circuits by RC, and each random circuit is run \(10^{3}\) times to keep the cost unchanged. As shown in Fig. 4(b), after RC the varied circuit accurately estimates the process infidelity of the Toffoli circuit under unitary noise.
### Ten-qubit Ising evolution operator
Our method is practically scalable if the following two requirements are met:
1. The eigenvalues and eigenvectors of target unitary operator \(U\) can be efficiently computed.
2. The initial state can be efficiently prepared, i.e., the number of 1-qubit and 2-qubit gates needed for the preparation should at most scale polynomial with the number of qubits.
In general, these two requirements are not always satisfied. However, for certain types of unitary operators, such as the evolution operator of an Ising Hamiltonian, these requirements can be met. For an Ising Hamiltonian, the eigenvectors are known and are simply the computational basis states. Given an eigenstate, the eigenvalue can be efficiently computed.
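For example, the eigenphase of a computational basis state under the Ising evolution operator can be computed in \(O(N)\) time; a minimal sketch with hypothetical array arguments (bit string `bits`, fields `h`, couplings `J` on a 1D ring):

```python
import numpy as np

def ising_eigenphase(bits, h, J):
    """Phase lambda_x with U|x> = e^{i lambda_x}|x> for U = exp(-iH),
    H = sum_i h_i Z_i + J_{i,i+1} Z_i Z_{i+1} on a 1D ring; Z|0> = +|0>."""
    z = 1 - 2 * np.asarray(bits)                      # bit 0 -> +1, bit 1 -> -1
    energy = np.dot(h, z) + np.dot(J, z * np.roll(z, -1))
    return -energy                                     # lambda_x = -E_x (for t = 1)
```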
Figure 4: Bechmarking of Toffoli circuit fragment. We fix the unitary error (\(\delta\theta=0.01\)) and vary stochastic error in (a), and fix stochastic error (\(\delta p=0.001\)) and vary unitary error in (b). The circuit implementing Toffoli gate is presented in (c). Due to the highly degenerate spectrum of the Toffoli gate, the estimate of the infidelity is unreliable. However, the degeneracy can be removed by changing the last layer of single-qubit gates. With the varied circuit, we accurately estimate the infidelity of the Toffoli circuit under weak unitary error in (a). For strong unitary error, we perform randomized compiling to the benchmarking circuits, converting the unitary error into stochastic error. As a result, the varied circuit also accurately estimates the process infidelity of Toffoli circuit under strong unitary error, as shown in (b).
The initial state as a superposition of two computational basis states \(|x\rangle=|x_{0},\cdots,x_{i},\cdots,x_{N-1}\rangle\), \(|y\rangle=|y_{0},\cdots,y_{i},\cdots,y_{N-1}\rangle\) can be prepared as follows: first, for each qubit \(i\) with \(x_{i}=y_{i}\), apply an \(X\) gate if \(x_{i}=y_{i}=1\); then, for the remaining qubits with \(x_{i}\neq y_{i}\), if there is only one such qubit, a Hadamard gate \(H\) can be applied; if there is more than one qubit with \(x_{i}\neq y_{i}\), one can first prepare a GHZ state on these qubits and then apply some \(X\) gates to obtain the target state. Therefore, the preparation of such states costs at most \(N\) 1-qubit and \(N\) 2-qubit gates. Additionally, for the evolution operator of a Hamiltonian that can be obtained by performing a local unitary transformation on an Ising Hamiltonian, i.e., \(H=\bigotimes_{i}U_{i}H_{\text{Ising}}\bigotimes_{i}U_{i}^{\dagger}\), the initial states can be obtained in a similar way with two additional layers of single-qubit gates \(\bigotimes U_{i}\), \(\bigotimes U_{i}^{\dagger}\). Thus, this type of evolution operator is a good example for benchmarking many-qubit quantum systems.
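A minimal sketch of this preparation recipe, returning an abstract gate list (the gate labels are our own notation):

```python
def prep_gates(x, y):
    """Gates preparing (|x> + |y>)/sqrt(2) from |0...0>, following the recipe above."""
    diff = [i for i, (xi, yi) in enumerate(zip(x, y)) if xi != yi]
    gates = [("X", i) for i, (xi, yi) in enumerate(zip(x, y)) if xi == yi == 1]
    if diff:
        pivot = diff[0]
        gates.append(("H", pivot))                       # Hadamard on one qubit
        gates += [("CNOT", pivot, i) for i in diff[1:]]  # GHZ on the differing qubits
        # X gates map the GHZ branches |0...0>, |1...1> to x and y on `diff`
        gates += [("X", i) for i in diff if x[i] == 1]
    return gates  # at most N one-qubit and N two-qubit gates
```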
Here we benchmark the evolution operator of a 1-dimensional Ising ring \(H=\sum_{i=1}^{10}h_{i}Z_{i}+J_{i,i+1}Z_{i}Z_{i+1}\), where \(h_{i},J_{i,i+1}\) are randomly chosen. The circuit is shown as in Fig. 5(c). We sample \(K=10\) pairs of eigenstates and set \(L_{\text{max}}=50\). The noise model is the same as the case in Sec. IV.3. The actual infidelity is inferred from the infidelity of single-qubit and two-qubit gates, because our computer is not powerful enough to compute the quantum channel of a 10-qubit circuit. Our method accurately estimates process infidelity under both weak and strong unitary error (with RC), as shown in Fig. 5(a),(b).
## V Conclusion and outlook
In this work, we introduced a procedure called channel spectrum benchmarking, which infers the noise properties of a quantum gate from the eigenvalues of the noisy channel representing the gate. In the protocol, we first construct initial states as superpositions of randomly sampled pairs of eigenstates of the target gate. Then, we use control-free phase estimation circuits to estimate the noisy eigenvalues in a SPAM-error-resistant manner. This choice of initial state simplifies the data processing because the measured signals contain only a few eigenvalues, which can be extracted using signal processing methods such as the matrix pencil method. By comparing the noisy eigenvalues to their ideal counterparts, we can estimate noise properties such as the process fidelity, stochastic fidelity, and some unitary parameters of the target gates. Our method can be applied to any quantum gate, but performs better on gates with a non-degenerate operator spectrum. For gates with a highly degenerate spectrum, we can append a layer of single-qubit gates to remove the degeneracy while maintaining a similar circuit structure. Some types of unitary error can also affect the performance, which can be addressed using randomization techniques like randomized compiling. Our method is scalable to many-qubit systems as long as the eigen-decomposition can be computed and the initial states can be efficiently prepared, as for the evolution operator of an Ising-type Hamiltonian.
The requirements for the scalability of our method could be relaxed. In principle, we do not need to obtain the complete set of eigenmodes of the target gate operator; a few samples of eigenvalues and eigenstates are sufficient. For initial state preparation, there are existing methods for preparing arbitrary states [62, 63, 64, 65], but it would be interesting to develop a more efficient algorithm for preparing the particular type of initial states used in our method. A variational algorithm [66] may be able to efficiently prepare these states for most target gates, because we have the freedom to choose the coefficients of the superposition states and do not need perfect preparation. Our method can be scaled up in a way similar to simultaneous randomized benchmarking [67, 68], where some few-qubit gates are simultaneously benchmarked on different subsets of a many-qubit system such that the effect of crosstalk [69] can be detected.
Figure 5: Benchmarking of a 10-qubit Ising evolution operator. We fix unitary error (\(\delta\theta=0.01\)) and vary stochastic error in (a), and fix stochastic error (\(\delta p=0.001\)) and vary unitary error in (b). The circuit implementing Ising evolution operator is presented in (c). The actual fidelity is not computed from the channel of the circuit, but rather inferred from the product of the fidelity of all single-qubit and two-qubit gates. However, the actual stochastic infidelity can not be reliably inferred by this procedure. We accurately estimate infidelity of the Ising evolution operator under weak unitary error (a) and strong unitary error with RC (b).
One immediate use of benchmarking is to calibrate quantum gates using the measured figures of merit as a cost function [8; 9; 10]. Our method can provide more specific information (process infidelity, stochastic infidelity and some unitary parameters of the target gate) about the calibrating gate, so it is expected to perform better on this task than other benchmarking methods. A detailed comparison of different benchmarks for calibration will be a topic for future research. Additionally, our method can be used to calibrate universal gates, including not only 1 or 2-qubit native gates, but also many-qubit native gates such as MS gates [70; 71] used in ion trap systems. It may also be interesting to use our method to calibrate certain circuit fragments that are commonly used in algorithms, such as the trotterized Hamiltonian evolution operator in quantum simulation and the Grover iteration operator in Grover's search algorithm.
###### Acknowledgements.
The work is supported by the National Natural Science Foundation of China (Grant No. 12147123 and 11974198) and the Beijing Natural Science Foundation (No. Z220002). Source code for the simulated experiments is available at [https://github.com/yanvu-gu/channel-spectrum-benchmarking](https://github.com/yanvu-gu/channel-spectrum-benchmarking).
## Appendix A The relationship between noisy and ideal eigenvalues of quantum channels
In this section, we derive the relationship between the noisy channel eigenvalues of a gate and its corresponding ideal counterparts with the first order perturbation theory.
Consider a gate \(U\) acting on a \(d\)-dimensional space with eigen-decomposition \(U|\phi_{a}\rangle=e^{i\lambda_{a}}|\phi_{a}\rangle\). As the actual implementation of a gate \(U\) is inevitably associated with some noise, it is more convenient to use quantum channels rather than quantum operators. Quantum channels are completely-positive trace-preserving (CPTP) maps, which transform one operator to another. The action of a quantum channel \(\mathcal{E}\) on an arbitrary operator \(O\) can be characterized by a set of Kraus operators \(E_{k}\), i.e., \(\mathcal{E}(O)=\sum_{k}E_{k}OE_{k}^{\dagger}\). We denote the corresponding unitary channel of the unitary operator \(U\) as \(\mathcal{U}\), whose action on an operator \(O\) is \(\mathcal{U}(O)=UOU^{\dagger}\). Thus, the unitary channel \(\mathcal{U}\) has the eigen-decomposition
\[\mathcal{U}(|\phi_{a}\rangle\langle\phi_{b}|)=U|\phi_{a}\rangle \langle\phi_{b}|U^{\dagger}=e^{i(\lambda_{a}-\lambda_{b})}|\phi_{a}\rangle \langle\phi_{b}|. \tag{10}\]
Quantum channels are linear maps that can be represented as matrices under a set of the basis operators of the operator space, such as eigen-operators \(|\phi_{a}\rangle\langle\phi_{b}|\). Meanwhile, operators are represented as vectors. The associated inner product between two operators \(A\) and \(B\) is the Hilbert-Schmidt inner product \(\mathrm{tr}\big{\{}A^{\dagger}B\big{\}}\). Therefore, in this representation \(\mathcal{U}\) is a unitary matrix.
Let us append a noise channel \(\mathcal{E}\) to \(\mathcal{U}\), with the noisy version of \(\mathcal{U}\) denoted as \(\widetilde{\mathcal{U}}=\mathcal{E}\mathcal{U}\). We investigate the relationship between the eigenvalues of \(\widetilde{\mathcal{U}}\) and those of \(\mathcal{U}\). If the noise is relatively weak, the problem is an eigenvalue perturbation of a unitary matrix. Given the close relationship between unitary and Hermitian matrices, one can use Hermitian matrix perturbation theory to obtain the corrections to the eigenvalues and eigenstates, assuming a diagonalizable noisy gate \(\widetilde{\mathcal{U}}\). In most cases, this assumption should be met in actual devices, since diagonalizable matrices are dense in the space of all matrices, meaning that any non-diagonalizable matrix can be deformed into a diagonalizable one by a small perturbation. In the following, we apply Hermitian perturbation theory to obtain the first order correction of the eigenvalues, and obtain the relationship between the noisy eigenvalues and the ideal ones.
Define the eigenvalues and eigen-operators of \(\widetilde{\mathcal{U}}\) as \(g_{ab}e^{i\lambda_{ab}}\) and \(M_{ab}\), that is
\[\widetilde{\mathcal{U}}(M_{ab})=g_{ab}e^{i\lambda_{ab}}M_{ab}\,. \tag{11}\]
The perturbation matrix is
\[\Delta=\widetilde{\mathcal{U}}-\mathcal{U}=(\mathcal{E}-\mathcal{I})\, \mathcal{U}\,. \tag{12}\]
We assume the perturbation is small in terms of some matrix norm, such as the diamond norm \(\|\Delta\|_{\circ}=\delta\)[49]. Then, for a non-degenerate eigenvalue \(e^{i(\lambda_{a}-\lambda_{b})}\) with eigen-operator \(|\phi_{a}\rangle\langle\phi_{b}|\), the first order correction is
\[\epsilon_{1}^{ab} = \mathrm{tr}\big{\{}(|\phi_{a}\rangle\langle\phi_{b}|)^{\dagger} \Delta(|\phi_{a}\rangle\langle\phi_{b}|)\big{\}} \tag{13}\] \[= \mathrm{tr}\big{\{}(|\phi_{a}\rangle\langle\phi_{b}|)^{\dagger} (\mathcal{E}-\mathcal{I})\,\mathcal{U}(|\phi_{a}\rangle\langle\phi_{b}|)\big{\}}\] \[= e^{i(\lambda_{a}-\lambda_{b})}\left[\mathrm{tr}\big{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}\mathcal{E}(|\phi_{a}\rangle\langle\phi_{b}| )\big{\}}-1\right].\]
Thus, the noisy eigenvalue is approximated as
\[g_{ab}e^{i\lambda_{ab}} \approx e^{i(\lambda_{a}-\lambda_{b})}+\epsilon_{1}^{ab} \tag{14}\] \[= e^{i(\lambda_{a}-\lambda_{b})}\mathrm{tr}\big{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}\mathcal{E}(|\phi_{a}\rangle\langle\phi_{b}| )\big{\}}.\]
In this case, the noisy eigen-operator \(M_{ab}\) is approximated by
\[M_{ab}\approx|\phi_{a}\rangle\langle\phi_{b}|+O(\delta) \tag{15}\]
where \(O(\delta)\) are some correction terms with the first order of \(\delta\).
For degenerate eigenvalues \(e^{i\lambda_{n}}\) with eigen-operators \(|\phi_{a}\rangle\langle\phi_{b}|\) satisfying \(\lambda_{a}-\lambda_{b}=\lambda_{n}\), these eigen-operators span a subspace. The \(ab,a^{\prime}b^{\prime}\)-entry of the perturbation matrix projected onto this subspace is
\[\Delta^{(n)}_{ab,a^{\prime}b^{\prime}} = \mathrm{tr}\big{\{}(|\phi_{a}\rangle\langle\phi_{b}|)^{\dagger}( \mathcal{E}-\mathcal{I})\,\mathcal{U}(|\phi_{a^{\prime}}\rangle\langle\phi_{b^ {\prime}}|)\big{\}} \tag{16}\] \[= e^{i\lambda_{n}}\left[\mathrm{tr}\big{\{}(|\phi_{a}\rangle \langle\phi_{b}|)^{\dagger}\mathcal{E}(|\phi_{a^{\prime}}\rangle\langle\phi_{b^ {\prime}}|)\big{\}}-\delta_{aa^{\prime}}\delta_{bb^{\prime}}\right]\] \[= e^{i\lambda_{n}}(\mathcal{E}^{(n)}_{ab,a^{\prime}b^{\prime}}- \mathcal{I}^{(n)}_{ab,a^{\prime}b^{\prime}})\]
where \(\mathcal{E}^{(n)}_{ab,a^{\prime}b^{\prime}}\) and \(\mathcal{I}^{(n)}_{ab,a^{\prime}b^{\prime}}\) are the entries of the pure noise map \(\mathcal{E}\) and the identity map \(\mathcal{I}\) projected onto this degenerate subspace. In the degenerate case, the first order corrections to the eigenvalue \(e^{i\lambda_{n}}\) of \(\mathcal{U}\) are the eigenvalues of the perturbation matrix \(\Delta^{(n)}\). It is easy to see that the matrix \(\Delta^{(n)}\) and the matrix \(\mathcal{E}^{(n)}\) have the same eigen-operators \(M^{0}_{pq}\), which are superpositions of the eigen-operators \(|\phi_{a}\rangle\langle\phi_{b}|\) in this degenerate subspace. They are also the corresponding unperturbed eigen-operators of the noisy eigen-operators \(M_{pq}\) of \(\widetilde{\mathcal{U}}\), that is
\[M_{pq}\approx M^{0}_{pq}+O(\delta)\,. \tag{10}\]
In this case, the eigenvalue \(g_{pq}e^{i\lambda_{pq}}\) of \(\widetilde{\mathcal{U}}\) is
\[g_{pq}e^{i\lambda_{pq}} \approx e^{i\lambda_{n}}+\text{tr}\Big{\{}G^{0\dagger}_{pq}\Delta^{(n )}(M^{0}_{pq})\Big{\}} \tag{11}\] \[=e^{i\lambda_{n}}+e^{i\lambda_{n}}\left[\text{tr}\Big{\{}G^{0 \dagger}_{pq}\mathcal{E}^{(n)}(M^{0}_{pq})\Big{\}}-1\right]\] \[=e^{i\lambda_{n}}\text{tr}\Big{\{}G^{0\dagger}_{pq}\mathcal{E}^{ (n)}(M^{0}_{pq})\Big{\}}\]
where \(G^{0}_{pq}\) is the corresponding left eigen-operator of \(M^{0}_{pq}\) and they satisfy \(\text{tr}\big{\{}G^{0\dagger}_{pq}M^{0}_{p^{\prime}q^{\prime}}\big{\}}= \delta_{pq,p^{\prime}q^{\prime}}\). Therefore, the Eq. (11) has the same form as Eq. (10) but with a basis of a different form.
## Appendix B Perturbation of channel eigenvalues under pure unitary error
Here we consider the noisy eigenvalues of a quantum gate under a pure unitary error
\[V=e^{-iH_{e}\delta} \tag{12}\]
where \(H_{e}\) is the Hamiltonian of error and \(\delta\) characterize the error strength. Assume the target gate \(U=e^{-iH\theta}\). Thus the operator of noisy gate is \(\widetilde{U}=VU\). In this case, the process fidelity is
\[F =\frac{|\text{tr}\{V\}|^{2}}{d^{2}}\] \[\approx\frac{|\text{tr}\big{\{}I-iH_{e}\delta-\frac{1}{2}H_{e}^{2 }\delta^{2}\big{\}}|^{2}}{d^{2}}\] \[=\frac{d^{2}-d\,\text{tr}\big{\{}H_{e}^{2}\big{\}}\delta^{2}}{d^{2}}\] \[=1-\frac{\text{tr}\big{\{}H_{e}^{2}\big{\}}}{d}\delta^{2} \tag{13}\]
where we use the property \(\text{tr}\{H_{e}\}=0\) and keep terms up to \(O(\delta^{2})\). This is the well-known result that a unitary error with matrix norm \(\delta\) has a process infidelity of order \(O(\delta^{2})\)[49].
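This \(O(\delta^{2})\) behavior is easy to verify numerically; a minimal sketch (random traceless error Hamiltonian, pure numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 4, 1e-2
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H_e = (A + A.conj().T) / 2
H_e -= (np.trace(H_e) / d).real * np.eye(d)   # make the error Hamiltonian traceless
w, P = np.linalg.eigh(H_e)
V = P @ np.diag(np.exp(-1j * w * delta)) @ P.conj().T   # V = exp(-i H_e delta)
F_exact = abs(np.trace(V))**2 / d**2                    # first line of Eq. (B2)
F_approx = 1 - np.trace(H_e @ H_e).real / d * delta**2  # last line of Eq. (B2)
print(F_exact, F_approx)   # the two agree up to O(delta^4)
```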
We then analyze how eigenvalues of a quantum channel \(\mathcal{U}\) change under such unitary error with perturbation theory. Now the effect of the noisy gate \(\widetilde{\mathcal{U}}\) on an operator \(O\) is \(\widetilde{\mathcal{U}}(O)=VUOU^{\dagger}V^{\dagger}\). Represent all quantum channels as matrices, the perturbation matrix is
\[\Delta=\widetilde{\mathcal{U}}-\mathcal{U}\,. \tag{14}\]
For a non-degenerate eigen-operator \(|\phi_{a}\rangle\langle\phi_{b}|\) with eigenvalue \(e^{i(\lambda_{a}-\lambda_{b})}\), the first order correction is
\[\epsilon^{ab}_{1} =\text{tr}\big{\{}(|\phi_{a}\rangle\langle\phi_{b}|)^{\dagger} \Delta(|\phi_{a}\rangle\langle\phi_{b}|)\big{\}}\] \[=e^{i(\lambda_{a}-\lambda_{b})}\left[\text{tr}\big{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}V|\phi_{a}\rangle\langle\phi_{b}|V^{\dagger }\big{\}}-1\right]. \tag{15}\]
To further expand this equation, we will use the Baker-Hausdorff (BH) lemma
\[e^{X}Ye^{-X}=e^{\text{ad}_{X}}(Y)=\sum_{n=0}^{\infty}\frac{\text{ad}_{X}^{n}(Y) }{n!} \tag{16}\]
where ad is a map on operators with the effect \(\text{ad}_{X}(Y)=[X,Y]\). Then the first order correction is
\[\epsilon^{ab}_{1} =e^{i(\lambda_{a}-\lambda_{b})}\left[\text{tr}\big{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}e^{-i\delta\text{ad}_{H_{e}}}(|\phi_{a} \rangle\langle\phi_{b}|)\big{\}}-1\right]\] \[\approx e^{i(\lambda_{a}-\lambda_{b})}\left[\text{tr}\bigg{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}\left(\mathcal{I}-i\delta\text{ad}_{H_{e}}- \frac{1}{2}\delta^{2}\text{ad}_{H_{e}}^{2}\right)(|\phi_{a}\rangle\langle\phi_{ b}|)\bigg{\}}-1\right]\] \[=e^{i(\lambda_{a}-\lambda_{b})}\left[\underbrace{-i\delta\left( \langle\phi_{a}|H_{e}|\phi_{a}\rangle-\langle\phi_{b}|H_{e}|\phi_{b}\rangle \right)}_{\epsilon^{ab}_{1,1}}\underbrace{-\frac{1}{2}\delta^{2}\text{tr} \big{\{}(|\phi_{a}\rangle\langle\phi_{b}|)^{\dagger}\text{ad}_{H_{e}}^{2}(|\phi _{a}\rangle\langle\phi_{b}|)\big{\}}}_{\epsilon^{ab}_{1,2}}\right]\,. \tag{17}\]
If we consider only the first-order correction, the noisy eigenvalue is
\[g_{ab}e^{i\lambda_{ab}}\approx e^{i(\lambda_{a}-\lambda_{b})}+\epsilon^{ab}_{1} =e^{i(\lambda_{a}-\lambda_{b})}(1+\epsilon^{ab}_{1,1}+\epsilon^{ab}_{1,2}). \tag{18}\]
From Eq. (3) and Eq. (5), we get the estimate of process fidelity
\[\hat{F}=1+\frac{1}{d^{2}}\sum_{ab}\epsilon^{ab}_{1,1}+\frac{1}{d^{2}}\sum_{ab} \epsilon^{ab}_{1,2} \tag{19}\]
where the term with order \(O(\delta)\) is
\[\frac{1}{d^{2}}\sum_{ab}\epsilon_{1,1}^{ab}\] \[= \frac{1}{d^{2}}\sum_{ab}-i\delta\left(\langle\phi_{a}|H_{e}|\phi_{a }\rangle-\langle\phi_{b}|H_{e}|\phi_{b}\rangle\right)\] \[= 0 \tag{101}\]
and the term with order \(O(\delta^{2})\) is
\[\frac{1}{d^{2}}\sum_{ab}\epsilon_{1,2}^{ab}\] \[= -\frac{\delta^{2}}{2d^{2}}\sum_{ab}\text{tr}\big{\{}(|\phi_{a} \rangle\langle\phi_{b}|)^{\dagger}[H_{e},[H_{e},|\phi_{a}\rangle\langle\phi_{b }|]]\big{\}}\] \[= -\frac{\delta^{2}}{2d^{2}}\sum_{ab}\langle\phi_{a}|H_{e}^{2}|\phi_ {a}\rangle-2\langle\phi_{a}|H_{e}|\phi_{a}\rangle\langle\phi_{b}|H_{e}|\phi_{b }\rangle+\langle\phi_{b}|H_{e}^{2}|\phi_{b}\rangle\] \[= -\frac{\delta^{2}}{2d^{2}}(2d\text{tr}\big{\{}H_{e}^{2}\big{\}}-2 \text{tr}\big{\{}H_{e}\big{\}}^{2})\] \[= -\frac{1}{d}\text{tr}\big{\{}H_{e}^{2}\big{\}}\delta^{2}\,. \tag{102}\]
This coincides with the expression in Eq. (100).
However, because the first order correction contributes only a term of order \(O(\delta^{2})\), we must also take into account the second order correction to the eigenvalues. The second order correction is
\[\epsilon_{2}^{ab} = \sum_{mn\neq ab}\frac{|\text{tr}\big{\{}(|\phi_{m}\rangle\langle \phi_{n}|)^{\dagger}\Delta(|\phi_{a}\rangle\langle\phi_{b}|)\big{\}}|^{2}}{e^{i (\lambda_{a}-\lambda_{b})}-e^{i(\lambda_{m}-\lambda_{n})}} \tag{103}\] \[= \sum_{mn\neq ab}\frac{|\text{tr}\big{\{}(|\phi_{m}\rangle\langle \phi_{n}|)^{\dagger}V|\phi_{a}\rangle\langle\phi_{b}|V^{\dagger}\big{\}}|^{2}}{e ^{i(\lambda_{a}-\lambda_{b})}-e^{i(\lambda_{m}-\lambda_{n})}}\] \[= \sum_{mn\neq ab}\frac{|\langle\phi_{m}|V|\phi_{a}\rangle|^{2}| \langle\phi_{n}|V|\phi_{b}\rangle|^{2}}{e^{i(\lambda_{a}-\lambda_{b})}-e^{i( \lambda_{m}-\lambda_{n})}}\] \[\approx \sum_{mn\neq ab}\frac{\left(\delta_{am}+(\langle\phi_{m}|H_{e}| \phi_{a}\rangle\langle\phi_{a}|H_{e}|\phi_{m}\rangle-\langle\phi_{m}|H_{e}^{2 }|\phi_{a}\rangle\delta_{am})\delta^{2}\right)\left(\delta_{bn}+(\langle\phi_{n }|H_{e}|\phi_{b}\rangle\langle\phi_{b}|H_{e}|\phi_{n}\rangle-\langle\phi_{n}|H_ {e}^{2}|\phi_{b}\rangle\delta_{bn})\delta^{2}\right)}{e^{i(\lambda_{a}-\lambda_ {b})}-e^{i(\lambda_{m}-\lambda_{n})}}\] \[= \sum_{m\neq a}\frac{\langle\phi_{m}|H_{e}|\phi_{a}\rangle\langle \phi_{a}|H_{e}|\phi_{m}\rangle\delta^{2}}{e^{i(\lambda_{a}-\lambda_{b})}-e^{i( \lambda_{m}-\lambda_{b})}}+\sum_{n\neq b}\frac{\langle\phi_{n}|H_{e}|\phi_{b} \rangle\langle\phi_{b}|H_{e}|\phi_{n}\rangle\delta^{2}}{e^{i(\lambda_{a}- \lambda_{b})}-e^{i(\lambda_{a}-\lambda_{n})}}\,. \tag{104}\]
If the error Hamiltonian \(H_{e}\) is diagonal in the basis of eigenvectors of \(U\), the second order correction is \(\epsilon_{2}^{ab}=0\) up to the second order \(O(\delta^{2})\), and our method encounters no problem.
However, apart from this special case, there is some discrepancy between the process fidelity estimated by our method and its actual value, due to the presence of the term \(\epsilon_{2}^{ab}\) in the noisy eigenvalue \(g_{ab}e^{i\lambda_{ab}}\).
Here, we can directly compute the noisy eigenvalues of the channel \(\widetilde{\mathcal{U}}\) from the eigenvalues of the operator \(\widetilde{U}\) and give the analytical form of estimated process fidelity by our method. We first compute the Hamiltonian of \(\widetilde{U}\) by Baker-Campbell-Hausdorff formula
\[H^{\prime}=\log{(VU)} = \log{(e^{-iH_{e}\delta}e^{-iH\theta})} \tag{105}\] \[\approx -iH\theta-iH_{e}\delta-\frac{1}{2}\text{ad}_{-iH\theta}(-iH_{e} \delta)+\frac{1}{12}\text{ad}_{-iH\theta}^{2}(-iH_{e}\delta)-\frac{1}{720} \text{ad}_{-iH\theta}^{4}(-iH_{e}\delta)+\cdots\]
where we keep only the terms up to order \(O(\delta)\) and omit some higher-order terms involving the ad map. We can compute the eigenvalues of \(H^{\prime}\) relative to those of \(-iH\theta\) using first-order perturbation theory. The first order correction to the eigenvalue \(i\lambda_{a}\) with eigenstate \(|\phi_{a}\rangle\) is
\[\epsilon_{1}^{a} = \langle\phi_{a}|H^{\prime}-(-iH\theta)|\phi_{a}\rangle \tag{14}\] \[= -i\delta\langle\phi_{a}|H_{e}|\phi_{a}\rangle\]
where the terms involving the ad map are all zero because
\[\langle\phi_{a}|\mathrm{ad}_{-iH\theta}(O)|\phi_{a}\rangle\] \[= -i\theta\langle\phi_{a}|(HO-OH)|\phi_{a}\rangle\] \[= i\lambda_{a}(\langle\phi_{a}|O|\phi_{a}\rangle-\langle\phi_{a}| O|\phi_{a}\rangle)=0 \tag{15}\]
where \(O\) is an any operator. Then the noisy eigenvalue of \(|\phi_{a}\rangle\langle\phi_{b}|\) is
\[g_{ab}e^{i\lambda_{ab}}\approx e^{i\lambda_{a}+\epsilon_{1}^{a}-i\lambda_{b}- \epsilon_{1}^{b}}\,. \tag{16}\]
Thus our estimator for process fidelity is
\[\hat{F} = \frac{1}{d^{2}}\sum_{ab}e^{\epsilon_{1}^{a}-\epsilon_{1}^{b}} \tag{17}\] \[= 1-\frac{\sum_{a}\langle\phi_{a}|H_{e}|\phi_{a}\rangle^{2}}{d} \delta^{2}\,.\]
Because the term \(\sum_{a}\langle\phi_{a}|H_{e}|\phi_{a}\rangle^{2}\) is always smaller than the term \(\mathrm{tr}\!\left\{H_{e}^{2}\right\}\), except when \(H_{e}\) is diagonal in the basis \(\{|\phi_{a}\rangle\}\), our method in general underestimates the process infidelity under unitary error. This problem can be fixed by introducing a randomization procedure into the benchmarking circuits to convert unitary errors to stochastic errors [37, 38, 60].
## Appendix C Randomized compiling with the symmetry group of the target gate
For a circuit composed of single-qubit and two-qubit gates, randomized compiling (RC) is a standard procedure to tailor the noise into stochastic Pauli noise with Pauli twirling. Here, we consider another case, in which the circuit consists of repetitions of a native gate \(U\), that is, \(U^{L}\). In the spirit of RC, treating \(U\) as the hard gate, we need a twirling group \(\mathbf{T}\) whose elements \(T_{i}\) are mapped to other elements \(T_{j}\) under conjugation by \(U\), that is \(UT_{i}U^{\dagger}=T_{j}\).
A simple example of such a group is the symmetry group of \(U\)
\[\mathbf{T}=\left\{T:U^{\dagger}TU=T\right\}. \tag{18}\]
For the sequence of \(U^{L}\), random gates from the twirling group \(\mathbf{T}\) are introduced before each application of \(U\), but the effect of these random gates should be cancelled before the next \(U\) is applied. Finally, we get a new random sequence
\[T_{L}^{\dagger}U(T_{L}T_{L-1}^{\dagger})\cdots U(T_{i}T_{i-1}^{\dagger}) \cdots U(T_{2}T_{1}^{\dagger})UT_{1} \tag{19}\]
where the gates in parentheses should be implemented as one gate. In actual implementation, all the gates should be associated with a noise, and the gate sequence is denoted as composition of quantum channels
\[\mathcal{T}_{L}^{\dagger}\mathcal{E}_{T}\mathcal{E}\mathcal{U} \mathcal{T}_{L}\mathcal{T}_{L-1}^{\dagger}\mathcal{E}_{T}\cdots\mathcal{E} \mathcal{U}\mathcal{T}_{2}\mathcal{T}_{1}^{\dagger}\mathcal{E}_{T}\mathcal{E }\mathcal{U}\mathcal{T}_{1}\mathcal{E}_{T}\] \[= \mathcal{T}_{L}^{\dagger}\mathcal{E}_{T}\mathcal{E}\mathcal{T}_{L }\mathcal{U}\mathcal{T}_{L-1}^{\dagger}\mathcal{E}_{T}\cdots\mathcal{E} \mathcal{T}_{2}\mathcal{U}\mathcal{T}_{1}^{\dagger}\mathcal{E}_{T}\mathcal{E }\mathcal{T}_{1}\mathcal{U}\mathcal{E}_{T} \tag{20}\]
where we use the property that the gates in group \(\mathbf{T}\) commute with \(U\). We assume the noise of each twirling gate is the same quantum channel \(\mathcal{E}_{T}\) for simplicity (the noisy twirling gates are \(\mathcal{TE}_{T}\)), but this assumption can be relaxed [37].
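A minimal sketch of building one random sequence of this form, with twirling gates drawn from a symmetry group of \(U\) (e.g., \(\mathbf{T}=\{I,Z\}\) for the \(T\) gate); the function works at the matrix level only and merges the bracketed gate pairs, and its name is our own:

```python
import numpy as np

def rc_sequence(U, twirl, L, rng=None):
    """One randomized sequence T_L^dag U (T_L T_{L-1}^dag) ... U (T_2 T_1^dag) U T_1,
    returned as the list of operators in the order they are applied."""
    if rng is None:
        rng = np.random.default_rng()
    Ts = [twirl[k] for k in rng.integers(len(twirl), size=L)]
    ops = [Ts[0]]                                   # T_1 first
    for i in range(1, L):
        ops.append(U)
        ops.append(Ts[i] @ Ts[i - 1].conj().T)      # merged (T_{i+1} T_i^dag)
    ops += [U, Ts[-1].conj().T]                     # last U, then T_L^dag
    return ops

# e.g. twirl = [np.eye(2), np.diag([1.0, -1.0])]    # the group {I, Z}
```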
After averaging many such random sequences we get
\[\left(\frac{1}{N_{T}}\sum_{\mathcal{T}_{L}}\mathcal{T}_{L}^{\dagger}\mathcal{E }_{T}\mathcal{E}\mathcal{T}_{L}\right)\mathcal{U}\cdots\left(\frac{1}{N_{T}} \sum_{\mathcal{T}_{1}}\mathcal{T}_{1}^{\dagger}\mathcal{E}_{T}\mathcal{E} \mathcal{T}_{1}\right)\mathcal{U}\mathcal{E}_{T} \tag{21}\]
where \(N_{T}\) is the number of gates in the twirling group \(\mathbf{T}\). The effect of this randomization procedure is to transform the noise into stochastic noise, that is, to apply a random quantum channel \(\mathcal{T}^{\dagger}\mathcal{E}_{T}\mathcal{E}\mathcal{T}\) with probability \(\frac{1}{N_{T}}\). One can use group representation theory to obtain a simpler form of the noise, but in our case this subtlety is not necessary. This procedure is similar to that of Ref. [60]; however, we do not require the twirling group to be abelian, nor do we need the assumption that the symmetry group contains no repeated irreducible representations. Thus our method has high flexibility in choosing the twirling group.
We use a simulated experiment to show the performance of this procedure. We benchmark \(T\) gate under a unitary error \(R_{X}(\delta\theta)\) with varied error angle \(\delta\theta\) and fixed stochastic error \(\delta p=0.001\). We choose \(\mathbf{T}=\left\{I,Z\right\}\) as twirling group. For each original circuit, we generate \(N_{r}=10\) random circuits and each random circuit is run for \(N_{s}=10^{3}\) times.
Figure 6: Benchmarking of \(T\) gate with randomized compiling. In simulation, stochastic error is fixed (\(\delta p=0.001\)) and unitary is \(R_{X}(\delta\theta)\) with varied error angle \(\delta\theta\). The twirling group is \(\mathbf{T}=\left\{I,Z\right\}\). For each original circuit, we generate \(N_{r}=10\) random circuits and each random circuit is run for \(N_{s}=10^{3}\) times.
The theory in Appendix B shows that the process infidelity estimated by our method is of order \(O(\delta\theta^{4})\) without the use of randomized compiling. However, by introducing randomized compiling, our method can accurately estimate the process infidelity, as demonstrated in Fig. 6. It is important to note that the process infidelity measured using randomized compiling on a native gate includes the noise from both the target gate and the twirling gates, since the twirling gates are not merged into the original circuit in the same way as when using randomized compiling on circuit fragments. For simplicity, we did not add noise to the twirling gates in this case. To obtain the infidelity of the target gate alone, it is necessary to benchmark the twirling gates separately and subtract their contribution from the overall infidelity, similar to the process used in interleaved RB [34].
|
2303.17544 | TorKameleon: Improving Tor's Censorship Resistance with K-anonymization
and Media-based Covert Channels | Anonymity networks like Tor significantly enhance online privacy but are
vulnerable to correlation attacks by state-level adversaries. While covert
channels encapsulated in media protocols, particularly WebRTC-based
encapsulation, have demonstrated effectiveness against passive traffic
correlation attacks, their resilience against active correlation attacks
remains unexplored, and their compatibility with Tor has been limited. This
paper introduces TorKameleon, a censorship evasion solution designed to protect
Tor users from both passive and active correlation attacks. TorKameleon employs
K-anonymization techniques to fragment and reroute traffic through multiple
TorKameleon proxies, while also utilizing covert WebRTC-based channels or TLS
tunnels to encapsulate user traffic. | Afonso Vilalonga, João S. Resende, Henrique Domingos | 2023-03-30T17:11:00Z | http://arxiv.org/abs/2303.17544v3 | TorKameleon: Improving Tor's Censorship Resistance With K-anonimization and Media-based Covert Channels
###### Abstract
The use of anonymity networks such as Tor and similar tools can greatly enhance the privacy and anonymity of online communications. Tor, in particular, is currently the most widely used system for ensuring anonymity on the Internet. However, recent research has shown that Tor is vulnerable to correlation attacks carried out by state-level adversaries or colluding Internet sensors. Therefore, new and more effective solutions emerged to protect online anonymity. Promising results have been achieved by implementing covert channels based on media traffic in modern anonymization systems, which have proven to be a reliable and practical approach to defend against powerful traffic correlation attacks. In this paper, we present TorKameleon, a censorship evasion solution that better protects Tor users from powerful traffic correlation attacks carried out by state-level adversaries. TorKameleon can be used either as a fully integrated Tor pluggable transport or as a standalone anonymization system that uses K-anonymization and encapsulation of user traffic in covert media channels. Our main goal is to protect users from machine and deep learning correlation attacks on anonymization networks like Tor. We have developed the TorKameleon prototype and performed extensive validations to verify the accuracy and experimental performance of the proposed solution in the Tor environment, including state-of-the-art active correlation attacks. As far as we know, we are the first to develop and study a system that uses both anonymization mechanisms described above against active correlation attacks.
Keywords: Censorship Circumvention · Tor · Network Traffic Correlation Attacks · WebRTC-based Traffic Encapsulation · K-anonymization
## 1 Introduction
Tor [1] is a low-latency anonymous network based on the Onion Routing protocol. It provides anonymity to its users by using network paths (i.e., Tor circuits) consisting of multiple nodes or proxies (i.e., Tor relays) to route traffic from the user to its destination. In theory, this ensures unlinkability between an incoming flow and the corresponding outgoing flow leaving the Tor network.
Anonymization systems like Tor aim to provide unobservability and prevent detection while being unblockable for users [2]. However, Tor makes a tradeoff between usability and privacy. Tor alone does not obfuscate traffic characteristics, which allows attackers to use statistical analysis and machine learning models to identify pairs of input and output network flows that share similar characteristics. Therefore, Tor is vulnerable to deanonymization attacks [3, 4, 5, 6, 7, 8, 9], with a large percentage of circuits vulnerable to correlation attacks by network-level and state-level adversaries. Specifically, 40% are vulnerable to network-level adversaries, 42% to colluding network-level adversaries, and 85% to state-level adversaries, with up to 95% in some countries [10].
The Tor project has developed pluggable transports to prevent deanonymization attacks against Tor and relay blocking. Pluggable transports use client-side software to obfuscate Tor traffic on the user's device, along with server-side software on the entry Tor relay, to receive and deobfuscate traffic. This approach randomizes and conceals the metadata and characteristics of incoming traffic and makes it difficult to perform traffic correlation attacks. Pluggable transports improve the privacy of the Tor network and make it more resilient to blocking and censorship efforts.
However, most well-known pluggable transport systems [11, 12, 13] are vulnerable to deanonymization attacks and therefore may be ineffective [14, 15, 16, 17] against a state-level adversary performing statistical analysis of traffic. To counter this, new standalone anonymization systems [18, 19] have been developed that encapsulate traffic in media protocols and can resist passive correlation attacks. However, these systems are independent of Tor and have not yet been tested against active correlation attacks, in which an attacker disrupts the data stream by altering its behavior. The effectiveness of these new systems against these types of attacks has not yet been demonstrated.
To defend against such attacks, we developed TorKameleon, an Internet censorship evasion tool that can be used as a fully integrated Tor pluggable transport, as a standalone system, or in a combination of both modes. It uses traffic encapsulation to hide Tor or user traffic in WebRTC [20] video conferencing streams or TLS tunnels, and it also K-anonymizes user input circuits by creating networks of TorKameleon proxies and users through which user traffic can be fragmented and routed. We show that by encapsulating traffic in WebRTC video conferences alone, TorKameleon can withstand deep-learning-based active correlation attacks from state-level adversaries while maintaining reasonable throughput for low-throughput Internet tasks.
The contributions of this work can be summarised as follows: (1) A full specification of the TorKameleon system based on K-anonymization and WebRTC-based covert channels or secure TLS tunneling; (2) An implementation of the designed solution available as an open-source prototype; (3) A comprehensive experimental evaluation of the system in terms of performance and unobservability against active correlation attacks.
The remainder of the article is organized as follows. Section 2 gives a background and related work on Tor pluggable transport, active correlation attacks,
K-anonymization, and encapsulation in media traffic. Section 3 describes the TorKameleon system model and its main features, and Section 4 describes the prototype implementation. Section 5 presents the experimental evaluation and Section 6 concludes the paper.
## 2 Related Work
Active and passive correlation attacks have become an increasingly pressing problem for the Tor network and other anonymizing systems, especially given the ability of large state organizations, such as intelligence agencies, to access the Internet backbone and the ability of repressive regimes to control large portions of the Internet. In this context, it is important to describe both passive and active correlation attacks, as well as some of the mechanisms and approaches we use that aim to defend against these types of attacks.
#### 2.0.1 Passive and active correlation attacks
Correlation attacks [4, 6, 8, 9, 7] refer to techniques used to extract information and create user profiles of a specific target or deanonymize communicating endpoints in a network. These attacks can be carried out by state-level adversaries who control multiple autonomous systems (AS) regions and collude with organizations such as ISPs. When it comes to Tor, an attacker controlling both inbound and outbound Tor relays in a circuit will attempt to correlate inbound and outbound traffic to determine which pairs of flows are part of the same overall flow. By analyzing metadata such as inter-packet arrival times, packet lengths, and volumes, the censor can confirm with a high degree of probability that a particular user is using a particular Web service. Passive correlation attacks [4, 5, 7] intercept traffic to obtain information, while active correlation attacks [6, 9] inject a watermark into packets to uniquely identify traffic. This involves inserting a recognizable pattern into traffic passing through a specific point in the network in the hope that the manipulated traffic can be identified by the watermark at any network segment the attacker wishes to observe.
#### 2.0.2 Tor pluggable transports
Over the years, there has been significant development of Tor's pluggable transports to mitigate correlation attacks [11, 13, 12]. Currently, the Tor project supports three pluggable transports [11], namely Snowflake, Meek, and Obfs4. Although Meek and Obfs4 use different obfuscation methods, they both suffer from the same problem of being observable or easily detected and blocked [16, 17, 21]. Snowflake also uses WebRTC to encapsulate data but uses data channels (which are normally used to transmit arbitrary data) instead of media channels (which are used by our system). It has been shown that Snowflake is not unobservable [15].
#### 2.0.3 K-anonymization
Samarati and Sweeney [22] proposed K-anonymization in 1998 as a method for anonymizing privacy-sensitive database records that must
be disclosed. To ensure anonymity, the group record set must encompass a minimum of K individuals. The same principle has been applied to different domains so that the probability of an attacker correctly identifying the target is at most 1/K. Efforts have been made to develop K-anonymization systems for Tor traffic, such as TorK [23] and Tir [24]. However, these systems have only been tested against passive correlation attacks and are not currently deployed in the wild.
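The 1/K guarantee can be illustrated with a short Monte Carlo sketch (illustrative only, and assuming an idealized pool in which mixing removes all distinguishing metadata): an adversary who observes a flow leaving a pool shared by K cooperating users can do no better than a uniform guess.

```python
import random

def adversary_success(K, trials=100_000):
    """Best-case guess rate against an ideally mixed pool of K users."""
    hits = 0
    for _ in range(trials):
        origin = random.randrange(K)   # the true originator of the flow
        guess = random.randrange(K)    # uniform guess: no metadata survives
        hits += origin == guess
    return hits / trials

for K in (2, 5, 10):
    print(K, round(adversary_success(K), 3))   # approaches 1/K
```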
#### 2.0.4 Media tunneling solutions
Media tunneling encodes data into popular streaming applications' audio or video streams, allowing for covert data transmission, and is commonly used to bypass Internet censorship. As media streaming protocols make up a significant portion of online traffic [25, 26], they are ideal for covert data tunneling. However, many standalone systems [27, 28, 29, 30] have not been proven to withstand statistical analysis and deep packet inspection with machine learning models [2, 21]. Protozoa [18] and Stegozoa [19] are WebRTC-based solutions that are resilient to current machine traffic analysis techniques, but they have not been tested against active correlation attacks, are not integrated into the Tor ecosystem, and are not easily deployable.
## 3 System Model
TorKameleon is a Tor pluggable transport that integrates WebRTC traffic encapsulation and TLS tunneling to securely transmit Tor traffic between the user and TorKameleon Tor Bridges (Tor entry relays). In addition, it can be combined with TorKameleon proxies to form a pre-Tor network of TorKameleon proxies to route traffic between them and decouple user traffic from the user itself before it is forwarded to the Tor network via a TorKameleon Tor Bridge.
The software package consists of three main components, namely the TorKameleon Tor Bridge, the TorKameleon Proxy, and the TorKameleon Gateway. The TorKameleon Gateway runs on the user's device and serves as the primary interface for accessing the TorKameleon proxies and bridges. The TorKameleon Proxy and TorKameleon Tor Bridge components are used to set up proxies and bridges, respectively.
Figure 1: System model of the TorKameleon transport and the limitations of the state-of-the-art (state-of-the-art at the top, our system at the bottom).
Figure 1 shows that one of the main advantages of the TorKameleon pluggable transport over the state-of-the-art is its Tor integration and the improved resilience of the Tor network to correlation attacks. The WebRTC encapsulation mechanism can render ineffective the efforts of a censor attempting to correlate incoming and outgoing Tor network flows. Correlating inbound WebRTC-encapsulated traffic with outbound TLS traffic is difficult even for modern deep-learning-based correlation attacks, all the more so if the traffic was routed through the Tor network.
Figure 2 shows the normal workflow of the TorKameleon solution when used both as a pluggable transport and as a pre-Tor network of TorKameleon proxies (what we call the TorKameleon environment). First (1), user A sets the network path to be used via the TorKameleon gateway. Then (2), a connection is established to the first proxy through a TLS tunnel or a covert WebRTC channel embedded in a video conference between the TorKameleon gateway and the TorKameleon proxy. Next (3), a second connection is established between the first proxy and the second proxy, also over a covert WebRTC channel embedded in a video conference or a TLS tunnel. This process is repeated for as many proxies as there are in the network path established by the TorKameleon gateway. Then (4), the last TorKameleon proxy sends the user traffic, now as Tor traffic, to the TorKameleon Tor bridge using one of the encapsulation methods described above. Finally (5, 6), the TorKameleon Tor bridge forwards the Tor traffic until the final destination is reached.
The pre-Tor network of K proxies allows users to coordinate with K-1 other users to deploy their own proxies while generating covert traffic, creating a larger traffic pool that can mask individual users' traffic and allow them to access desired content while "hiding" in the crowd.
Figure 2: System model and workflow of the TorKameleon ecosystem. When using the pluggable transport without proxies, the user connects directly to the TorKameleon Tor bridge through the TorKameleon gateway.
### Threat Model
We assume a state-level adversary that can cooperate with organizations such as ISPs and other governments. The main goal of the censor is to detect and block TorKameleon usage without affecting legitimate WebRTC and TLS connections. The censor can observe, collect, analyze, and interfere with all network traffic originating from the user, TorKameleon proxies, TorKameleon bridges, and the Tor network provided that all network segments accessed are under its jurisdiction or that of the adversary parties involved.
However, we assume that the software installed on users' devices and TorKameleon Tor bridges and proxies is not tampered with and that the censor does not block or arbitrarily disrupt all WebRTC video conferences or TLS communications due to the potential collateral damage it could cause, as these protocols are widely used by organizations and services that strongly support the regimes financially.
### System Architecture
In Figure 3 we can see how the system architecture of TorKameleon is built. We focus mainly on the architecture of TorKameleon's pluggable transport, since it is the hardest to understand, but we also explain how the architecture of the TorKameleon proxy works at the end of the section. When a client wants to use the TorKameleon pluggable transport service, it first starts the TorKameleon gateway and the Tor daemon (the Tor software) on its local machine. The Tor daemon connects to the TorKameleon gateway via a SOCKS5 proxy. Traffic can now be routed through the Tor daemon to the TorKameleon gateway. This traffic can be any user traffic that is supported by Tor (the figure uses the Chrome browser as an example, but it can also be any other user application that can use the Tor daemon as a proxy). Tor traffic is now managed by the controller, which is responsible for determining how traffic is encapsulated, the size of the packets to be encapsulated, and where traffic should be sent (in the figure, the De/Encapsulation request). Once the traffic is encapsulated, it is sent to the TorKameleon Tor bridge (on the right of the figure). The traffic is then decapsulated according to the type of encapsulation used, and the decapsulated Tor traffic is sent to the Tor network via the Tor daemon running on the bridge, through a reverse proxy. The responses sent by the bridge follow the same flow as described above.
Figure 3: System architecture of the TorKameleon pluggable transport.
When the TorKameleon proxies are used, the workflow is the same, with a few differences. The traffic that goes through the TorKameleon gateway is sent directly from the user application and not from the Tor daemon, so no SOCKS5 connection is needed (a simple connection can be established from the user application to the TorKameleon gateway). The TorKameleon gateway sends traffic to other proxies via WebRTC encapsulation or TLS tunneling along a network path configured by the controller. The last proxy in the path sends traffic locally to the Tor daemon, which also runs on the proxy device, and the workflow in Figure 3 is executed to send the traffic to the Tor network.
### Media Traffic Encapsulation
In this section, we turn our attention to the WebRTC encapsulation component (see Figure 3). A browser-based videoconferencing web application was developed using the WebRTC technology stack to enable video conferencing between two participants. The application was designed to encapsulate user traffic in WebRTC video frames, with the video serving as the carrier for the traffic.
We now explain how the WebRTC-based application is initialized and how the data encapsulation process works.
#### 3.3.1 WebRTC Initialization
To launch the WebRTC-based web application, a browser must be launched with the web application's code. For this purpose, when TorKameleon is launched, it automatically starts a local server that provides the web page with the scripts and files needed to run the web application. In our system, we have two participant roles for the initialization process, the initiator and the receiver of the connection, and both have different roles in establishing the connection. Therefore, we use two local servers, one that serves the web application in receiver mode and one that serves the web application in initiator mode. To automate browser functions and access the WebRTC web application, the Selenium [31] framework was used. WebSockets were necessary to connect the TorKameleon system to the WebRTC-based web application, as regular socket connections are prohibited in browsers for security reasons. The data received by the TorKameleon system is converted to a Base64 string and sent to the web application via a WebSocket connection. The web application encapsulates the data in video frames and sends the video stream to the other remote peer (e.g., a TorKameleon Tor Bridge). The data received from the remote peer is decapsulated and converted back into bytes from base64 and sent to the TorKameleon system. Finally, a signaling process [32] ensures that the
connection between peers is established and is mandatory for any WebRTC connection. For this purpose, we have also designed a signaling process and a server for communication establishment.
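As an illustration of the gateway-to-web-application bridge described above, here is a minimal asyncio sketch in Python (the prototype itself is written in Java/JavaScript; this sketch assumes the third-party `websockets` package, and the payload is a stand-in): raw bytes are Base64-encoded before crossing the WebSocket boundary, and decapsulated messages are decoded back to bytes.

```python
import asyncio
import base64
import websockets  # assumed third-party dependency

async def handle(ws):
    # Downstream: bytes destined for encapsulation -> Base64 text frames.
    payload = b"\x16\x03\x01\x00\x2a"          # stand-in chunk of Tor traffic
    await ws.send(base64.b64encode(payload).decode("ascii"))
    # Upstream: Base64 strings decapsulated by the web app -> raw bytes.
    async for message in ws:
        data = base64.b64decode(message)
        print(f"received {len(data)} decapsulated bytes")

async def main():
    async with websockets.serve(handle, "127.0.0.1", 8765):
        await asyncio.Future()                  # serve forever

if __name__ == "__main__":
    asyncio.run(main())
```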
#### 3.3.2 WebRTC Data Encapsulation
Figure 4 shows the workflow of WebRTC encapsulation. First, we split the video stream to be transmitted into its audio and video tracks so that the video track can be isolated and processed. Next, we extract the video frames from the video track and use one of the developed data encoding mechanisms (see Section 3.4) to insert the data to be encapsulated into the frames (this data is sent from the user application or Tor to the TorKameleon system and transferred to the WebRTC application via Websockets, as mentioned earlier). Finally, we combine the audio track with the new video track containing the encapsulated data to create a new media stream. This stream is then transmitted over the network to the other participant in the video conference, which can be a TorKameleon Tor bridge, a proxy, or a user. The traffic resulting from this process is no different from normal WebRTC traffic to a censor or attacker. Thus, the covert content remains unobservable. It is important to note that the read frames have already been encoded by the video codec, and therefore there is no need to implement a robust method to protect against lossy video codecs.
### Data Encoding
With TLS encapsulation, no additional implementation effort is required other than setting up the secure socket between the communication participants (i.e., an SSL tunnel). However, this is not the case with WebRTC data encapsulation.
Once the web application receives an array of bytes from TorKameleon with a specific size of X bytes (user configurable and managed by the controller), it stores the array and waits for a video frame to be transmitted so it can encapsulate the data. This array of bytes is called a data block.
WebRTC encapsulation has two different modes for embedding data blocks in frames: ADD (the data block is appended to the end of the frame without removing or replacing any content) and REPLACE (the content of the video frame is replaced with the data block to be encoded, leaving the frame header untouched). A brief description of each mode follows.
Figure 4: Workflow for encapsulating user traffic in WebRTC video frames.
The ADD mode integrates the entire data block into an ADD mode packet without fragmentation, and the packet is attached to a single video frame. To ensure that the packets are sorted at the receiver, the packet contains three header fields: the packet sequence number, the length of the data block, and a special code that indicates the beginning of the packet and that the frame contains encapsulated data. The data is transmitted in the remaining part of the packet. This mode allows for higher throughput but compromises unobservability due to the increase in frame size.
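A possible framing for ADD-mode packets is sketched below in Python; the paper specifies only the three header fields (start code, sequence number, data-block length), so the field order, widths, and the magic value here are assumptions.

```python
import struct

MAGIC = 0x544B                      # hypothetical "TK" start-of-packet code
HEADER = struct.Struct("!HIH")      # magic, packet sequence number, block length

def add_encode(seq: int, block: bytes) -> bytes:
    """Build an ADD-mode packet to be appended to a single video frame."""
    return HEADER.pack(MAGIC, seq, len(block)) + block

def add_decode(frame_tail: bytes):
    """Return (seq, data) if the frame carries covert data, else None."""
    magic, seq, length = HEADER.unpack_from(frame_tail)
    if magic != MAGIC:
        return None
    return seq, frame_tail[HEADER.size:HEADER.size + length]

print(add_decode(add_encode(7, b"covert bytes")))   # (7, b'covert bytes')
```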
The REPLACE mode is more complex than the ADD mode because it may require fragmenting the data block into smaller chunks based on the available size of the video frame used for encapsulation. Because of this possible fragmentation and the required reassembly at the receiver side, it was necessary to keep the fields used in the ADD packet and to add new ones. In particular, we included the following fields in the header: the LC flag field, which indicates whether the packet contains the last chunk of a particular data block (it is set when the entire data block fits in a single packet, or when the chunk sent is the last of a particular data block); and the seg num field, which carries the sequence number of the chunk so that chunks can be reordered. Unlike the ADD mode, the REPLACE mode may have lower throughput (depending on the frame size) because it does not increase the frame size. However, it provides better unobservability guarantees, since the frame size remains unchanged.
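REPLACE-mode fragmentation and reassembly can be sketched the same way (again, the field widths and layout are assumptions; only the LC flag and seg num fields are taken from the text):

```python
import struct

MAGIC = 0x544B                   # same hypothetical start code as in ADD mode
FRAG = struct.Struct("!HIHBH")   # magic, block seq, chunk length, LC flag, seg num

def fragment(block_seq: int, block: bytes, room: int):
    """Split a data block into chunks that fit the payload room of one frame."""
    chunks = [block[i:i + room] for i in range(0, len(block), room)] or [b""]
    for seg, chunk in enumerate(chunks):
        last = 1 if seg == len(chunks) - 1 else 0
        yield FRAG.pack(MAGIC, block_seq, len(chunk), last, seg) + chunk

class Reassembler:
    """Reorders chunks by seg num and releases the block once LC is seen."""
    def __init__(self):
        self.parts, self.saw_last = {}, False

    def feed(self, pkt: bytes):
        _, _, length, last, seg = FRAG.unpack_from(pkt)
        self.parts[seg] = pkt[FRAG.size:FRAG.size + length]
        self.saw_last |= bool(last)
        if self.saw_last and len(self.parts) == max(self.parts) + 1:
            return b"".join(self.parts[k] for k in sorted(self.parts))
        return None

r, out = Reassembler(), None
for pkt in fragment(3, b"A" * 5000, room=1400):
    out = r.feed(pkt) or out
print(out == b"A" * 5000)        # True
```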
## 4 Implementation
We developed the TorKameleon prototype using Java and JavaScript. The prototype consists of two main components: the WebRTC-based web application and the TorKameleon core system. It comprises about 4,000 lines of code and is publicly available in a GitHub repository for research purposes ([https://github.com/AfonsoVi/TorKameleon](https://github.com/AfonsoVi/TorKameleon)). It can be used by practitioners and researchers alike to study censorship circumvention techniques and possible extensions of the core solution. The TorKameleon system is dockerized and has been developed and tested on Ubuntu 20.04.
We used JavaScript to develop the WebRTC-based web application. The web application is served by a local nodeJS server that uses the EJS framework (version 3.1.8) for HTML templating. We also used the WebRTC JavaScript APIs [33] to develop the base of the web application and the Insertable Streams API [34] to embed the covert data into the video frames. The signaling server was also developed in nodeJS and is a simple backend server that accepts secure WebSocket connections from the web application using the library Socket.IO (version 4.0).
The TorKameleon system core was developed using Java 11 and uses the Selenium framework (version 4.4.0 for Java) to programmatically launch the web
application using the Chrome browser without a GUI to access local nodeJS servers. For the Selenium framework to work correctly, a special browser driver for Chrome must be used. The Chrome driver version 104.0.5112.79 was used, and a WebSockets Java library [35] implementation was utilized for communication and data exchange between the web application and video frames.
## 5 Evaluation
Our experimental evaluation focused on two main goals: first, to determine the resilience of the WebRTC encapsulation mechanism to active correlation attacks; and second, to evaluate the performance impact of using such mechanisms compared to normal Tor usage. In this section, we describe the experimental results in terms of performance, resource utilization, and unobservability.
### Setup
Our experimental setup consisted of four machines, three of which were virtual private servers (VPS) provided by the OVH service. These servers had the following hardware specifications: an 8-core Intel processor running at 2.4 GHz, 32 GB of RAM, and 2 Gbps of bandwidth. The fourth machine was a local machine with an Intel i5-9300H CPU (4 cores at 2.4 GHz) and 16 GB of RAM. All machines ran Ubuntu 20.04 as the operating system for our tests.
The local machine acted as the user/client machine, while one of the VPS servers acted as a TorKameleon Tor bridge deployed in the UK. The second VPS server acted as a TURN/STUN server for the WebRTC connection and was deployed in France. Finally, the last VPS server acted as an HTTP server and was deployed in Canada. To reduce latency and throughput variation between experiments, we fixed the Tor circuit relays (three relays) used in our observations. The middle relay was deployed in Germany, and the exit relay in the Netherlands.
### Metrics and Methodology
In our evaluation, performance was measured using two parameters: Throughput and Latency. Throughput was calculated by downloading a 250 KB file from the HTTP server, while latency was measured using the httping tool to measure the time it took to get the first byte of the response to an HTTP or HTTPS request to the server. We compared our results with those of Tor vanilla (Tor without our solution) to assess the performance impact of our system. To ensure the accuracy of our measurements, we took five samples of download time for the throughput measurement and ten samples of latency for each experiment. We then calculated the average of these samples to determine the final throughput or latency values. We repeated each experiment twice to ensure the consistency and reliability of our results.
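For reference, the averaging described here amounts to the following small helper (a sketch; the sample times below are made up, merely chosen to land near the Tor-vanilla throughput reported later):

```python
def throughput_kbps(file_bytes: int, download_times: list[float]) -> float:
    """Average throughput over repeated downloads of the same file."""
    return sum(8 * file_bytes / 1000 / t for t in download_times) / len(download_times)

samples = [0.39, 0.41, 0.38, 0.40, 0.39]             # seconds, illustrative values
print(round(throughput_kbps(250_000, samples), 1))   # ~5100 kbps
```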
The number of daily users of bridges is difficult to estimate because of their nature. Many bridges appear to have a small number of users, while a small number of bridges are used by most users. Based on monthly statistics [36], we decided on a maximum of 50 parallel clients, although we expect TorKameleon Tor Bridges to have fewer users in parallel.
We used TorMarker [9] for unobservability tests and measured FPR, TPR, and accuracy. FPR evaluates the reliability of the model by indicating the percentage of regular data flows that are misidentified as watermarked streams, while TPR measures the ability of the model to detect watermarks in traffic. Accuracy evaluates the model's precision in correctly classifying regular and watermarked traffic.
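In terms of the confusion-matrix counts of a detector, these three metrics are computed as follows (the counts below are illustrative, not measured values):

```python
def detector_metrics(tp: int, fp: int, tn: int, fn: int):
    """TPR, FPR, and accuracy of a watermark detector."""
    tpr = tp / (tp + fn)                    # watermarked flows correctly flagged
    fpr = fp / (fp + tn)                    # regular flows wrongly flagged
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall correct classifications
    return tpr, fpr, acc

print(detector_metrics(tp=270, fp=45, tn=255, fn=30))   # (0.9, 0.15, 0.875)
```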
### Performance
To evaluate the performance of the system, we conducted tests using TorKameleon as a pluggable transport.
#### 5.3.1 Throughput
The left graph in Figure 5 shows the throughput of TorKameleon when used as a pluggable transport with TLS encapsulation. As the graph shows, there is little difference in throughput across the different data block sizes.
The right graph in Figure 5 illustrates the throughput of a client using the WebRTC-based encapsulation mode while 49 additional clients are incrementally added to download the same file. Of the 49 clients, four use the WebRTC-based encapsulation mode, while the remaining 45 use the TLS encapsulation mode. For every nine clients using TLS tunneling, the tenth client uses the WebRTC-based encapsulation mode. We did this to avoid overloading the TorKameleon Tor bridge (see Section 5.4). The graph depicts the throughput values for the ADD and REPLACE modes for different data block sizes. Since TorKameleon is intended to be used and deployed voluntarily by users, a maximum of 5 WebRTC encapsulation users per bridge was considered.
Figure 5: Throughput graphs for different modes of encapsulation and data block sizes. Left: Throughput graph for TLS encapsulation; Right: Throughput graph for WebRTC encapsulation. A-ADD mode; R-REPLACE MODE.
The results we obtain for TLS encapsulation are similar to those obtained with Tor vanilla (5128 Kbps). The results for WebRTC encapsulation, for both ADD and REPLACE, show reasonable and expected reductions that do not make TorKameleon unusable for low-throughput Internet tasks (especially at higher data block sizes), compared to Tor vanilla.
#### 5.3.2 Latency
Figure 6 shows the latency of TLS encapsulation, WebRTC-based encapsulation in ADD mode, and WebRTC-based encapsulation in REPLACE mode, respectively, for the different data block sizes tested. Notably, all encapsulation modes show similar latency values across the different data block sizes.
Our latency results with TLS encapsulation were comparable to those of Tor vanilla (with a measured latency of 398.2 ms for the Tor vanilla baseline). Additionally, the latency values for WebRTC encapsulation were reasonably and predictably higher than those of Tor Vanilla, in both ADD and REPLACE modes. Nonetheless, these latency increases did not render TorKameleon unusable with either encapsulation mode compared to Tor vanilla.
### Resource Utilization
Relevant metrics, including CPU and RAM usage percentage, can be retrieved using the Linux top command. Table 1 shows the CPU usage of the Chrome browser (running the WebRTC web application) and the TorKameleon Java core for the TorKameleon client-side pluggable transport and the TorKameleon server-side pluggable transport on the TorKameleon Tor Bridge. Our analysis shows that the Chrome browser requires about 40% of a CPU core, while the TorKameleon Java core requires only about 1% for a WebRTC connection. The primary conclusion from the data is that the WebRTC-based web application imposes a higher CPU workload compared to other components of TorKameleon. We can also conclude that a bridge with multiple WebRTC-based encapsulation clients must have high CPU processing power to avoid throttling.
Figure 6: Latency graphs for different modes of encapsulation and data block sizes. Left: Latency graph for TLS encapsulation; Right: Latency graph for WebRTC encapsulation. A-ADD mode; R-REPLACE MODE.
### Unobservability
To test for unobservability, we used TorMarker [9], a tool that allows small delays to be inserted into traffic to create an observable watermark in another segment of the network. To detect watermarked traffic in the network, TorMarker uses models based on deep learning.
TorMarker was trained with datasets of 60,000 packets, the same size used in its original experimental evaluation. An amplitude of 120 ms (the induced delay) was chosen for the ingress flows because it provided the best results in terms of accuracy, FPR, and TPR among the tested amplitudes. We also used flow sizes of 150 packets, for the same reasons.
To train TorMarker, we first forwarded 30,000 packets to the HTTP server and sent them through the TorKameleon Tor bridge using TorKameleon as a pluggable transport with the WebRTC encapsulation mechanism. These 30,000 packets are referred to as regular traffic. Next, we forwarded the same 30,000 packets and embedded the watermarks into them, resulting in 30,000 watermarked packets. Both the regular traffic and the watermarked traffic, consisting of 30,000 packets each, were collected and used to train the TorMarker detection component.
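The watermarking idea itself is simple to sketch: small delays are added to the inter-packet times of a flow according to a bit pattern. The mapping below is an illustrative stand-in, not TorMarker's actual encoding.

```python
import random

def watermark(ipds, bits, amplitude=0.120):
    """Add `amplitude` seconds to the inter-packet delay wherever bit == 1."""
    out = list(ipds)
    for i, bit in enumerate(bits[:len(out)]):
        if bit:
            out[i] += amplitude
    return out

flow = [random.expovariate(50) for _ in range(150)]   # a 150-packet flow
marked = watermark(flow, bits=[1, 0, 1, 1, 0] * 30)
print(f"total added delay: {sum(marked) - sum(flow):.2f} s")
```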
Two main conclusions can be drawn from Figure 7. First, the size of the data blocks affects the unobservability of TorKameleon, with unobservability being greater for smaller data blocks; smaller blocks increase the FPR and decrease the TPR and accuracy. Second, the discrepancy between the REPLACE and ADD modes also grows with block size, with the REPLACE mode being more resistant to watermarking attacks.
We claim that TorKameleon is unobservable for data block sizes of 536 and 1050 bytes in the ADD and REPLACE modes, respectively, based on our FPR and accuracy thresholds: an FPR above 10% and an accuracy below 85%. These thresholds were derived from the experimental evaluations of TorMarker [9], DeepCorr [8], and DeepCoFFEA [7], and we argue that they represent the limits beyond which a model can no longer be considered reliable, accurate, and precise. We also propose that TorKameleon can be considered unobservable for data blocks of 2078 bytes in REPLACE mode, since blocking TorKameleon traffic with a data block size of 2078 bytes would cause significant collateral damage to regular WebRTC traffic, although those results are closer to our thresholds.

Table 1: Resource usage of the TorKameleon pluggable transport on the client side (Client) and the server side (Server).

| | Java Core (Client) | Java Core (Server) | Google Chrome (Client) | Google Chrome (Server) |
| --- | --- | --- | --- | --- |
| CPU usage (%) | 1 | 0.3 | 42.6 | 40.2 |
| Memory usage (%) | 2.1 | 0.9 | 1 | 0.5 |
| Shared memory size (KB) | 28923 | 29148 | 100444 | 1062245 |
| Physical RAM usage (KB) | 339088 | 276604 | 167404 | 169976 |
## 6 Conclusion
We have developed TorKameleon, an Internet censorship evasion tool that uses K-anonymization and encapsulation mechanisms for WebRTC and TLS traffic and can resist modern correlation attacks. We have also conducted an extensive performance and resource consumption evaluation and unobservability testing to measure TorKameleon's performance and resilience against active correlation attacks. As far as we know, TorKameleon is the first tool to integrate these two mechanisms.
TorKameleon provides a state-of-the-art, fully integrated Tor pluggable transport with WebRTC-based covert channels that can withstand active correlation attacks. Current WebRTC-based encapsulation systems lack Tor integration and have not been tested against active correlation attacks. We also enable the creation of a pre-Tor network consisting of K proxies through which user traffic can be routed. These are two ideas that had not been combined before.
In the future, we plan to further analyze the code to identify potential optimization areas, as well as investigate different browser options for the prototype web application. We also plan to expand our experimental evaluation to include additional performance and unobservability tests to gain a more comprehensive understanding of the tool's functionality and its ability to withstand correlation attacks and traffic analysis.
|
2304.02850 | Topological spin Hall effect in antiferromagnets driven by vector Néel
chirality | Spin Hall effect of spin-texture origin is explored theoretically for
antiferromagnetic (AF) metals. It is found that a vector chirality formed by
the N\'eel vector gives rise to a topological spin Hall effect. This is
topological since it is proportional to the winding number counted by in-plane
vector chirality along the sample edge, which can be nonvanishing for AF merons
but not for AF skyrmions. The effect is enhanced when the Fermi level lies near
the AF gap, and, surprisingly, at weak coupling with small AF gap. These
features are confirmed numerically based on the Landauer-B\"uttiker formula.
Important roles played by nonadiabatic processes and spin dephasing are pointed
out. | Kazuki Nakazawa, Koujiro Hoshi, Jotaro J. Nakane, Jun-ichiro Ohe, Hiroshi Kohno | 2023-04-06T03:47:56Z | http://arxiv.org/abs/2304.02850v1 | # Topological Spin Hall Effect in Antiferromagnets Driven by Vector Neel Chirality
###### Abstract
Spin Hall effect of spin-texture origin is explored theoretically for antiferromagnetic (AF) metals. It is found that a vector chirality formed by the Neel vector gives rise to a topological spin Hall effect. This is topological since it is proportional to the winding number counted by in-plane vector chirality along the sample edge, which can be nonvanishing for AF merons but not for AF skyrmions. The effect is enhanced when the Fermi level lies near the AF gap, and, surprisingly, at weak coupling with small AF gap. These features are confirmed numerically based on the Landauer-Buttiker formula. Important roles played by nonadiabatic processes and spin dephasing are pointed out.
Spin-charge interconversion has been extensively studied in spintronics with the aim of application to next-generation devices. It is typically achieved by the spin Hall effect (SHE) [1] originating from the relativistic spin-orbit coupling (SOC), mostly in nonmagnetic materials [2; 3; 4; 5; 6]. Ferromagnets (FMs) are another class of materials that enable spin-charge conversion, not just as a simple spin source, but also by emergent electromagnetism due to spatiotemporal magnetization dynamics. In particular, a magnetization texture forming a finite scalar spin chirality simulates a magnetic field that affects electrons' orbital motion but in a spin-dependent way. The resulting Hall effect, often called the topological Hall effect (THE), is thus the SHE in essence [7].
Antiferromagnets (AFs) are materials having both aspects, magnetic at the microscopic scale but nonmagnetic at the (semi)macroscopic scale, and offer a unique platform to generate pure spin currents. A large SHE, originating from SOC, was reported in Ir\({}_{20}\)Mn\({}_{80}\) [9]. Recently, there have been several proposals of SHE arising from antiferromagnetic spin textures, providing another means of pure spin-current generation that does not rely on relativistic SOC.
In this Letter, we explore theoretically the SHE in AF originating from AF spin textures. From the analogy with FMs, an AF with a textured Neel vector \(\mathbf{n}\) is expected to generate a spin Hall current,
\[\tilde{j}_{s,i}^{z}=\tilde{\sigma}_{\rm SH}\mathbf{n}\cdot(\partial_{i}\mathbf{n}\times \partial_{j}\mathbf{n})eE_{j}, \tag{1}\]
under an applied electric field \(E_{j}\) (\(\tilde{\sigma}_{\rm SH}\) is a coefficient, and \(e>0\) is the elementary charge). Because of the scalar chirality, \(\mathbf{n}\cdot(\partial_{i}\mathbf{n}\times\partial_{j}\mathbf{n})\), this effect may be termed as a topological spin Hall (TSH) effect [10; 11; 12]. Such a texture has opposite spin chiralities on different sublattices and deflects spin-up and spin-down electrons in mutually opposite directions; the charge Hall currents then cancel out, but the spin Hall currents add up. However, Eq. (1) is not a macroscopically observable quantity since its sign depends on the definition of sublattice or \(\mathbf{n}\); it changes sign under \(\mathbf{n}\!\rightarrow\!-\mathbf{n}\). We define the physical spin current \(\mathbf{j}_{s,i}\) through \(\tilde{j}_{s,i}^{z}=\mathbf{n}\cdot\mathbf{j}_{s,i}\), hence by
\[j_{s,i}^{\alpha}=\tilde{\sigma}_{\rm SH}(\partial_{i}\mathbf{n}\times\partial_{j} \mathbf{n})^{\alpha}eE_{j}. \tag{2}\]
The factor \((\partial_{i}\mathbf{n}\times\partial_{j}\mathbf{n})^{\alpha}\) may be identified as an emergent magnetic field in spin channel, and interestingly, it can be expressed as \((\partial_{i}a_{j}^{\alpha}-\partial_{j}a_{i}^{\alpha})/2\) with an emergent vector potential,
\[a_{i}^{\alpha}=(\mathbf{n}\times\partial_{i}\mathbf{n})^{\alpha}. \tag{3}\]
This is the vector chirality (\(\sim\mathbf{S}_{1}\times\mathbf{S}_{2}\) for two spins) formed by the Neel vector, and we call it "vector Neel chirality" [13]. The spatially averaged spin current \(\langle j_{s,i}^{\alpha}\rangle\) is proportional to a winding number defined by the vector chirality, and hence is "topological." To date, the vector spin chirality is known to induce charge [14; 15] and (equilibrium) spin currents [16; 17; 18], but its AF counterpart in terms of the Neel vector has received less attention.
In the following, we derive Eqs. (1) and (2) and demonstrate the topological character of Eq. (2). The effect is present in systems with AF merons [19] but not with AF skyrmions [20; 21; 22; 23; 24; 25; 26], and is enhanced in the weak-coupling regime. These results are confirmed numerically based on the Landauer-Buttiker formula.
We consider electrons on a square lattice and coupled to a static spin texture. The Hamiltonian
\[H=-t\sum_{(i,j)}c_{i}^{\dagger}c_{j}-J_{\rm sd}\sum_{i}\mathbf{S}_{i}\cdot(c_{i}^{\dagger}\mathbf{\sigma}c_{i})+u_{\rm i}\sum_{i}{}^{\prime}c_{i}^{\dagger}c_{i}, \tag{4}\]
consists of nearest-neighbor hopping (first term), s-d exchange coupling to localized spins \(\mathbf{S}_{i}\) (second term), and
Figure 1: (a) Static magnetic structure considered in this work, a checkerboard type AF on a square lattice with a very slow spatial modulation. The two sublattices (A or B) are indicated by color (red or blue). (b) Electron dispersion in a uniform AF state. Each subband is spin degenerate.
on-site impurity potential (third term), with electron operators \(c_{i}={}^{t}(c_{i\uparrow},c_{i\downarrow})\) at site \(i\) and Pauli matrices \(\mathbf{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})\). We assume a slowly-varying checkerboard type AF texture, \(\mathbf{S}_{i}=S(-1)^{i}\mathbf{n}_{i}\), where \(\mathbf{n}_{i}\) is the Neel vector varying slowly in space [Fig. 1 (a)].
With a unitary transformation, \(c_{i}=U_{i}\tilde{c}_{i}\), which diagonalizes the s-d coupling, \(U_{i}^{\dagger}(\mathbf{n}_{i}\cdot\mathbf{\sigma})U_{i}=\sigma^{z}\), \(H\) is transformed into \(H=-t\sum_{(i,j)}\tilde{c}_{i}^{\dagger}{\rm e}^{iA_{ij}}\tilde{c}_{j}-J\sum_{i}(-1)^{i}\tilde{c}_{i}^{\dagger}\sigma^{z}\tilde{c}_{i}+u_{\rm i}\sum_{i}{}^{\prime}\tilde{c}_{i}^{\dagger}\tilde{c}_{i}\), where \(J=J_{\rm sd}S\), and \(A_{ij}\) is the spin gauge field defined by \(U_{i}^{\dagger}U_{j}={\rm e}^{iA_{ij}}\) [27; 28]. Because of the slow variation of the texture, \(A_{ij}\) is small and can be treated perturbatively. The unperturbed state (with \(A_{ij}=0\)) is a uniform AF, and the electron band splits into spin-degenerate upper and lower bands, \(\pm E_{\mathbf{k}}\), with an AF gap \(2|J|\) in between [Fig. 1 (b)]. Here, \(E_{\mathbf{k}}\equiv\sqrt{\varepsilon_{\mathbf{k}}^{2}+J^{2}}\) with \(\varepsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})\). Also, \(A_{ij}\) can be treated in the continuum approximation, \(A_{ij}\to A_{\mu}\), where \(\mu\) (\(=x,y\)) specifies the bond direction of \((i,j)\), and expanded as
\[A_{\mu}=\frac{1}{2}A_{\mu}^{\alpha}\sigma^{\alpha}=\frac{1}{2}(A_{\mu}^{z} \sigma^{z}+\mathbf{A}_{\mu}^{\perp}\cdot\mathbf{\sigma}^{\perp}), \tag{5}\]
where \(\alpha=x,y,z\) and \(\perp=x,y\). The spin-conserving component \(A^{z}\) describes adiabatic processes, whereas the spin-flip component \(\mathbf{A}^{\perp}\) induces nonadiabatic transitions. In FM, the latter can be important only in the weak-coupling regime [3], but in AF, both are important because of spin degeneracy of the AF bands. Both produce the same effective field, \((\nabla\times\mathbf{A}^{z})_{z}=(\mathbf{A}_{x}^{\perp}\times\mathbf{A}_{y}^{\perp})^{ z}=\mathbf{n}\cdot(\partial_{x}\mathbf{n}\times\partial_{y}\mathbf{n})\).
To calculate the spin Hall conductivity, \(\sigma_{\rm SH}\equiv\frac{1}{2}(\sigma_{xy}^{z}-\sigma_{yx}^{z})\), we assume a good AF metal and focus on the Fermi-surface contribution [30],
\[\sigma_{ij}^{z}(\mathbf{Q})=-\frac{e\hbar}{4\pi}{\rm Tr}\left\langle\mathcal{J}_{ \rm s,i}^{z}G_{\mathbf{k}_{+},\mathbf{k}^{\prime}}^{\rm R}\mathcal{J}_{j}G_{\mathbf{k}^{ \prime},\mathbf{k}_{-}}^{\rm A}\right\rangle_{\rm i}, \tag{6}\]
where \(\mathcal{J}_{\rm s,i}^{z}\) and \(\mathcal{J}_{j}\) are spin-current and number-current vertices, \({\rm Tr}\) means the trace in spin, sublattice, and \(\mathbf{k}\) spaces (\(\mathbf{k}\), \(\mathbf{k}^{\prime}\)-integrals), and \(\langle\cdots\rangle_{\rm i}\) represents impurity average. The Green's function \(G_{\mathbf{k},\mathbf{k}^{\prime}}^{\rm R(A)}=(\mu-H\pm i0)_{\mathbf{k},\mathbf{k}^{\prime}}^{-1}\) takes full account of impurities and the gauge field, and \(\mathbf{k}_{\pm}=\mathbf{k}\pm\mathbf{Q}/2\). We treat the impurity scattering in the Born approximation with ladder vertex corrections (VC) [31]. The superscript \(z\) on \(\sigma_{ij}^{z}\) and \(\mathcal{J}_{\rm s,i}^{z}\) indicates the spin component in the rotated frame, thus it is the component _projected to the local Neel vector \(\mathbf{n}\)_.
After a standard procedure (see Supplemental Material [31]), we obtain Eq. (1) with \(\tilde{\sigma}_{\rm SH}=\tilde{\sigma}_{\rm SH}^{(0)}+\tilde{\sigma}_{\rm SH}^ {(1)}\),
\[\tilde{\sigma}_{\rm SH}^{(0)} =(J\tau)^{2}\frac{t^{2}\nu}{\mu}\left(1-\frac{J^{2}}{\mu^{2}} \right)C_{xy}, \tag{7}\] \[\tilde{\sigma}_{\rm SH}^{(1)} =(J\tau)^{2}\frac{t^{2}\nu}{\mu}\frac{8t^{2}}{\mu^{2}+J^{2}} \left(\frac{\tau^{-1}}{\tau_{\varphi}^{-1}+\tau_{\rm s}^{-1}}\right)C_{xx}^{2}, \tag{8}\]
where \(\tilde{\sigma}_{\rm SH}^{(0)}\) is the contribution without VC, which comes from both adiabatic and nonadiabatic processes, and \(\tilde{\sigma}_{\rm SH}^{(1)}\) is the contribution with VC, coming only from nonadiabatic processes. Here, \(\nu=\nu(\mu)\) is the density of states (per spin) at chemical potential \(\mu\), \(C_{ij}=\langle 1-\cos k_{\rm i}\cos k_{j}\rangle_{\rm FS}\) is the Fermi surface average [32], \(\tau=[\gamma_{0}+(J/\mu)\gamma_{3}]^{-1}/2\) is the scattering time (\(\gamma_{0}=\pi n_{\rm i}u_{\rm i}^{2}\nu\) and \(\gamma_{3}=(J/\mu)\gamma_{0}\) are the sublattice-independent and dependent parts, respectively, of the damping, and \(n_{\rm i}\) is the impurity concentration), and
\[\frac{1}{\tau_{\varphi}}=\frac{4J}{\mu}\frac{\mu^{2}+J^{2}}{\mu^{2}-J^{2}} \gamma_{3}=\frac{2J^{2}}{\mu^{2}-J^{2}}\frac{1}{\tau}, \tag{9}\]
is the "spin dephasing" rate [2]. We introduced a finite spin relaxation rate \(\tau_{\rm s}^{-1}\) by hand [34]; without \(\tau_{\rm s}^{-1}\), we would have an unphysical result that \(\tilde{\sigma}_{\rm SH}^{(1)}\) does not vanish in the limit \(J\to 0\). Note that \(\tau_{\varphi}^{-1}\) differs from \(\tau_{\rm s}^{-1}\) in that it does not require spin-dependent scattering, randomizes only the transverse (\(\perp\mathbf{n}\)) components of the electron spin (see below), and vanishes as \(J\to 0\). The results (7) and (8) are obtained at the leading order, i.e., second order in spatial gradient and second order in \(\tau\).
The coefficients \(\tilde{\sigma}_{\rm SH}^{(0)}\) and \(\tilde{\sigma}_{\rm SH}^{(1)}\) are plotted in Fig. 2 (a). They are comparable in magnitude at large \(J\) (\(\sim 1.5t\)),
but as \(J\) is reduced, \(\tilde{\sigma}_{\rm SH}^{(1)}\) grows markedly whereas \(\tilde{\sigma}_{\rm SH}^{(0)}\) decreases. The sum \(\tilde{\sigma}_{\rm SH}=\tilde{\sigma}_{\rm SH}^{(0)}+\tilde{\sigma}_{\rm SH}^{( 1)}\) is plotted in Fig. 2 (b) by solid lines, which grows as \(J\) is reduced, especially near the AF gap edge, but finally vanishes at \(J=0\). Since \(\tilde{\sigma}_{\rm SH}^{(1)}\) comes solely from nonadiabatic processes, these results show that the combined effect of nonadiabaticity and the VC is important for the present SHE [35]. Physically, a nonadiabatic process produces a transverse spin polarization, and the VC describes its collective transport, which is however limited by spin dephasing [28; 2; 36]. The origin of the enhancement at small \(J\) can be traced to the reduced dephasing at small \(J\)[31]. As seen from Eq. (9), the spin dephasing arises through \(\gamma_{3}\), a sublattice asymmetry in (nonmagnetic) scattering [37; 28], and its physical picture is illustrated in Fig. 3.
The obtained result, Eq. (1), needs to be interpreted with care. It arises with the scalar chirality formed by the Neel vector \(\mathbf{n}\), and changes sign under \(\mathbf{n}\to-\mathbf{n}\). This is not a pleasant situation since any physical quantity measurable by (semi)macroscopic means should not depend on the sign of \(\mathbf{n}\), or on the definition of sublattice. This (apparent) puzzle is resolved if we note that the spin component of the calculated spin current \(\tilde{j}_{x,{\rm s}}^{z}\) is the one projected to the Neel vector \(\mathbf{n}\). Therefore, we write \(\tilde{j}_{\rm s}^{z}=\mathbf{n}\cdot\mathbf{j}_{\rm s}\) and identify \(\mathbf{j}_{\rm s}\) as a physical spin current. The physical spin Hall current is thus given by Eq. (2).
It is in fact possible to obtain Eq. (2) directly. By assuming \(J\) is small and treating it perturbatively, we found the spin current arises at second order in \(J\)[31],
\[j_{{\rm s},i}^{\alpha}=(J\tau)^{2}\frac{t^{2}\nu}{\mu}C_{xy}(\partial_{i}\mathbf{n }\times\partial_{j}\mathbf{n})^{\alpha}eE_{j}. \tag{10}\]
This contrasts with the THE in FM caused by scalar spin chirality, which starts at third order (\(\sim J^{3}\)) [38], and demonstrates that the essential quantity for the present SHE is the vector (not scalar) chirality. That Eq. (10) is an even function of \(J\) (or \(J\mathbf{n}\)) is consistent with the fact that the spin current is even under time reversal.
The expression Eq. (2) holds locally in space (as far as the variation of \(\mathbf{n}\) is sufficiently slow). As a spin current measured experimentally, we consider a spatially-averaged one, \(\langle\mathbf{j}_{\rm s}^{\alpha}\rangle=\Omega^{-1}\int\mathbf{j}_{\rm s}^{\alpha} dxdy\) (in two dimensions), where \(\Omega\) is the sample area. It can be written as
\[\langle\mathbf{j}_{\rm s}^{\alpha}\rangle=\pi\tilde{\sigma}_{\rm SH}\frac{m^{ \alpha}}{\Omega}\left(e\mathbf{E}\times\hat{z}\right), \tag{11}\]
where
\[m^{\alpha}=\frac{1}{2\pi}\int(\nabla\times\mathbf{a}^{\alpha})_{z}dxdy=\frac{1}{2 \pi}\oint\mathbf{a}^{\alpha}\cdot d\mathbf{\ell}, \tag{12}\]
and \(a_{i}^{\alpha}=(\mathbf{n}\times\partial_{i}\mathbf{n})^{\alpha}\) [Eq. (3)] is an emergent vector potential in the spin channel. The line integral is taken along the sample perimeter. If the system has easy-plane magnetic anisotropy, and the Neel vector on the sample edge is constrained to lie in-plane, e.g., in the \(x\)-\(y\) plane, the line integral of the vector chirality defines a topological winding number \(m^{z}\in\mathbb{Z}\) in \(\pi_{1}(S^{1})\). The spin Hall conductivity is thus proportional to the topological number density \(m^{z}/\Omega\), and this fact justifies the name "topological" spin Hall effect (TSHE). We emphasize that it is characterized by the vector chirality of the Neel vector along the sample edge. Therefore, the present TSHE is absent for AF skyrmions, in which the Neel vector at the edge is uniaxial. On the other hand, it is finite for AF merons, which have a finite in-plane winding of the Neel vector along the edge.
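As a sanity check of Eq. (12), the short numpy sketch below evaluates the edge line integral of \(a^{z}\) for an idealized meron profile (the profile, sizes, and discretization are illustrative, not the paper's simulation setup); the result is an integer winding number, \(\approx 1\) here, whereas the same integral vanishes for a texture whose edge Neel vector is uniaxial, as for an AF skyrmion.

```python
import numpy as np

def neel_meron(L, r):
    """Idealized AF meron: out-of-plane core, in-plane winding Neel vector beyond r."""
    y, x = np.mgrid[0:L, 0:L] - (L - 1) / 2.0
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    theta = np.pi / 2 * np.minimum(rho / r, 1.0)   # tilts from +z at the core
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def edge_winding(n):
    """m^z = (1/2pi) * closed line integral of a^z = (n x dn)^z along the edge."""
    L = n.shape[0]
    loop = ([n[0, j] for j in range(L)] +                  # bottom, left -> right
            [n[i, L - 1] for i in range(1, L)] +           # right, bottom -> top
            [n[L - 1, j] for j in range(L - 2, -1, -1)] +  # top, right -> left
            [n[i, 0] for i in range(L - 2, 0, -1)])        # left, top -> bottom
    loop = np.array(loop + [loop[0]])                      # close the contour
    a, b = loop[:-1], loop[1:]
    # (n x dn)^z = n_x dn_y - n_y dn_x, accumulated step by step
    return np.sum(a[:, 0] * (b[:, 1] - a[:, 1]) -
                  a[:, 1] * (b[:, 0] - a[:, 0])) / (2 * np.pi)

print(round(edge_winding(neel_meron(L=80, r=15)), 3))      # ~1.0 for the meron
```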
To verify these results, we have conducted numerical works based on the four-terminal Landauer-Buttiker formula [39]. We consider ballistic systems with \(L\times L\) sites without disorder, and containing a single AF skyrmion or a single AF meron. For both textures, the spin Hall conductance \(G_{\rm SHC}^{z}\) shows a strong peak just below the AF gap [Fig. 4 (a) and (b)], which, however, behave oppositely as \(L\) is increased (with the skyrmion/meron size fixed); for the AF skyrmion the peak decreases with \(L\) and seems to vanish in the thermodynamic limit. In contrast, for the AF meron it increases with \(L\) [Fig. 4 (c)]. This is consistent with the analytical result, which is valid for infinite-size systems. Plots for several \(J/t\) values are shown in Fig. 4 (d) for the AF meron system, showing that it is indeed enhanced at small \(J/t\). All these agree with the analytic results, except for the detailed shape of \(\mu\)-dependence.
Figure 3: Physical picture of electron spin transport in a uniform antiferromagnet. The blue sphere with an arrow represents an electron, the green arrow a localized spin, and the red star a nonmagnetic impurity. (a) The electron spin precesses around the local moment, alternating its sense from site to site. (b) Interaction with impurities locally modifies the precession. (c) A "collective" transverse spin density contributed by many electrons decays and loses its original information through impurity scattering, because the degree of the modification mentioned in (b) varies from electron to electron. This is called "dephasing," and the characteristic length is the "dephasing length" \(\lambda_{\varphi}=\sqrt{D\tau_{\varphi}}\). The orange stars represent averaged impurities.
The discrepancy in shape (\(\mu\)-dependence) between the numerical [Fig. 4 (d)] and analytic [Fig. 2 (b)] results may be understood as due to the nonlocality effect in the former. To illustrate this, let us first consider the diffusive regime. As the typical wave number \(q\) of the Neel texture (i.e., the inverse of the meron/skyrmion size) is increased, Eq. (8) is modified as
\[(\tau_{\varphi}^{-1}+\tau_{\rm s}^{-1})^{-1}\rightarrow(\tau_{\varphi}^{-1}+ \tau_{\rm s}^{-1}+Dq^{2})^{-1}, \tag{13}\]
in the denominator, where \(D\) is the diffusion constant. When electron spin diffusion (\(Dq^{2}\)) occurs faster than spin dephasing (\(\tau_{\varphi}^{-1}\)), the effective field becomes "nonlocal". Similar feature has been noted for FMs, in which \(Dq^{2}\) is compared to the exchange splitting [3]. Here in AF, it is compared to the (much smaller) spin dephasing rate, \(\tau_{\varphi}^{-1}\), hence the present SHE enters the nonlocal regime rather easily compared to the THE in FM. More explicitly, the nonlocality appears if
\[ql>\frac{2J}{\sqrt{\mu^{2}-J^{2}}},\quad\text{or}\quad|\mu|>J\sqrt{1+(2/ql)^{ 2}}, \tag{14}\]
where \(l\) is the mean free path. In Fig. 2 (b), the analytic results with \(q^{-2}=6000\) (with lattice constant taken unity) are plotted by dotted lines. The suppression due to nonlocality is more significant at larger \(|\mu|\) (away from the AF gap), leaving a sharp peak in the vicinity of the AF gap edge. Since cleaner systems enter the nonlocal regime more easily [see Eq. (14) and a red dotted line in Fig. 2 (c)], this feature is expected to persist into the ballistic regime with a wider nonlocality region. The shape of Fig. 4 (d) may thus be understood as due to the nonlocality effect.
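Equations (9) and (14) are straightforward to evaluate; the helper below (with illustrative parameter values, not values from the paper beyond \(J/t=0.3\) and the meron radius of 15 sites) checks whether a given texture/disorder combination falls in the nonlocal regime.

```python
import numpy as np

def dephasing_rate(mu, J, tau):
    """Eq. (9): 1 / tau_phi = 2 J^2 / (mu^2 - J^2) / tau."""
    return 2 * J**2 / (mu**2 - J**2) / tau

def is_nonlocal(mu, J, q, l):
    """Eq. (14): texture gradients outrun dephasing when q*l > 2J / sqrt(mu^2 - J^2)."""
    return q * l > 2 * J / np.sqrt(mu**2 - J**2)

# Illustrative values: J/t = 0.3 as in the numerics, meron radius ~15 sites,
# and an assumed mean free path of 200 sites.
print(is_nonlocal(mu=1.0, J=0.3, q=1 / 15, l=200))   # True -> nonlocal regime
```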
Thus, as in the case of THE in FM [3], the present TSHE in AF exhibits various characteristic regimes [Fig. 2 (c)]. These are summarized as follows. First, for a ballistic and local regime, the effect is truly topological. As \(q\) is increased and the nonlocal effects become important, the SHC deviates from the topological expression. In the diffusive case, it is difficult to have the topological expression because of dephasing (and nonlocality), but the effect is enhanced for weak-coupling AF with small AF gap. An interesting possibility may be found in mesoscopic systems, for which the effect can be topological even if the system is in a diffusive regime.
The emergent vector potential \(\mathbf{a}^{\alpha}\) in the spin channel, identified here through the TSHE, has wider generality. In a study of THE in canted AFs [28], an emergent vector potential in the charge channel was identified as \(l^{\alpha}\mathbf{a}^{\alpha}\), where \(l^{\alpha}\) is the (uniform) canting moment. Also, \(\mathbf{a}^{\alpha}\) can be expressed as \(a_{i}^{\alpha}=-(\mathcal{R}\mathbf{A}_{i}^{\perp})^{\alpha}\) [40], where \(\mathcal{R}\) is an SO(3) matrix that connects the rotated and original frames (e.g., \(\mathbf{n}=\mathcal{R}\hat{z}\)), showing its conformity with the spin gauge field \(\mathbf{A}_{i}^{\perp}\). These facts reinforce our interpretation of \(\mathbf{a}^{\alpha}\) as an effective vector potential in the spin channel.
To realize the present TSHE experimentally, a prime candidate texture is \(\mathbf{n}\)-meron. Such texture was found very recently in \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)[19], which is however an insulator; search for metallic systems is desired. Another candidate is a canted AF; if the ferromagnetic moment \(\mathbf{l}\) (due to canting) forms a skyrmion (called "I-skyrmion" in [28]), topological consideration shows that the Neel vector winds at least twice around the skyrmion, i.e., \(m^{z}=2\) per skyrmion [41; 28]. A recent experiment on thin films of Ce-doped CaMnO\({}_{3}\), a canted AF, observed skyrmion bubbles formed by the (weak) _ferromagnetic_ moment [42]. Therefore, this system can also be a candidate for the present TSHE.
Finally, we discuss the relationship with previous theoretical studies. In Ref. [11], the TSH conductivity was investigated in an AF skyrmion lattice with a focus on the intrinsic (Berry curvature) contribution. It is a future issue to investigate such an intrinsic contribution in our framework. For example, one may consider a "meron-antimeron lattice" which contains a vortex with \(m=\pm 2\) per unit cell. In Ref. [12], the Landauer-Buttiker method was used to study the TSHE in AF skyrmion systems, and a finite TSHE was found for _finite-size systems_, which does not contradict our result because of the finite system size. More importantly, an increase of the spin Hall conductivity was pointed out for special impurity configurations, and it is also interesting to investigate how the impurity configuration affects the spin dephasing and TSHE.
To summarize, we have studied a spin Hall effect due to magnetic textures in AF metals. By analytic calculations, we found a topological contribution proportional to the winding number defined by vector chirality. This is finite for AF merons but not for AF skyrmions, and is enhanced at weak coupling. These results may provide hints to enhance spin currents and give directions for experimental investigations and device fabrications. The results are confirmed by numerical calculations based on the Landauer-Buttiker formula. Important roles played by nonadiabatic processes and spin dephasing are pointed out.

Figure 4: Topological spin Hall conductance (\(G_{\rm SHC}^{z}\)) based on the Landauer-Buttiker formula for finite systems with \(L\times L\) sites. (a) AF skyrmion system. (b) AF meron system. (c) \(L\)-dependence of the peak value of \(G_{\rm SHC}^{z}\). The data are fitted with the functions \(f(x)=286x-0.504\) and \(g(x)=0.237/x-2.22\). (d) AF meron system with \(L=70\) for several choices of \(J/t\). We took \(J/t=0.3\) [except in (d)] and meron/skyrmion radius \(r=15\). The data are symmetrized with respect to \(J\rightarrow-J\), as explained in [31].
We would like to thank A. Yamakage, J. Fujimoto, T. Yamaguchi, Y. Imai, A. Matsui, and T. Nomoto for valuable discussions. This work was partly supported by JSPS KAKENHI Grant Numbers JP15H05702, JP17H02929, JP19K03744 and JP21H01799, and the Center of Spintronics Research Network of Japan. KN is supported by JST CREST (JPMJCR18T2) and JSPS KAKENHI Grant Number JP21K13875. JN is supported by a Program for Leading Graduate Schools "Integrative Graduate Education and Research in Green Natural Sciences" and Grant-in-Aid for JSPS Research Fellow Grant Number JP19J23587.
|
2301.00899 | Deep reinforcement learning for irrigation scheduling using
high-dimensional sensor feedback | Deep reinforcement learning has considerable potential to improve irrigation
scheduling in many cropping systems by applying adaptive amounts of water based
on various measurements over time. The goal is to discover an intelligent
decision rule that processes information available to growers and prescribes
sensible irrigation amounts for the time steps considered. Due to the technical
novelty, however, the research on the technique remains sparse and impractical.
To accelerate the progress, the paper proposes a principled framework and
actionable procedure that allow researchers to formulate their own optimisation
problems and implement solution algorithms based on deep reinforcement
learning. The effectiveness of the framework was demonstrated using a case
study of irrigated wheat grown in a productive region of Australia where
profits were maximised. Specifically, the decision rule takes nine state
variable inputs: crop phenological stage, leaf area index, extractable soil
water for each of the five top layers, cumulative rainfall and cumulative
irrigation. It returns a probabilistic prescription over five candidate
irrigation amounts (0, 10, 20, 30 and 40 mm) every day. The production system
was simulated at Goondiwindi using the APSIM-Wheat crop model. After training
in the learning environment using 1981-2010 weather data, the learned decision
rule was tested individually for each year of 2011-2020. The results were
compared against the benchmark profits obtained by a conventional rule common
in the region. The discovered decision rule prescribed daily irrigation amounts
that uniformly improved on the conventional rule for all the testing years, and
the largest improvement reached 17% in 2018. The framework is general and
applicable to a wide range of cropping systems with realistic optimisation
problems. | Yuji Saikai, Allan Peake, Karine Chenu | 2023-01-02T23:13:20Z | http://arxiv.org/abs/2301.00899v2 | # Deep reinforcement learning for irrigation scheduling using high-dimensional sensor feedback
###### Abstract
Deep reinforcement learning has considerable potential to improve irrigation scheduling in many cropping systems by applying adaptive amounts of water based on various measurements over time. The goal is to discover an intelligent decision rule that processes information available to growers and prescribes sensible irrigation amounts for the time steps considered. Due to the technical novelty, however, the research on the technique remains sparse and impractical. To accelerate the progress, the paper proposes a general framework and actionable procedure that allow researchers to formulate their own optimisation problems and implement solution algorithms based on deep reinforcement learning. The effectiveness of the framework was demonstrated using a case study of irrigated wheat grown in a productive region of Australia where profits were maximised. Specifically, the decision rule takes nine state variable inputs: crop phenological stage, leaf area index, extractable soil water for each of the five top layers, cumulative rainfall and cumulative irrigation. It returns a probabilistic prescription over five candidate irrigation amounts (0, 10, 20, 30 and 40 mm) every day. The production system was simulated at Goondiwindi using the APSIM-Wheat crop model. After training in the learning environment using 1981-2010 weather data, the learned decision rule was tested individually for each year of 2011-2020. The results were compared against the benchmark profits obtained using irrigation schedules optimised individually for each of the considered years. The discovered decision rule prescribed daily irrigation amounts that achieved more than 96% of the benchmark profits. The framework is general and applicable to a wide range of cropping systems with realistic optimisation problems.
_Keywords_: APSIM, artificial intelligence, crop modelling, management optimisation, precision agriculture, wheat
## 1 Introduction
Fresh water is becoming a scarce resource in many parts of the world, and its use in agriculture increasingly needs to be optimised. While there are a number of approaches to irrigation optimisation, irrigation scheduling using advanced sensor technologies has considerable potential to apply
the right amount of water at the right time based on monitored plant, soil, and atmospheric conditions [1]. In operationalising precision irrigation, a significant challenge is to devise an intelligent decision rule that prescribes a sensible irrigation amount at the time of each decision making based on inputs from a variety of crop and environmental measurements [2].
While precision irrigation, as a form of precision agriculture, holds the promise to increase resource use efficiency by exploiting advanced technologies, it is also faced with the challenge--the technologies are too complicated to fully exploit in practice [3; 4]. For example, drip irrigation is a prototypical practice of precision irrigation, enabling precise control of irrigation rate and timing. For determining rates and timings, Abioye et al. [1] listed 18 basic parameters that can be readily monitored using available sensor devices: 10 crop parameters including leaf area index and sap flow, 4 soil parameters including soil moisture and salinity, and 4 weather parameters including temperature and rainfall. The question is, given the data stream generated by a variety of sensors, how to sensibly determine when and how much to activate the drip irrigation system. In other words, the task is to devise an intelligent decision rule that sequentially prescribes irrigation amounts based on high-dimensional sensor feedback in order that the irrigated amounts are collectively sensible to achieve overall production goals such as profit maximisation.
As if reflecting the difficulty of the task, most studies have been addressing irrigation optimisation problems using only low-dimensional sensor feedback and often focus on only irrigation timings without thorough consideration of irrigation amounts. For example, the vast majority of studies considers a single source of feedback at a time: soil moisture or soil water deficit [5]. In addition, irrigation rules often handle only timings and use irrigation amounts needed to replenish the soil to field capacity [5]. If soil moisture is the only feedback information and production goals are yield maximisation, then it is intuitive and probably reasonable to assume that replenishment of soil water is likely to reach the aim. However, the assumption here ignores the availability of other sources of feedback information. Moreover, real-world optimisation problems are rarely as simple as unconstrained yield maximisation. For instance, in regions with water restrictions, applying deficit irrigation to large cropping areas is often more profitable for the farm than focusing on fully irrigated crops for only small areas [6; 7].
A standard method to find optimal decision rules in real-world systems is model predictive control (MPC), which has also been adopted in agricultural decision problems [8] including irrigation scheduling [9]. Crucially, MPC requires mathematical models of how systems evolve over time. While, in many physical systems, such models of dynamics can be derived based on first principles such as Newtonian mechanics, there are in general no such first principles for complex agricultural systems [10] due to the significant non-linearity in responses and states that characterise these systems [11; 12]. Therefore, to apply MPC, dynamics needs to be estimated using data (i.e., system identification), which is feasible only in low-dimensional cases. Indeed, "most of the existing works on system identification are based on the soil moisture equation without capturing the changing dynamics of soil, plant, and weather" [9, p. 2]. This again ignores the availability of high-dimensional sensor feedback.
Reinforcement learning (RL) is a subfield of machine learning, which intends to discover intelligent decision rules without prior knowledge of systems dynamics [13]. As a form of machine learning, RL relies on data that encodes key information of decision-making experience in an environment of interest over time. Relevant data include both feedback from the decision environment (e.g., monitored crop, soil, and weather) and actions taken by following decision rules (e.g., irrigation amounts). In cases of high-dimensional feedback, intelligent decision rules are most likely complex functions that can capture intricate relations between feedback and appropriate actions. A standard approach to learning such complex functions from data is to take advantage of the representational power of deep learning [14]. Hence, deep RL has emerged as a promising approach
to discovering intelligent decision rules using high-dimensional feedback [15].
Despite the tremendous potential for irrigation scheduling, there exist only a handful of RL studies in the literature [16, 17], among which only two studies employ deep RL incorporating high-dimensional feedback [18, 19]. This is in contrast to the widespread adoption of deep learning as supervised learning methods [20] and reflects the technical novelty of deep RL in the agricultural community. While both Yang et al. [18] and Chen et al. [19] successfully apply deep RL to specific problems, their implementations involve several impractical components, making it difficult for other researchers to apply the methods to different problems. First, Yang et al. [18] used a fixed weather pattern in every simulation of crop production. Since weather is the most important random aspect in crop production, the omission undermines the practical relevance of the study. Second, Yang et al. [18] used in-season yield estimates from the simulator as performance signals to facilitate the learning. In reality, such estimates are unreliable especially at early stages. Practical RL methods avoid relying on unrealistic extra information and try to overcome the common challenge of sparse performance signals [21]. Third, Chen et al. [19] constructed a learning environment by manually specifying component processes of systems dynamics for the particular problem. In general, it is costly in time and other resources to manually construct complex agricultural systems, which typically comprise hundreds or even thousands of component processes. Since distinct farms/crops necessitate construction of distinct environments, use of crop simulators is more scalable and practical as it allows researchers to create pertinent environments characterised by crop types, soil, weather, and management practices for their unique problems.
To accelerate the research on deep RL for irrigation scheduling, this paper provides a general framework and actionable procedure that facilitate individual applications of deep RL using high-dimensional sensor feedback. The framework consists of a formal mathematical formulation of an optimisation problem (Section 2.1), a solution algorithm (Section 2.2), and a procedure for constructing learning environments and implementing the algorithm (Section 2.3). In describing the procedure, key aspects of specifying both learning environments and learning algorithms are emphasised. To demonstrate the effectiveness of the framework, a simulation study was conducted with the APSIM-Wheat crop model for irrigated wheat at Goondiwindi, Australia. A profit-maximising decision rule was learned in the simulated environment using 1981-2010 weather data, and it was tested using 2011-2020 weather data. The resulting profit for each of the testing years was compared against the benchmark profit obtained using an irrigation schedule optimised specifically for that particular year (Section 2.4 and 3). Finally, the discussion includes analysis of the case study, key assumptions, limitations, and future directions of the framework (Section 4).
## 2 Materials and methods
### Irrigation optimisation problem
Suppose that irrigation management starts at time \(t=0\) (e.g., day of sowing) and ends at time \(t=T-1\) (e.g., day of reaching a certain growth stage). The unit of time \(t\) may be a day, several days, or a week. At each time step \(t\in\{0,1,\ldots,T-1\}\), the environment provides the state of the system \((S_{t})\). A decision rule uses the observed state as inputs/feedback and prescribes an action \((A_{t})\) that specifies an irrigation amount for time step \(t\). After taking action \(A_{t}\), the environment also provides a "reward" \((R_{t+1})\), which comprises immediate costs and benefits of taking action \(A_{t}\). In irrigation optimisation, specifically, a reward may include costs of irrigation (e.g., water costs) at every time step and crop revenue added at the last time step. The process starts with some \(S_{0}\) at \(t=0\) and continues until \(t=T-1\). This single process, consisting of a sequence of state observations, actions, and rewards, forms an episode of RL. The quantity that the decision-making
agent tries to maximise is the sum of rewards:
\[\sum_{t=0}^{T-1}R_{t+1}=R_{1}+R_{2}+\cdots+R_{T}.\]
In the example of water costs and crop revenue, \(R_{1},R_{2},\ldots,R_{T-1}\) are negative numbers (costs) while \(R_{T}\) is a positive number (revenue) minus the cost of irrigation that occurred at the last time step (if any). Since a reward at \(t+1\) depends on a state and an action at \(t\), \(R_{t+1}\) is technically a function of \(S_{t}\) and \(A_{t}\), i.e., \(R_{t+1}=R(S_{t},A_{t})\). So, the quantity to maximise can be rewritten as follows:
\[\sum_{t=0}^{T-1}R(S_{t},A_{t})=R(S_{0},A_{0})+R(S_{1},A_{1})+\cdots+R(S_{T-1}, A_{T-1}). \tag{1}\]
Let \(\pi\) denote a stochastic decision rule, a mathematical function that maps each possible state (\(s\)) to a probability of taking action (\(a\)), i.e., \(\pi(a,s)=\mathbb{P}(A=a|S=s)\) where \(A\) and \(S\) are random variables. In other words, decision rule \(\pi\) probabilistically prescribes an irrigation amount \(a\) given the state observation \(s\). While decision rules can be deterministic, stochastic decision rules are more common in applications. Using \(\pi\), the ultimate goal of learning can be formally expressed as solving the following maximisation problem:
\[\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{T-1}R(S_{t},A_{t})\right],\]
where \(A_{t}\) is chosen according to \(\pi\) and \(\mathbb{E}\) is the expectation with respect to the randomness in both states and decision rule. Equivalently, the goal is to discover an optimal decision rule \(\pi^{*}\) such that
\[\pi^{*}\in\operatorname*{argmax}_{\pi}\mathbb{E}\left[\sum_{t=0}^{T-1}R(S_{t},A_{t})\right]. \tag{2}\]
However, when solving complex optimisation problems using deep RL, it is computationally infeasible to find the exact \(\pi^{*}\). To illustrate this, suppose five candidate actions to choose from at each of 150 time steps. The total number of possible action sequences is \(5^{150}(\approx 10^{105})\), which is too large for any existing computer to find an optimal sequence from. Consequently, a practical goal is to approximately solve the problem (2) by finding a sub-optimal yet good enough decision rule \(\pi^{\dagger}\) such that
\[\mathbb{E}\left[\sum_{t=0}^{T-1}R(S_{t},A_{t}^{\dagger})\right]\approx \mathbb{E}\left[\sum_{t=0}^{T-1}R(S_{t},A_{t}^{*})\right], \tag{3}\]
where \(\pi^{\dagger}(a,s)=\mathbb{P}(A_{t}^{\dagger}=a|S_{t}=s)\) and \(\pi^{*}(a,s)=\mathbb{P}(A_{t}^{*}=a|S_{t}=s)\).
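For intuition, the expectation in (2)-(3) can be estimated for any fixed decision rule by plain Monte Carlo: simulate many episodes under \(\pi\) and average the realised episode returns. Below is a minimal Python sketch; `simulate_episode` is a hypothetical stand-in for the simulated environment, assumed to return the realised states, actions, and rewards of one episode.

```python
import statistics

def estimate_value(policy, simulate_episode, n_episodes=1000):
    """Monte Carlo estimate of E[sum_t R(S_t, A_t)] under `policy`.

    `simulate_episode` (hypothetical) runs one full episode following
    `policy` and returns (states, actions, rewards).
    """
    returns = []
    for _ in range(n_episodes):
        _, _, rewards = simulate_episode(policy)
        returns.append(sum(rewards))  # R_1 + R_2 + ... + R_T
    return statistics.mean(returns)
```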
### Solution algorithm
As mentioned above, in practice, solving the maximisation problem (2) is equivalent to learning a good decision rule that sequentially processes a stream of states, actions, and rewards. There are many RL algorithms that can discover equally good decision rules. While both Yang et al. [18] and Chen et al. [19] employ Q-learning, a popular class of algorithms, the framework presented in this paper assumes another class of algorithms called policy gradient that is conceptually simpler, which helps to lower the technical barriers and allow applied researchers to focus on their applied issues. Recall that the goal of RL is to learn an optimal decision rule through trial-and-error
experience. In Q-learning, however, what is improved throughout the learning is not a decision rule but instead a mathematical object called an action-value function, from which a decision rule is indirectly derived. In contrast, a policy gradient method explicitly maintains and directly improves a decision rule throughout the learning [13]. Among several variants of policy gradient, this paper adopts the classical algorithm REINFORCE [22].
A description of the algorithm is provided below.
```
1: require: \(\pi,\theta_{0},\alpha,N\)
2: for \(n\in\{1,2,\ldots,N\}\) do
3:   \(\theta\leftarrow\theta_{n-1}\)
4:   Simulate an episode \(S_{0},A_{0},R_{1},\ldots,S_{T-1},A_{T-1},R_{T}\) by following \(\pi_{\theta}\)
5:   \(G\gets 0\)
6:   for \(t\in\{T-1,T-2,\ldots,0\}\) do
7:     \(G\gets G+R_{t+1}\)
8:     \(\theta\leftarrow\theta+\alpha G\nabla\log\pi_{\theta}(A_{t},S_{t})\)
9:   \(\theta_{n}\leftarrow\theta\)
10: return Best \(\theta\) among \(\{\theta_{1},\theta_{2},\ldots,\theta_{N}\}\)
```
**Algorithm 1** REINFORCE
\(\pi_{\theta}\) denotes a specific decision rule parameterised by \(\theta\). In other words, within a class of decision rules \(\pi\), a particular decision rule is specified by \(\theta\), which is gradually changed throughout the learning process, and the goal of learning is to find \(\theta^{\dagger}\) that specifies a good decision rule \(\pi^{\dagger}=\pi_{\theta^{\dagger}}\). \(\alpha\) is a small positive number that controls how much \(\theta\) changes after each episode throughout the learning process. \(N\) is the total number of training episodes. \(G\) is an intermediate dummy variable. Finally, log and \(\nabla\) denote respectively the natural logarithm and the gradient with respect to \(\theta\).
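To make the pseudocode concrete, the following Python sketch (using PyTorch for the gradient \(\nabla\log\pi_{\theta}\)) mirrors Algorithm 1; it is an illustration, not the authors' C#/TensorFlow.NET implementation. For simplicity, the per-step updates of Line 8 are accumulated into one gradient step per episode, which yields the same gradient direction when \(\theta\) changes little within an episode. `simulate_episode` is again a hypothetical stand-in for the environment, returning states as float tensors and actions as integer indices into the candidate set.

```python
import torch

def reinforce(policy, simulate_episode, alpha=1e-7, n_episodes=20_000):
    """REINFORCE (Algorithm 1), batched per episode."""
    optimizer = torch.optim.SGD(policy.parameters(), lr=alpha)
    best_theta, best_avg, recent = None, float("-inf"), []
    for _ in range(n_episodes):
        states, actions, rewards = simulate_episode(policy)
        # Lines 6-7: returns-to-go G_t = R_{t+1} + ... + R_T.
        G, returns = 0.0, []
        for r in reversed(rewards):
            G += r
            returns.append(G)
        returns.reverse()
        # Line 8 (accumulated): ascend sum_t G_t * grad log pi(A_t, S_t),
        # implemented as gradient descent on the negated objective.
        loss = torch.zeros(())
        for s, a, g in zip(states, actions, returns):
            loss = loss - g * torch.log(policy(s)[a])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Line 10: keep the parameters with the best moving-average profit.
        recent = (recent + [sum(rewards)])[-100:]
        avg = sum(recent) / len(recent)
        if avg > best_avg:
            best_avg = avg
            best_theta = {k: v.clone() for k, v in policy.state_dict().items()}
    return best_theta
```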
Note that this section immediately follows the problem formulation (Section 2.1) to highlight solution algorithms as a key component of the framework. However, a choice of algorithms depends on the nature of a specific problem, so these two components need to be jointly considered when applying the framework in practice. For example, the more complex a problem, the more sophisticated the required algorithm is. Details are discussed in Section 4.
### Learning procedure
As stated in the introduction, RL is a technique for discovering a good decision rule for the environment considered through decision-making experience. While environments can be real-world settings, in most cropping systems, it is infeasible to carry out a sufficient number of field trials (i.e., episodes) in the real world. Hence, in the current framework, environments are created by simulations with APSIM [23]. Learning takes place in these simulated environments by following RL algorithms that try to solve optimisation problems.
When a learned decision rule is intended to be deployed or tested in a real-world system, it is vital to ensure that the simulated environment is of high fidelity, capturing salient features of the corresponding real-world situation; otherwise, the learned decision rule would be used in too distinct conditions to perform adequately. As noted in many studies (e.g., Berghuijs et al. [24] and Collins and Chenu [25]), to create high-fidelity APSIM simulations for particular real-world scenarios, the following factors should be carefully specified:
* meteorological information,
* soil characteristics,
* cultivar characteristics, and
* management practices (except irrigation schedule, which is actively optimised).
Next, the optimisation problem (Section 2.1) needs to be specified with respect to each of the following:
* State variables \((S_{t})\). Recall that state variables are inputs from various sensors that provide useful information for sequential decisions. Thus, selection of state variables from hundreds of APSIM simulated variables is based on usefulness, similarity, and monitoring capabilities in the corresponding real-world situation. Similarity means close correspondence between information provided by real-world measurement and information provided by APSIM simulated variables. For example, the crop stage variable in APSIM corresponds to a physiological development stage that can easily be measured by skilled operators in the real world. Monitoring capabilities are often limited by physical and economic constraints. For example, the crop stage is updated and accessible on a daily basis in APSIM, whereas skilled operators may only measure it at key stages.
* Candidate actions \((A_{t})\). Candidate actions are possible irrigation amounts over which a stochastic decision rule determines a probability distribution at each time of decision making. For example, suppose irrigation of 0, 20, and 40mm are candidate actions for stage 14.5 and a LAI of 0.17 at \(t=150\) (\(S_{150}=(14.5,0.17)\)). Then, a decision rule \(\pi(a,s)\) may prescribe \[\pi(0,(14.5,0.17)) =0.8\] \[\pi(20,(14.5,0.17)) =0.12\] \[\pi(40,(14.5,0.17)) =0.08\] implying that, given the state \(S_{150}=(14.5,0.17)\), \(A_{150}=0\), \(A_{150}=20\), and \(A_{150}=40\) are chosen with probability 0.8, 0.12, and 0.08 respectively (a minimal sampling sketch is given after this list). Similar to the above constraints, capabilities of the irrigator may limit possible candidate actions. For example, even if APSIM can precisely simulate each of candidate amounts \(\{0,5,10,15,\dots\}\) (mm), a real-world irrigator may lack such precision. In this case, the candidate set should be chosen coarser (e.g., \(\{0,10,20,\dots\}\)).
* Reward characteristics \((R_{t+1})\). As mentioned in Section 2.1, a reward \(R_{t+1}\) is a net benefit of taking action \(A_{t}\) under state \(S_{t}\) at time \(t\). The reward function \(R(S_{t},A_{t})\) should be specified so that the sum of rewards over the irrigation management window from \(t=0\) to \(t=T-1\) is the quantity that the decision-making agent hopes to maximise. Aside from the example of water costs and crop revenue illustrated above, reward may alternatively consist of some water use efficiency metrics.
* Unit of time \((t)\), start time \((t=0)\), and end time \((t=T-1)\) of irrigation management. Although a day is the default unit in many cases as APSIM adopts it, different units can be chosen based on other practical considerations including state monitoring and irrigator capabilities. The start and end times should also reflect practicality in the intended real-world situation to help the learning algorithm return a sensible decision rule.
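As a minimal sketch of the stochastic prescription discussed in the candidate-actions item above (probability values taken from that example; nothing here is specific to APSIM):

```python
import numpy as np

candidate_actions = [0, 20, 40]   # mm, from the example above
probs = [0.8, 0.12, 0.08]         # pi(a, s) for s = (14.5, 0.17)

rng = np.random.default_rng(0)
amount = rng.choice(candidate_actions, p=probs)  # 0 mm with probability 0.8
```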
Following the specification of the optimisation problem, details of the learning algorithm need to be specified, which is indicated by "**require**" at the top of Algorithm 1. \(\theta_{0}\) is an initial parameter vector of \(\theta\). Since \(\theta\) typically consists of millions of parameters in deep neural networks, it is customary to initialise \(\theta_{0}\) by drawing random numbers from the standard normal distribution. \(N\) is chosen depending on computational resources. Since it is the total number of training episodes,
the larger \(N\), the better learning results are. \(\alpha\) is heuristically set (e.g., \(10^{-7}\)) in tandem with the neural network architecture described below. \(\pi\) denotes a class of decision rules or a neural network architecture, which determines a set of possible functions (i.e., decision rules) that can be represented by \(\theta\). In other words, different architectures have different numbers of parameters and different ways to combine them, implying different sets of possible functions to represent. Recall that machine learning is equivalent to exploring for a good \(\theta\). Thus, overly simple architectures excessively restrict the set of possible decision rules and limit its performance, whereas overly rich architectures have too many parameters to discover good ones. This trade-off, also known as model selection or hyper-parameter tuning, is one of the most challenging aspects in machine learning applications. An implication is that, similar to a choice of algorithms (Section 2.2), a choice of architectures depends on the nature of a specific problem.
Finally, with all the specifications, the algorithm is implemented by integrating the code of deep RL into the code of APSIM written in C#. The integration can be seamlessly accomplished owing to the open-source APSIM next generation [23] and TensorFlow.NET [26]. As a reference, the code used for the case study (Section 2.4) is available on the website ([https://github.com/ysaikai/RLIR](https://github.com/ysaikai/RLIR)).
### Case study
The effectiveness of the framework is demonstrated for a scenario of spring wheat production in subtropical Australia. Due to the randomness involved in weather patterns and action selections by the decision rule, for reproducibility of the results, the random number generation in the C# program is fixed at the beginning of the simulations using the command Random(0). Below is a summary of key specifications, followed by subsections that contain details.
\begin{tabular}{l l}
Weather data & 1981-2010 for learning and 2011-2020 for testing \\ \hline
State variables & Phenological stage, LAI, \(\{\mathrm{ESW}_{d}\}_{d=1}^{5}\), CuIrrig, CuRain \\ \hline
Actions & 0, 10, 20, 30, 40 (mm of irrigation) \\ \hline
Total episodes & 20,000 \\ \hline
Rewards & Water costs and revenue \\
\end{tabular}
#### 2.4.1 APSIM specifications
Simulations were conducted for irrigated spring wheat production in Goondiwindi, Australia. The irrigation system was assumed to be a centre-pivot system, which is capable of being automated to apply precise amounts of water, and is used by some farmers in the region for water-saving features. Weather data for 1981-2020 was obtained using SILO Patch point data [27], among which 1981-2010 were used for learning and 2011-2020 were used for testing. Apart from the random weather realisations, all the other specifications (e.g., soil characteristics and management practices) were common in every simulation. Regarding soil characteristics, APSoil #906 (Thallon) was used with an initial soil water profile filled to 20% of the plant available water capacity (PAWC). Specifically, the soil texture is clay, and PAWC is determined by the drained upper limit (DUL), crop lower limit (CLL), and bulk density (BD) of the profile.
Variety Sunbri was sown at 30 mm sowing depth and 100 seeds/m\({}^{2}\) density, with the sowing window between 25 April and 1 June. If 1 June was reached, 20 mm of water would be automatically applied. Fertilisation took place at sowing with 350 kg/ha of nitrogen to ensure non-limiting N supply [7] and focus solely on the impact of irrigation scheduling on the profit. Since each APSIM simulation returns one profit figure, the notion of "episode" in RL simply corresponds to "simulation" in APSIM. For reproducibility, the complete .apsimx file is available on the website.
#### 2.4.2 Learning specifications
Nine state variables were considered to provide useful information for irrigation decisions while remaining realistic for regular monitoring in real-world situations:
* crop phenological stage (_Stage_),
* leaf area index (_LAI_),
* extractable soil water (_ESW_) at five different layers: 0-15, 15-30, 30-60, 60-90, and 90-120 (cm),
* cumulative irrigation amount since sowing (_CuIrrig_), and
* cumulative rainfall since sowing (_CuRain_).
_LAI_ and _Stage_ provide information about the crop status. _ESW_s measured at five different layers provide information about the soil water available to the crop. Finally, _CuIrrig_ and _CuRain_ provide information about the past decisions and rainfall events respectively. In terms of observability, while _CuIrrig_ and _CuRain_ are clearly simple to record, the other state variables are also considered observable in many real-world scenarios. In practice, _LAI_ can be estimated by, for instance, remote sensing [28]. In addition, _Stage_ that represents observable physiological development processes can be measured in the field by moderately skilled operators. _ESW_s are also measurable in practice using an appropriately calibrated moisture monitoring device.
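Purely for illustration (the variable names below are ours, not APSIM identifiers), the nine state variables can be assembled into a fixed-order feature vector before being fed to the decision rule:

```python
import numpy as np

def make_state(stage, lai, esw, cu_irrig, cu_rain):
    """Collect the nine state variables in a fixed order.

    `esw` holds extractable soil water (mm) for the five layers
    0-15, 15-30, 30-60, 60-90 and 90-120 cm.
    """
    assert len(esw) == 5
    return np.array([stage, lai, *esw, cu_irrig, cu_rain], dtype=np.float32)
```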
Regarding actions, the candidate irrigation amounts were assumed to be 0, 10, 20, 30, and 40 mm, i.e., \(A_{t}\in\{0,10,20,30,40\}\) for all \(t\in\{0,1,\ldots,T-1\}\). The learning started on the day of sowing and ended on the day when _Stage_ reached 85 ("soft dough") during mid grain filling. Consequently, the learning window was random because the sowing day was influenced by the preceding precipitation, which changed from one season to another, and the plant growth was influenced by not only the season but also the stochastic decision rule, which continued to evolve in the course of learning.
The rewards \(R_{1},R_{2},\ldots,R_{T}\) were defined so that the sum \(R_{1}+R_{2}+\cdots+R_{T}\) was equal to the (partial) profit--crop revenue minus water costs, which was also used in other studies (e.g., Yang
et al. [18]). Formally,
\[R_{t+1}=R(S_{t},A_{t})=\begin{cases}-c\times A_{t}&\text{for}\ \ t\in\{0,\ldots,T-2 \}\\ -c\times A_{t}+p\times\text{Yield}&\text{for}\ \ t=T-1\end{cases},\]
where \(c\) was the unit water cost ($/mm) assumed to be \(c=0.6\) or $60 AUD per megalitre, a low-cost water scenario that would incur energy costs in capturing water from a nearby river and then pumping it through an overhead irrigation system. \(p\) was the output price ($/kg of grains) assumed to be \(p=0.25\) or $250 AUD per tonne, a typical "on-farm" price for wheat grain in the region after the cost of transport to the nearest grain depot was deducted. The yield figure was not available at \(t=T-1\) but realised at the end of each simulation, after which Lines 6-9 in Algorithm 1 took place.
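A minimal Python sketch of this reward specification (using, as above, \(c=0.6\) $/mm and \(p=0.25\) $/kg; the function signature is ours):

```python
C_WATER = 0.6   # $/mm of irrigation, i.e. $60 AUD per megalitre
P_GRAIN = 0.25  # $/kg of grain, i.e. $250 AUD per tonne

def reward(irrigation_mm, grain_yield_kg_ha=None):
    """R(S_t, A_t): water cost at every step; revenue is added only
    once the yield is realised at the end of the simulation."""
    r = -C_WATER * irrigation_mm
    if grain_yield_kg_ha is not None:  # final step only
        r += P_GRAIN * grain_yield_kg_ha
    return r
```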
As indicated in Line 10 of Algorithm 1, the best \(\theta\) was returned as an outcome of learning after completing all \(N\) episodes. To determine which \(\theta\) among \(\{\theta_{1},\theta_{2},\ldots,\theta_{N}\}\) was the one that defines the best decision rule \(\pi^{\dagger}\), the average profit over the past 100 episodes was adopted as the performance metric. It was the moving average of order 100, and the performance of \(\theta_{n}\) for \(n\in\{100,101,\ldots,N\}\) was the average profit over episodes \(n-99,n-98,\ldots,n\) (and, for \(n<100\), the average profit over episodes \(1,2,\ldots,n\)). The reason for the moving average as opposed to a single profit figure was the randomness involved in weather patterns and action prescriptions. The best \(\theta\) was the one whose corresponding decision rule \(\pi_{\theta}\) produced the highest moving-average profit over the total \(N=20,000\) episodes.
The neural network used to model an irrigation decision rule was fully-connected and feed-forward, the most basic network architecture [29]. Specifically, it had five hidden layers in the middle, with 400 nodes in the first and fifth layers, 600 nodes in the second and fourth layers, and 800 nodes in the third layer, as well as a single bias node at each layer (Figure 1). Note that bias nodes in neural networks acted as intercept terms in linear functions. As indicated in the figure, the network took nine state variables as inputs, transformed them through the network, and output five numbers as action probabilities for five candidate irrigation amounts. With this architecture, the length of vector \(\theta\) that specified the decision rule \(\pi_{\theta}\) was equal to the number of edges in the network:
\[1,448,005=(9\times 400)+(400\times 600+600)+(600\times 800+800)+\] \[(800\times 600+600)+(600\times 400+400)+(400\times 5+5).\]
In other words, after each of 20,000 episodes, these 1,448,005 values were gradually changed in search of the best combination throughout the learning. The choice for activation functions also followed one of the most basic configurations--ReLU (rectified linear unit) [30] at nodes in each hidden layer and "softmax" [29] at nodes in the output layer. Finally, as mentioned above, \(\alpha\) was heuristically searched for and set equal to \(10^{-7}\).
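The architecture can be written down compactly; the sketch below uses PyTorch rather than the authors' C#/TensorFlow.NET code, and omits the bias on the first layer so that the parameter count matches the 1,448,005 figure above (an assumption consistent with the stated sum, in which the \(9\times 400\) term carries no bias).

```python
import torch.nn as nn

policy = nn.Sequential(
    nn.Linear(9, 400, bias=False), nn.ReLU(),  # nine state variables in
    nn.Linear(400, 600), nn.ReLU(),
    nn.Linear(600, 800), nn.ReLU(),
    nn.Linear(800, 600), nn.ReLU(),
    nn.Linear(600, 400), nn.ReLU(),
    nn.Linear(400, 5),                         # five candidate amounts out
    nn.Softmax(dim=-1),                        # action probabilities
)

assert sum(p.numel() for p in policy.parameters()) == 1_448_005
```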
#### 2.4.3 Testing and benchmarking
Once the learning was over, the best discovered decision rule was tested independently for each year of 2011-2020. To assess the RL performance, another set of profits were also obtained for benchmarking. This was done by running the algorithm separately in each year of 2011-2020, thus identifying the "best" irrigation schedule for each specific year. It can be expected that the decision rule learned over 1981-2010 performs well on average for these years but may be less optimal in each year of 2011-2020. In contrast, obtaining the above benchmark profits involved a much easier search for optimal decision rules because weather patterns were then fixed (i.e., no longer random), and there was no separation between learning and testing environments. For example, to obtain a benchmark profit for 2020, the algorithm was run only in the environment of 2020 for arbitrarily many episodes, and the maximum profit encountered at any point in the course of search became the benchmark for 2020. Consequently, these benchmark profits were deemed to provide reasonable upper bounds for the corresponding years.
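A sketch of the benchmarking loop for a single fixed year (hypothetical helper names; the weather pattern is held fixed, so the only remaining randomness is in the action prescriptions):

```python
def benchmark_profit(simulate_episode, update, policy, n_episodes=20_000):
    """Best profit encountered while searching in one fixed-year
    environment; used as the benchmark (upper bound) for that year."""
    best = float("-inf")
    for _ in range(n_episodes):
        states, actions, rewards = simulate_episode(policy)
        best = max(best, sum(rewards))
        update(policy, states, actions, rewards)  # one REINFORCE step
    return best
```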
## 3 Results
The decision rule was discovered by experimenting with different irrigation amounts guided by the learning algorithm over \(N=20,000\) episodes, each of which was a random realisation drawn from 30 years of wheat simulations (1981-2010) with the APSIM-Wheat crop model. The learned decision rule, mapping nine state variables (i.e., Stage, LAI, ESW of five soil layers, CuIrrig, and CuRain) to five action probabilities (i.e., probabilities to irrigate 0, 10, 20, 30 and 40 mm), is a complicated function characterised by 1,448,005 estimated neural network parameters (Figure 1). Therefore, instead of trying to describe here mathematical properties of the decision rule, key features of the decision rule are highlighted by presenting summary statistics of the training and testing results as well as particular testing results for representative scenarios. All testing results are available on the website.

Figure 1: The neural network architecture used in the case study. The diagram indicates, for a given time \(t\), nine state variables (i.e., Stage, LAI, ESW of five soil layers, CuIrrig and CuRain) at the leftmost input layer and five candidate actions (i.e., irrigation of 0, 10, 20, 30 and 40mm) at the rightmost output layer. At every decision making, the neural network takes nine inputs from the sensors and returns five probabilities for action prescription. Due to the space restriction, only 4% of the total number of nodes at each of five middle layers are drawn in the figure. A bias node (i.e., intercept term) is also drawn at the top of each of the middle layers.
The learned decision rule was tested 30 times (replication) for each year of 2011-2020. For each day over the management window, the decision rule took nine state variables simulated by APSIM and prescribed five irrigation probabilities. Those irrigation probabilities were unique to each scenario (each day of each replicate of each tested year) as they depended on the dynamically evolving state variables. Hence, while testing was conducted separately in each year of 2011-2020 (i.e., fixed weather pattern), a specific irrigation amount was still randomly chosen every day, implying some randomness in resulting profits. Results from the testing of 30 replicates of each year using the learned decision rule are presented in Tables 1 & 2 and Figures 2(a) & 2(b). Although 30 replicates may seem small, it turned out sufficient because the standard deviation of random profits was quite small relative to the average profit in every year.
The learned decision rule managed to prescribe daily irrigation amounts that led to at least 96% of the corresponding benchmark profit. Overall, the profit increased on average by 1,013 $/ha; i.e., from an average 717 $/ha under rainfed conditions to an average 1,730 $/ha under irrigated conditions. For the testing years (2011-2020), the decision rule recommended applying between 223 and 510 mm of irrigation (398 mm on average). The corresponding yield increased on average by 5,008 kg/ha; i.e., from an average 2,868 kg/ha under rainfed conditions to an average 7,876 kg/ha under irrigated conditions. Figure 2(a) and Figure 2(b) illustrate the comparison between rainfed and irrigated yields and profits respectively over the whole study period (1981-2020). In addition, details of yield, cumulative rainfall, and cumulative irrigation are found in Table 2.
\begin{table}
\begin{tabular}{c|r r r} Year & Benchmark & Test profit & Performance \\ & (\$/ha) & (\$/ha) & (\%) \\ \hline
2011 & 2,139 & 2,092 (12.3) & 97\% \\
2012 & 2,197 & 2,164 (9.4) & 98\% \\
2013 & 1,561 & 1,522 (10.0) & 97\% \\
2014 & 1,972 & 1,932 (10.8) & 97\% \\
2015 & 1,910 & 1,872 (8.9) & 98\% \\
2016 & 1,799 & 1,778 (12.1) & 98\% \\
2017 & 1,431 & 1,374 (11.4) & 96\% \\
2018 & 1,556 & 1,513 (11.2) & 97\% \\
2019 & 1,431 & 1,384 (10.2) & 96\% \\
2020 & 1,711 & 1,673 (11.4) & 97\% \\ \end{tabular}
\end{table}
Table 1: Testing results of the learned decision rule against the benchmark profits for 2011–2020. A test profit for each year is presented as the average of 30 replicated testings, and the corresponding standard deviation is given in the parentheses. Performance is calculated as the ratio of test profit over benchmark profit. All simulations were conducted for cv. Sunbri at Goondiwindi.
Figure 2: Irrigated yields (a) and profits (b) resulting from applying the learned decision rule over the training (1981–2010) and the testing (2011–2020) years together with APSIM simulations in rainfed conditions (i.e., with no irrigation applied). Each yield/profit is the average of 30 replicates. The total irrigation amount applied each year is indicated by a triangle. All simulations were conducted for cv. Sunbri at Goondiwindi.
To gain some insight into how the decision rule makes daily prescriptions, Figure 3(a) & 3(b) illustrate a sequence of the prescribed action probabilities in two of 30 replicates for Year 2020. On each day, the probabilities for non-zero irrigation amounts are vertically stacked and colour-coded, and the realised irrigation that was applied is presented by a triangle. Since rainfall has a direct impact on irrigation decisions, daily rainfall is also included in the figures. Overall, in the first replicate (Figure 3(a)), the positive prescriptions are significantly concentrated between Day 240 and 266. The resulting prescriptions were substantially different in the second replicate (Figure 3(b)) where the positive prescriptions are less concentrated and more evenly spread. However, despite the stark contrast in prescriptions, the resulting yields and total irrigation amounts are surprisingly similar (7,516 kg/ha v. 7,517 kg/ha using 350 mm and 320 mm of water, respectively), indicating the consistent performance of the stochastic decision rule on average.
\begin{table}
\begin{tabular}{r r r r|r r r r} Year & Yield & Rainfall & Irrigation & Year & Yield & Rainfall & Irrigation \\ & (kg/ha) & (mm) & (mm) & & (kg/ha) & (mm) & (mm) \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 2: Yield (kg/ha), cumulative rainfall (mm), and cumulative irrigation (mm) over the whole study period (1981–2020). Yield and cumulative irrigation in each year are the averages over 30 replicates. All simulations were conducted for cv. Sunbri at Goondiwindi.
Again, it is important to keep in mind that an action prescription (i.e., probabilities for irrigation amounts) at any point in time strongly reflected and reacted to the underlying dynamics, which consisted of both the external states (e.g., daily climatic demand and soil status) and previous irrigations. As illustrated in Figure 3(a) & 3(b), once a different irrigation amount was applied, the dynamics changed and led to distinct subsequent irrigation probabilities and realisations of states (e.g., soil water) despite the fact that daily climatic variables were fixed (i.e., the same climatic data used for replicates in the same year).
Table 3 provides complete results for the first replicate used to generate Figure 3(a). The table contains a sequence of states (i.e., daily stage, LAI, etc.) and action probabilities (i.e., irrigation probabilities) prescribed by the decision rule. In each row, the decision rule took the values of nine state variables in Columns 2-10 and returned five probability numbers found in the \(p(0)\)-\(p(40)\) columns. Then, out of 0, 10, 20, 30, and 40 mm of irrigation, a single amount was randomly chosen according to the probabilities, which is found in the _Action_ column. Aligned with the common intuition, the probability for no irrigation \(p(0)\) generally jumped upwards immediately after any positive irrigation, while all the probabilities dynamically changed throughout the season adapting to the dynamics of state variables.

Figure 3: Prescribed daily irrigation probabilities from sowing (Day 122) to stage 85 (Day 285) in two of 30 replicates for Year 2020. The first replicate is presented in (a) and the second one in (b). Dots (\(\bullet\)) and triangles (\(\blacktriangledown\)) represent daily rainfall and realised irrigation amounts respectively.
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c|c c c c c}
\multicolumn{10}{c|}{State variables} & & \multicolumn{5}{c}{Action probabilities} \\
Day & Stage & LAI & ESW1 & ESW2 & ESW3 & ESW4 & ESW5 & CuIrrig & CuRain & Action & p(0) & p(10) & p(20) & p(30) & p(40) \\
 & & & (mm) & (mm) & (mm) & (mm) & (mm) & (mm) & (mm) & (mm) & & & & & \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
## 4 Discussion
### The case study
The results from the case study suggest an alternative approach to irrigation scheduling in the region. For an irrigated wheat crop at Goondiwindi, growers typically make irrigation decisions using simple rules based on a phenological stage or soil moisture levels, which are monitored using capacitance probes installed 25 to 30 days after plant emergence. Even if more information is available, it will be virtually impossible for growers to process all the information at once and make sensible decisions every time. The case study demonstrated that, by algorithmically learning decision rules in the form of mathematical functions, it would be possible to make use of all the relevant information.
Comparing the simulated yields and irrigated amounts with the corresponding numbers in the existing studies, the learning environment constructed for the case study seems to be a reasonable one. Specifically, in this region, the maximum yield is estimated to range between 6.8 and 8.7 t/ha, utilising up to 550 mm of water including initial soil water, rainfall and irrigation [7, 31]. The simulated yields varied between 7.0 and 9.5 t/ha for the learning years (1981-2010) and between 6.7 and 9.7 t/ha for the testing years (2011-2020). Over the whole studied period (1981-2020), the initial soil water was set to 40 mm, the average rainfall was 195 mm ranging between 39 and 537 mm, and the average amount of irrigated water was 350 mm ranging between 32 and 545 mm.
The following are additional comments on the specifications and results of the case study. The set of candidate actions was chosen to be \(\{0,10,20,30,40\}\) mm of irrigation because 40 mm per day was a reasonable maximum amount for the assumed use of overhead irrigators. However, greater irrigation amounts can be applied, as done by some growers with other irrigation systems in the region. In terms of computational practicality, the total number of training episodes was set equal to \(N=20,000\) based on the available computational capacity. While it turned out sufficient for this particular scenario, different values may be adapted to other problems. Finally, for reproducibility of the case study, the random number generation in the C# program was fixed at the beginning of learning by choosing number 0 in the command Random(0). Use of other numbers could lead to distinct learning results, but learned decision rules would likely perform equally well.
### Assumptions and limitations
For successful applications of the framework, it is crucial to construct learning environments of high fidelity; that is, creating environments that capture salient features of the corresponding real-world situations into which the learned decision rules are intended to be deployed. In the case study, the testing/deployment environment was the same as the learning environment. As a result, it was relatively easy to achieve high performance of the learned decision rule (i.e., achieving more than 96% of the benchmark profit). In contrast, when deploying decision rules discovered by simulation into real-world systems, there are inevitable gaps between simulated and real-world environments, which will likely result in lower performance than those in the case study. Based on the existing knowledge about intended systems (e.g., crops, soil properties, and management practices), researchers need to narrow the gaps to claim the practical usefulness of discovered decision rules. The task is considered reasonable because of the existence of numerous studies that verify the fidelity of APSIM in a wide range of production systems, especially for wheat in Australia [32, 33, 34].
Another important yet tacit assumption is that the randomness in testing/deployment environments is reasonably represented in learning environments. Recall the problem formulation (2),
where the objective function to maximise is the expectation with respect to action selection and state distribution. When the learned decision rules are deployed without any modification, the randomness in the former is identical between learning and deployment environments. However, the randomness in the latter may differ. In the case study, the weather pattern at each training episode was randomly chosen from 30 possible patterns with probability 1/30. An implicit assumption was that the weather pattern in each of 10 testing years (2011-2020) was a realisation of the weather distribution collectively created by 30 patterns (1981-2010). Clearly, none of 10 patterns in the testing was exactly the same as one of 30 patterns in the learning. But, loosely speaking, each pattern in the testing was "statistically similar" to the created distribution. In practice, climate change may invalidate the assumption, necessitating separate creation of weather patterns for learning environments (e.g., fit time-series models to weather data and sample from the models). In addition, if other randomness is introduced to learning environments, researchers must ensure that it is also present in deployment environments.
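In code, the training-time weather randomness described here amounts to a uniform draw over the 30 historical years, with a fitted weather generator as the suggested alternative under climate change (the generator below is a hypothetical placeholder):

```python
import random

TRAIN_YEARS = list(range(1981, 2011))  # 30 historical weather patterns

def sample_weather(generator=None):
    """Pick a weather pattern for one training episode."""
    if generator is None:
        return random.choice(TRAIN_YEARS)  # probability 1/30 each
    return generator.sample()              # e.g. a fitted time-series model
```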
Lastly, there is no consideration of spatial variations in the current framework; i.e., a single production environment is assumed for both learning and deployment. This stems from the current limitation of crop simulators. To obtain a decision rule capable of handling a spatial variation, researchers may create multiple environments according to the spatial variation, learn multiple decision rules through independent learning processes, and select an appropriate one based on spatial characteristics in the deployment environment each time of decision making.
### Future directions
The optimisation problem in the case study was formulated as a simple one--the profit maximisation with the unlimited water availability. The simple formulation is deliberate in order to highlight the key implementation aspects of the deep RL framework, which is significantly more sophisticated than standard optimisation techniques. Since the proposed framework is general and flexible, it can be applied to a variety of real-world problems, which are often more complex. For example, a grower may manage multiple crops, instead of a single crop, and want to maximise the farm profit by optimising the farm-level water use on the whole. In this case, multiple learning environments can be constructed and run simultaneously. In another scenario, the total amount of water available to a grower may be constrained in each year or even across multiple years. In this case, the remaining water budget at any time step can be included as another state variable. Moreover, the framework can be used to optimise other management practices such as fertilisation together with irrigation. In this scenario, successfully learned decision rules can take into account the interaction between different management practices (e.g., nitrogen application and irrigation). In most scenarios, the difference will be in the specification of rewards and state variables.
Regarding learning algorithms (Section 2.2), the framework assumes the classical REINFORCE due to its conceptual simplicity, which helps to lower the technical barrier and allow applied researchers to focus on their applications. REINFORCE worked and achieved more than 96% of the maximum profit in the case study (Table 1) with nine state variables, five candidate actions, and the deterministic APSIM simulations (aside from the random weather pattern). If more complex problems are to be tackled, it will likely be necessary to use more advanced algorithms such as actor-critic methods [13] or even state-of-the-art algorithms. Nevertheless, the main difference will be in the implementation of algorithms, while the rest of the framework will remain the same.
Use of weather forecast as a state variable is possible and simple to include in the framework, provided that some numerical values are available (e.g., expected amounts of rainfall over the following days or weeks). A key factor in deciding whether to include it is the quality of forecast. For example, when a significant amount of rainfall is reliably predicted in the coming days, a good
decision rule will prescribe a high probability for no irrigation. By including reliable rain forecast in the framework, such decision rules can be automatically learned. However, if the forecast is unreliable, it may mislead the learning process and result in a poor decision rule.
Computational costs of learning depend on specifications of simulated environments, problem formulations, and learning algorithms. For example, if an optimisation problem is quite complex, requiring many state variables and an advanced learning algorithm, it will likely necessitate a large number of training episodes and long running time. As a reference, the learning over 20,000 episodes in the case study can be completed within 24 hours on a modern desktop computer.
Finally, to make the proposed framework truly practical and accelerate the research on RL in agriculture, it is vitally important to develop user-friendly software that dramatically reduces manual coding and facilitates implementation of the framework. In the current state of the software development, specific implementation requires some coding skills in the C# programming language for use of TensorFlow.NET and its integration into APSIM. Since the industry is rapidly increasing its use of digital technologies, in part with the help of high-tech development and advisory companies, collaboration across academia, public sectors, and the industry is possible. All the code used for the case study is available on the website and serves as a template that can be adapted to diverse practical problems.
## 5 Conclusion
This paper proposes a deep reinforcement learning (RL) framework for irrigation optimisation, which comprises a formal mathematical formulation of optimisation problems, a solution algorithm, and a procedure for constructing learning environments and implementing the algorithm. The effectiveness of the framework was demonstrated in the case study where the profit in wheat production was maximised by learning a near optimal irrigation schedule using deep RL. Specifically, the learning environment (1981-2010) was built using the APSIM-Wheat crop model. The learned decision rule was examined in the testing environment (2011-2020) and achieved more than 96% of the maximum profit in any of 10 testing scenarios. The proposed framework is flexible and can be used to address many complex problems (e.g., maximising water use efficiency or yield with a water budget). Since there remain technical barriers for some users (i.e., mathematical formulation and implementation using C#), it is crucial to develop user-friendly software to facilitate applications of the framework. It is the authors' hope that this paper will spark wide collaboration across academia, public sectors and the industry to advance software development, help many practitioners solve their management optimisation problems, and collectively move towards economically and environmentally sustainable agriculture.
|
2307.04971 | Change of variable and discrete Hardy inequality | For absolutely convergent series we state explicitly a one-sided summation
estimate that can be viewed as the discrete analogue of the change of variable
formula on the half line. This estimate is implicit in Pascal Lef\`evre's
recent elegant proof of the classical discrete Hardy inequality. Here we remove
a superfluous irrationality condition therein and point out the change of
variable character of his approach. This leads to a simpler, shorter and
\textit{bona fide} Ingham type proof of the discrete Hardy inequality, and also
provides the optimal constant. | Yi C. Huang | 2023-07-11T02:22:56Z | http://arxiv.org/abs/2307.04971v1 | # Change of variable and discrete Hardy inequality
###### Abstract.
For absolutely convergent series we state explicitly a one-sided summation estimate that can be viewed as the discrete analogue of the change of variable formula on the half line. This estimate is implicit in Pascal Lefevre's recent elegant proof of the classical discrete Hardy inequality. Here we remove a superfluous irrationality condition therein and point out the change of variable character of his approach. This leads to a simpler, shorter and _bona fide_ Ingham type proof of the discrete Hardy inequality, and also provides the optimal constant.
Key words and phrases:Absolutely convergent series, change of variable, Cesaro means, discrete Hardy inequality, optimal constant, counting function, Abel transform 2010 Mathematics Subject Classification: Primary 40G05, 26D15 Research of the author is partially supported by the National NSF grant of China (no. 11801274). The author would also like to thank Professor Pascal Lefevre (Artois) for helpful communications. \({}^{1}\)See also [13] for a subsequent extension to weighted discrete Hardy's inequalities.
from the Machihara-Ozawa-Wadade theory [14, 15] on Hardy and Rellich inequalities in the framework of equalities. For an abstract Machihara-Ozawa-Wadade theory and a systematic treatment of Hardy inequalities on homogeneous groups, see Ruzhansky and Suragan [17]. For historical aspects and several other proofs of (1.1) and its continuous version, see Kufner, Maligranda, and Persson [18]. For more recent developments about Hardy inequalities on Euclidean domains with nonempty boundary, see for example Avkhadiev [1, 21, 22], Goel, Pinchover, and Psaradakis [19] and the references therein.
Back to Lefevre's arguments for (1.1), the following change of variable _estimate_ for convergent series is implicit. We single it out for its potential independent interest.
**Fact 1.1**.: Let \(\mathsf{s}>0\) and let \(\{|\mathsf{a}_{n}|\}\) be non-increasing with \(\|\{\mathsf{a}_{n}\}\|_{1}<+\infty\). Then
\[\mathsf{s}\sum_{n\geq 1}|\mathsf{a}_{[\mathsf{ns}]}|\leq\sum_{n\geq 0}|\mathsf{a}_{n}|. \tag{1.2}\]
Here \([\mathsf{x}]\) denotes the largest integer not exceeding \(\mathsf{x}\).
More precisely, that (1.2) holds with \(\mathsf{s}\in[0,1]\backslash\mathbb{Q}\) was used in [13] to prove the discrete Hardy inequality (1.1). In this note we point out that this additional "\(\backslash\mathbb{Q}\)" condition is superfluous. Note that (1.2) also holds trivially when \(\mathsf{s}\in\{0,1\}\).
It is plain that (1.2) is similar to the change of variable _formula_ in the continuum
\[\mathsf{s}\int_{0}^{+\infty}\mathsf{f}(\mathsf{sx})\,\mathsf{dx}=\int_{0}^{+\infty}\mathsf{f}(\mathsf{x})\,\mathsf{dx},\ \forall\ \mathsf{s}>0. \tag{1.3}\]
Under the Dirichlet boundary condition \(\mathsf{a}_{0}=0\) (see [10] for the interpretation), the equality in (1.2) is attained when no change of variable happens, i.e., \(\mathsf{s}=1\). Moreover, (1.2) becomes especially clear-cut for integer \(\mathsf{s}\) and non-increasing \(\{|\mathsf{a}_{n}|\}\).
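A minimal numerical sanity check of (1.2) for a concrete non-increasing sequence can be done in a few lines (an illustration added here, not part of the argument; truncation at \(\mathsf{N}\) terms keeps the sequence non-increasing, so the estimate applies verbatim to the truncated sequence):

```python
import numpy as np

# Sanity check of (1.2) for the non-increasing sequence a_n = 1/(n+1)^2,
# truncated at N terms.
N = 10**6
a = 1.0 / (np.arange(N, dtype=float) + 1.0)**2
rhs = a.sum()                                # sum_{n>=0} |a_n|, about pi^2/6

for s in (0.3, np.sqrt(2)/2, 1.0, 2.5):      # rational and irrational s alike
    n = np.arange(1, int(N / s) + 1)
    idx = np.floor(n * s).astype(np.int64)
    lhs = s * a[idx[idx < N]].sum()
    print(f"s = {s:.4f}:  s*sum = {lhs:.6f}  <=  {rhs:.6f}")
```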
## 2. Proofs of Fact 1.1
It is a crucial observation that, for the application to (1.1), it suffices to work with the non-increasing rearrangement of \(\{|\mathsf{a}_{n}|\}\). Denote by \(\mathbb{N}\) the set of positive integers and by \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\).
_First proof via counting function (after Lefevre)._ For \(\mathsf{s}>0\) and \(\mathsf{m}\in\mathbb{N}_{0}\), introduce
\[\mathsf{I}_{\mathsf{m}}(\mathsf{s})=\{\mathsf{n}\in\mathbb{N}:\ [\mathsf{ns}]= \mathsf{m}\}=[\mathsf{m}/\mathsf{s},(\mathsf{m}+1)/\mathsf{s})\cap\mathbb{N}\]
and
\[\mathsf{J}_{\mathsf{m}}(\mathsf{s})=[0,\mathsf{m}/\mathsf{s})\cap\mathbb{N}.\]
Denote by \(\sharp\) the counting function. Now, following the arguments in [13],
\[\sum_{n\geq 1}|\mathsf{a}_{[\mathsf{ns}]}| =\sum_{m\geq 0}\sum_{n\in\mathsf{I}_{\mathsf{m}}(\mathsf{s})}|\mathsf{a}_{[\mathsf{ns}]}|\] \[=\sum_{m\geq 0}|\mathsf{a}_{\mathsf{m}}|\,\sharp\,\mathsf{I}_{\mathsf{m}}(\mathsf{s})\] \[=\sum_{m\geq 0}|\mathsf{a}_{\mathsf{m}}|\,(\sharp\,\mathsf{J}_{\mathsf{m}+1}(\mathsf{s})-\sharp\,\mathsf{J}_{\mathsf{m}}(\mathsf{s}))\,.\]
Using Abel transform, \(\sharp\,\mathsf{J}_{\mathsf{m}}(\mathsf{s})<\mathsf{m}/\mathsf{s}\) for \(\mathsf{m}\geq 1\) and monotonicity of \(\{|\mathsf{a}_{n}|\}\), one has
\[\sum_{m\geq 0}|\mathsf{a}_{\mathsf{m}}|\,(\sharp\,\mathsf{J}_{ \mathsf{m}+1}(\mathsf{s})-\sharp\,\mathsf{J}_{\mathsf{m}}(\mathsf{s})) =\sum_{m\geq 0}\sharp\,\mathsf{J}_{\mathsf{m}+1}(\mathsf{s})(| \mathsf{a}_{\mathsf{m}}|-|\mathsf{a}_{\mathsf{m}+1}|)\] \[<\mathsf{s}^{-1}\sum_{m\geq 0}\big{(}\mathsf{m}+1\big{)}(| \mathsf{a}_{\mathsf{m}}|-|\mathsf{a}_{\mathsf{m}+1}|).\]
The estimate (1.2) is thus proved after another use of Abel transform.
**Remark 2.1**.: In this proof we by no means claim any originality over Lefevre's arguments. What we do is to _not_ compute \(\sharp\,\mathsf{I}_{\mathsf{m}}(\mathsf{s})\), and just leave it implicit.
_Second proof via change of variable formula._ As observed above, we can work with non-increasing \(\{|\mathsf{a}_{\mathsf{n}}|\}\). Then (1.2) simply follows from (1.3) applied to \(\mathsf{f}(\mathsf{x})=|\mathsf{a}_{[\mathsf{x}]}|\): indeed, \(\int_{0}^{+\infty}\mathsf{f}(\mathsf{x})\,\mathsf{dx}=\sum_{n\geq 0}|\mathsf{a}_{n}|\), while by monotonicity \(\sum_{n\geq 1}\mathsf{f}(\mathsf{ns})\leq\int_{0}^{+\infty}\mathsf{f}(\mathsf{sx})\,\mathsf{dx}\).
## 3. Change of variable and discrete Hardy inequality
To end our discussions, let us recall Lefevre's elegant proof of (1.1). By an Ingham type representation \(\mathsf{A}_{\mathsf{n}}=\int_{0}^{1}\mathsf{a}_{[(\mathsf{n}+1)\mathsf{s}]}\mathsf{ds}\) and the triangle inequality, one has
\[\left(\sum_{n=0}^{\mathsf{N}}|\mathsf{A}_{\mathsf{n}}|^{\mathsf{p}}\right)^{1/\mathsf{p}}\leq\int_{0}^{1}\left(\sum_{n\geq 1}|\mathsf{a}_{[\mathsf{ns}]}|^{\mathsf{p}}\right)^{1/\mathsf{p}}\mathsf{ds}. \tag{3.1}\]
Inserting the _pointwise_ (in \(\mathsf{s}\)) inequality (1.2), applied to the sequence \(\{|\mathsf{a}_{\mathsf{n}}|^{\mathsf{p}}\}\), into (3.1), computing the resulting integral \(\int_{0}^{1}\mathsf{s}^{-1/\mathsf{p}}\,\mathsf{ds}=\mathsf{p}/(\mathsf{p}-1)\) and letting \(\mathsf{N}\to+\infty\) then complete the proof of (1.1).
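A quick numerical check of (1.1) with the optimal constant \((\mathsf{p}/(\mathsf{p}-1))^{\mathsf{p}}\) (a minimal sketch added for illustration; the test sequence below is an arbitrary absolutely \(\mathsf{p}\)-summable example):

```python
import numpy as np

# Check of (1.1) with the optimal constant (p/(p-1))^p.
p = 2.0
rng = np.random.default_rng(0)
a = np.abs(rng.standard_normal(20000)) / (np.arange(20000) + 1.0)
A = np.cumsum(a) / (np.arange(20000) + 1.0)   # Cesaro means A_n
print((A**p).sum() / (a**p).sum(), "<=", (p / (p - 1))**p)
```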
In conclusion, although Lefevre wrote in Remark (2) of [10] that "\(\cdots\) there is no change of variable in our argument", our interpretation in this note clarifies that the key ingredient of his approach is indeed a change of variable in the discrete setting. Furthermore, our proof of the discrete Hardy inequality (1.1) via the second approach to (1.2) is simpler and shorter than the one given in [10].
### Compliance with ethical standards
#### Conflict of interest
The author declares that there is no conflict of interest.
#### Availability of data and material
Not applicable.
|
2303.16882 | On equilibrium states of fluid membranes | The paper studies the equilibrium configurations of inextensible elastic
membranes exhibiting lateral fluidity. Using a continuum description of the
membrane's motions based on the surface Navier--Stokes equations with bending
forces, the paper derives differential equations governing the mechanical
equilibrium. The equilibrium conditions are found to be independent of lateral
viscosity and relate tension, pressure, and tangential velocity of the fluid.
These conditions suggest that either the lateral fluid motion ceases or
non-decaying stationary flow of mass can only be supported by surfaces with
Killing vector fields, such as axisymmetric shapes. A shape equation is derived
that extends the classical Helfrich model with an area constraint to membranes
of non-negligible mass. Furthermore, the paper suggests a simple numerical
method to compute solutions of the shape equation. Numerical experiments
conducted reveal a diverse family of equilibrium configurations. The stability
of equilibrium states involving lateral flow of mass remains an unresolved
question. | Maxim A. Olshanskii | 2023-03-29T17:51:48Z | http://arxiv.org/abs/2303.16882v2 | # On equilibrium states of fluid membranes
###### Abstract
The paper studies the equilibrium configurations of inextensible elastic membranes exhibiting lateral fluidity. Using a continuum description of the membrane's motions based on the surface Navier-Stokes equations with bending forces, the paper derives differential equations governing the mechanical equilibrium. The equilibrium conditions are found to be independent of lateral viscosity and relate tension, pressure, and tangential velocity of the fluid. These conditions suggest that either the lateral fluid motion ceases or non-decaying stationary flow of mass can only be supported by surfaces with Killing vector fields, such as axisymmetric shapes. A shape equation is derived that extends the classical Helfrich model with an area constraint to membranes of non-negligible mass. Furthermore, the paper suggests a simple numerical method to compute solutions of the shape equation. Numerical experiments conducted reveal a diverse family of equilibrium configurations. The stability of equilibrium states involving lateral flow of mass remains an unresolved question.
Footnote †: preprint: accepted to FoF
## I Introduction
Motivated by applications in cell biology, there has been extensive research on studying equilibrium configurations of fluid membranes, their stability, and transformations [1; 2; 3; 4; 5; 6; 7]. A now-classical energetic approach to describe the statics and dynamics of fluid membranes was pioneered by Canham [8] and Helfrich [9]. According to the Canham-Helfrich theory, an equilibrium shape of a membrane minimizes a curvature energy functional subject to possible constraints.
More complex models account for in-plane fluidity exhibited by the membranes. In a continuum-based modeling approach, the membrane is represented by a _material surface_ that supports a density flow and may deform driven by both elastic and hydrodynamic forces. The development and analysis of continuum-based models and their application to numerical simulation of fluid membrane dynamics is an area of active research [10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In Ref. [20], such models of elastic fluid thin sheets were given the name _fluid deformable surfaces_.
A system of equations governing the motion of fluid deformable surfaces consists of the surface Navier-Stokes equations coupled with an elasticity model and posed on a time-dependent surface, while the surface evolution is defined by the hydrodynamic part of the solution; see Sec. II for further details. The governing equations represent the conservation of momentum so that a steady state of the system is a mechanical equilibrium. Existing research on fluid deformable surfaces mainly addresses the system evolution, with only a few papers addressing the problem numerically [12; 17; 19; 20; 21], among which Refs. [17; 19] considered relaxation to equilibrium. In the present study, we are interested in equilibrium configurations.
Assuming a steady state solution to the surface Navier-Stokes equations with vanishing external lateral forces, we deduce three conditions for such solutions to exist: the radial motions are zero, the lateral motions correspond to a Killing field on manifold, and the third condition requires that a specific surface pressure (defined in (12)) is constant. The first two conditions imply that only two scenarios of equilibrium are possible: Either the fluid motion ceases and the problem reduces to the well studied one of finding a shape of minimal curvature energy under the area constraint, or an additional geometrical constraint arises for the equilibrium shape to support a non-decaying lateral fluid flow. The constraint is satisfied by axisymmetric shapes.
To explore the second scenario, we utilize the third condition and a specific elasticity model (the simplest Helfrich model in this paper) to derive the _shape equation_, which is an equation satisfied by geometrical quantities of the system in equilibrium. We introduce a numerical approach to solve the shape equation, which takes advantage of the axial symmetry of the unknown surface. We solve the shape equation numerically to obtain branches of shapes with a fixed surface area and varying interior volume for several sets of physical parameters. In particular, we find steady states of the surface fluid equations with vanishing elastic forces, which correspond to equilibrium configurations of 'pure fluidic' membranes. The stability of the discovered equilibrium states remains an open question.
The remainder of the paper is organized into four sections. Section II reviews the continuum model of a fluid-elastic membrane and derives the equilibrium conditions and the shape equation. Section III introduces a numerical solver. The computed shapes are discussed in Section IV. Section V provides a few concluding remarks.
For the model setup and analysis, this paper employs elementary tangential calculus [14; 22] in the embedding space \(\mathbb{R}^{3}\), thus avoiding any calculations in local surface coordinates.
## II A deforming fluid-elastic membrane
Following the continuum-mechanical description, we represent the membrane as a smooth closed time-dependent surface \(\Gamma(t)\subset\mathbb{R}^{3}\) with a density distribution \(\rho(x,t)\)[24; 23]. Let \(\mathbf{u}\) be a smooth velocity field of the density flow on \(\Gamma\), i.e., \(\mathbf{u}(x,t)\) is the velocity of the material point \(x\in\Gamma(t)\). In general, \(\mathbf{u}\) is not necessarily tangential to \(\Gamma\), and its normal component defines the geometric evolution of \(\Gamma\).
To formulate equations governing the motion of the membrane, we need a few surface quantities and tangential differential operators. Let \(\mathbf{n}\) be an outward-pointing normal vector on \(\Gamma\), and let \(\mathbf{P}=\mathbf{I}-\mathbf{n}\mathbf{n}^{T}\) denote the normal projector. The surface gradient \(\nabla_{\Gamma}p\) of a scalar function \(p:\Gamma\to\mathbb{R}\) can be defined as \(\nabla_{\Gamma}p=\mathbf{P}\nabla p^{e}\), where \(p^{e}\) is an arbitrary smooth extension of \(p\) in a neighborhood of \(\Gamma\). Also, \(\nabla_{\Gamma}\mathbf{u}=\mathbf{P}(\nabla_{\Gamma}u_{1},\nabla_{\Gamma}u_{2},\nabla_{\Gamma}u_{3})^{T}\) is the surface gradient of a vector field \(\mathbf{u}=(u_{1},u_{2},u_{3})^{T}:\Gamma\to\mathbb{R}^{3}\); in other words, \(\nabla_{\Gamma}\mathbf{u}\) is a covariant gradient if \(\mathbf{u}\) is tangential to \(\Gamma\). Further, \(\mathrm{div}_{\Gamma}\mathbf{u}=\mathrm{tr}(\nabla_{\Gamma}\mathbf{u})\) is the surface divergence, and \(\Delta_{\Gamma}p=\,\mathrm{div}_{\Gamma}\nabla_{\Gamma}p\) is the Laplace-Beltrami operator. For a tensor field \(\mathbf{A}=[\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}]:\Gamma\to\mathbb{R}^{3\times 3}\), the surface divergence \(\,\mathrm{div}_{\Gamma}\mathbf{A}\) is defined row-wise.
We assume an inextensible viscous membrane and use the Boussinesq-Scriven constitutive relation for the surface stress tensor. Conservation of mass and linear momentum for an arbitrary material area \(\gamma(t)\subset\Gamma(t)\) leads to the evolving surface Navier-Stokes equations for the viscous thin material layer [14]:
\[\left\{\begin{aligned} \rho\dot{\mathbf{u}}&=-\nabla_{\Gamma}p+2\mu\,\mathrm{div}_{\Gamma}(\mathbf{D}_{\Gamma}(\mathbf{u}))+\mathbf{b}+p\kappa\mathbf{n},\\ \mathrm{div}_{\Gamma}\mathbf{u}&=0,\\ \dot{\rho}&=0,\end{aligned}\right.\qquad\text{on }\Gamma(t), \tag{1}\]

where \(\mathbf{D}_{\Gamma}(\mathbf{u})=\frac{1}{2}\mathbf{P}\big{(}\nabla_{\Gamma}\mathbf{u}+(\nabla_{\Gamma}\mathbf{u})^{T}\big{)}\mathbf{P}\) is the surface rate-of-strain tensor, \(p\) is the surface pressure (the Lagrange multiplier for the inextensibility constraint), \(\kappa\) is the sum of the principal curvatures, and \(\mathbf{b}\) collects the external and elastic forces. The surface itself is transported by the normal component of the velocity,

\[V_{\Gamma}=\mathbf{u}\cdot\mathbf{n}\quad\text{on }\Gamma(t), \tag{2}\]

where \(V_{\Gamma}\) denotes the normal velocity of \(\Gamma(t)\). The force term \(\mathbf{b}\) comprises the external force, modeled as a constant osmotic pressure acting in the normal direction,

\[\mathbf{b}^{\mathrm{ext}}=p^{\mathrm{ext}}\mathbf{n}, \tag{4}\]

and the elastic bending force derived from the Willmore energy

\[H=\frac{c_{\kappa}}{2}\int_{\Gamma}\kappa^{2}\,ds, \tag{5}\]

whose first variation with respect to a deformation field \(\mathbf{v}=\mathbf{v}_{T}+v_{N}\mathbf{n}\) is

\[\frac{dH}{d\Gamma}\bigg{|}_{\mathbf{v}}=-c_{\kappa}\int_{\Gamma}\Big{(}\Delta_{\Gamma}\kappa+\frac{1}{2}\kappa^{3}-2K\kappa\Big{)}v_{N}\,ds, \tag{6}\]

with \(K\) denoting the Gauss curvature.
The result in (6) is well-known due to Willmore [25]. For completeness, we give in Appendix a short proof using elementary tangential calculus. From (6) it is clear that the release of the bending energy produces a force in the normal direction to the surface:
\[\mathbf{b}^{\mathrm{elst}}=c_{\kappa}(\Delta_{\Gamma}\kappa+\frac{1}{2}\kappa^{ 3}-2K\kappa)\mathbf{n}. \tag{7}\]
**Remark 1**: The fluid system (1)-(2) with a generic force \(\mathbf{b}\) was independently derived in Ref. [14] from balance laws of continuum mechanics and in Ref. [26] from energetic principles (with \(\mathbf{b}=0\)). Equations for moving fluid membrane were also derived in local (curvilinear) coordinates [15; 10]. If translated into the language of tangential calculus, the equations from Refs. [15; 10] were shown [27] to also yield (1)-(2). A relation between different formulations found in the literature was also discussed in Ref. [28].
### Conditions of equilibrium
We are interested in the equilibrium state solutions to (1)-(2) with external and bending forces (4), (7). The _geometric_ equilibrium requires \(\Gamma(t)\) to be time-independent (in the sense of a shape). This and (2) immediately imply the _first equilibrium condition_:
\[\mathbf{u}\cdot\mathbf{n}=0. \tag{8}\]
**Case \(\mathbf{u}=0\).** Let us start by considering the case of no flow, \(\mathbf{u}=0\). The momentum equation in (1) yields \(\nabla_{\Gamma}p=\mathbf{b}^{\mathrm{elst}}+(p^{\mathrm{ext}}+p\kappa)\mathbf{n}\). The left hand side of this identity is tangential to \(\Gamma\), while the right hand side is orthogonal to it, and so both are zero, yielding \(p=\mathrm{const}\) (the implication of \(\nabla_{\Gamma}p=0\) along \(\Gamma\)). Denote this constant surface pressure by \(p_{0}\). Then \(0=\mathbf{b}^{\mathrm{elst}}+(p^{\mathrm{ext}}+p_{0}\kappa)\mathbf{n}\) and (7) imply
\[c_{\kappa}(\Delta_{\Gamma}\kappa+\frac{1}{2}\kappa^{3}-2K\kappa)+p_{0}\kappa+ p^{\mathrm{ext}}=0. \tag{9}\]
Equation (9) was also derived in Ref. [29] as the optimality condition for finding the minimum of Willmore energy (5) subject to conserved surface area and enclosed volume, with constants \(p_{0}\) and \(p^{\mathrm{ext}}\) playing the role of Lagrange multipliers for the area and volume constraints, respectively. This constrained minimization problem has been extensively studied in the literature in the context of finding the shapes of vesicles [2; 3; 4; 6; 30].
We conclude that for the static equilibrium (\(\mathbf{u}=0\)) the system (1)-(2), (4), (7) coincides with a well-studied problem of Willmore energy constrained minimization. The membrane fluidity does not play a role in this scenario. We now consider the case of dynamic equilibrium.
**Case \(\mathbf{u}\neq 0\).** The geometric equilibrium condition still implies \(\mathbf{u}\cdot\mathbf{n}=0\) (only lateral motions are allowed). To deduce other conditions, we consider the tangential part of the momentum equation (1). This can be done by applying the orthogonal projection \(\mathbf{P}\) to the first equation in (1) and noting that \(\mathbf{P}\mathbf{n}=0\) and hence \(\mathbf{P}\mathbf{b}=0\). We get
\[\rho\mathbf{P}\mathbf{\dot{u}}=-\nabla_{\Gamma}p+2\mu\mathbf{P}\,\mathrm{div} _{\Gamma}(\mathbf{D}_{\Gamma}(\mathbf{u})).\]
For a tangential field \(\mathbf{u}\) we have \(\mathbf{P}\mathbf{u}=\mathbf{u}\) and so \(\mathbf{P}((\nabla\mathbf{u})\mathbf{u})=\mathbf{P}(\nabla\mathbf{u})\mathbf{u}=(\nabla_{\Gamma}\mathbf{u})\mathbf{u}\). For a geometrically stationary surface we also have \(\frac{\partial\mathbf{P}}{\partial t}=0\). These identities imply the following representation of the projected material derivative:
\[\mathbf{P}\mathbf{\dot{u}}=\mathbf{P}\Big{(}\frac{\partial\mathbf{u}}{ \partial t}+(\nabla\mathbf{u})\mathbf{u}\Big{)}=\frac{\partial\mathbf{u}}{ \partial t}+(\nabla_{\Gamma}\mathbf{u})\mathbf{u}=\frac{\partial\mathbf{u}}{ \partial t}+(\mathbf{u}\cdot\nabla_{\Gamma})\mathbf{u}.\]
We therefore get the following system satisfied by \(\mathbf{u}\), such that \(\mathbf{u}\cdot\mathbf{n}=0\), and \(p\):
\[\left\{\begin{aligned} \rho\left(\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla_{\Gamma})\mathbf{u}\right)&=-\nabla_{\Gamma}p+2\mu\mathbf{P}\,\mathrm{div}_{\Gamma}\mathbf{D}_{\Gamma}(\mathbf{u}),\\ \mathrm{div}_{\Gamma}\mathbf{u}&=0,\end{aligned}\right. \tag{10}\]
on a geometrically stationary \(\Gamma\). The system (10) is the Navier-Stokes equations on a Riemannian manifold [31].
Multiplying the first equation in (10) with \(\mathbf{u}\), integrating over \(\Gamma\) and integrating by parts brings us to the energy equality
\[\frac{\rho}{2}\frac{d}{dt}\int_{\Gamma}|\mathbf{u}|^{2}\,ds=-2\mu\int_{\Gamma }|\mathbf{D}_{\Gamma}(\mathbf{u})|^{2}ds.\]
We see that the kinetic energy of the lateral flow decays for all motions with \(\mathbf{D}_{\Gamma}(\mathbf{u})\neq 0\). Therefore, the equilibrium flow must satisfy the _second equilibrium condition_:
\[\mathbf{D}_{\Gamma}(\mathbf{u})=0. \tag{11}\]
'Tangentially rigid' motions satisfying (8) and (11) correspond to Killing vector fields on manifolds [32; 33]. A non-zero Killing field generates a continuous one-parameter group of transformations \(\Gamma\to\Gamma\) which are isometries, and the ability of \(\Gamma\) to support it is a _geometric_ constraint. In particular, among 2D compact closed surfaces only those of genus 0 and 1 may have non-zero Killing fields, and the corresponding group of transformations is one-parametric, with the exception of surfaces of constant curvature, i.e. those isometric to a sphere (3 parameter group) or a flat torus (2 parameter group) [34]. Moreover, the intrinsic geometry of such surfaces is rotationally symmetric, see e.g. Ref. [32] and lemma 0.1 in Ref. [35]. Additional assumptions on the Gauss curvature ensure (see Ref. [36] where the proof is given if \(K>0\) on \(\Gamma\) or more recent treatment in Ref. [37]) that there is a unique smooth isometric embedding of such a surface into \(\mathbb{R}^{3}\) as a classical surface of revolution. We have not found results in the literature from which one may conclude that without additional assumptions on \(K\) the classical surface of revolution is the only representation in \(\mathbb{R}^{3}\) of a connected compact closed smooth surface with a Killing field, although such a conclusion looks very plausible. We note that Killing fields often appear in the studies of fluid equations on manifolds [38; 39; 40; 41; 14].
With the help of \(\nabla_{\Gamma}|\mathbf{u}|^{2}=2(\nabla_{\Gamma}\mathbf{u})^{T}\mathbf{u}\), which holds if \(\mathbf{u}\cdot\mathbf{n}=0\), and \((\mathbf{u}\cdot\nabla_{\Gamma})\mathbf{u}=(\nabla_{\Gamma}\mathbf{u})\mathbf{u}\) one verifies the identity
\[(\mathbf{u}\cdot\nabla_{\Gamma})\mathbf{u}=2\mathbf{D}_{\Gamma}(\mathbf{u}) \mathbf{u}-\frac{1}{2}\nabla_{\Gamma}|\mathbf{u}|^{2}.\]
Using this identity in (10) we see that for steady flow fields satisfying (11) the momentum equation reduces to \(\nabla_{\Gamma}p-\frac{\rho}{2}\nabla_{\Gamma}|\mathbf{u}|^{2}=0\). We thus get our _third equilibrium condition_:
\[p-\frac{\rho}{2}|\mathbf{u}|^{2}=p_{0}\qquad\text{with some}\quad p_{0}:=const. \tag{12}\]
According to (12) the in-surface pressure _in an equilibrium state_ splits into a constant term and a term representing the kinetic energy density. For a pure fluid membrane (\(c_{\kappa}=0\)), \(p\) can be interpreted as the surface tension coefficient, which is found to depend on the in-plane flow.
Summarizing, we obtain three conditions for the velocity and pressure of the fluid membrane in an equilibrium. These conditions (8), (11), and (12) are independent of an elasticity model and we use them below together with the particular elasticity model to derive the shape equation.
### Shape equations
The Weingarten mapping (shape operator) \(\mathbf{H}:\Gamma\to\mathbb{R}^{3\times 3}\) is given by \(\mathbf{H}=\nabla_{\Gamma}\mathbf{n}\). Note that \(\mathbf{H}=\mathbf{H}^{T}\), \(\mathbf{H}\mathbf{n}=0\). Eigenvectors of \(\mathbf{H}\) orthogonal to \(\mathbf{n}\) are the principal directions on \(\Gamma\), and the corresponding eigenvalues are the principal curvatures \(\kappa_{1}\) and \(\kappa_{2}\). In particular, \(\kappa=\kappa_{1}+\kappa_{2}:=\mathrm{tr}(\mathbf{H})\). We also need the following identity for the material derivative of \(\mathbf{n}\) (see eq. (2.14) in Ref. [14]):
\[\mathbf{\dot{n}}=\mathbf{H}\mathbf{u}-\nabla_{\Gamma}(\mathbf{u}\cdot \mathbf{n}). \tag{13}\]
To deduce the shape equation, we first take the normal part of the momentum equation (1),
\[\rho\mathbf{n}\cdot\mathbf{\dot{u}}=2\mu\mathbf{n}\cdot\,\mathrm{div}_{\Gamma }\mathbf{D}_{\Gamma}(\mathbf{u})+p\kappa+\mathbf{n}\cdot\mathbf{b}. \tag{14}\]
The first term on the right-hand side vanishes due to (11). For the normal projection of the material derivative we compute with the help of \(\mathbf{u}\cdot\mathbf{n}=0\) and (13)
\[0=(\mathbf{n}\cdot\mathbf{u})^{\boldsymbol{\cdot}}=\mathbf{n}\cdot\dot{\mathbf{u}}+\mathbf{u}\cdot\dot{\mathbf{n}}=\mathbf{n}\cdot\dot{\mathbf{u}}+\mathbf{u}^{T}\mathbf{H}\mathbf{u}.\]
Substituting this and (12), (4), (7) in (14) gives the _shape equation_
\[-\rho\mathbf{u}^{T}\mathbf{H}\mathbf{u}-\frac{\rho}{2}\kappa| \mathbf{u}|^{2}\\ =p_{0}\kappa+c_{\kappa}(\Delta_{\Gamma}\kappa+\frac{1}{2}\kappa^{ 3}-2K\kappa)+p^{\mathrm{ext}} \tag{15}\]
with some \(p_{0}=\mathrm{const}\) and tangential velocity \(\mathbf{u}\). At equilibrium, the term \(\rho\mathbf{u}^{T}\mathbf{H}\mathbf{u}\) on the left-hand side can be interpreted as the normal component of a centrifugal force generated by the material flow along a curved trajectory. This interpretation becomes evident when we restrict to axisymmetric shapes below. Therefore, the shape equation (15) represents the balance between the normal component of the centrifugal force, the effective membrane tension \((p_{0}+\frac{\rho}{2}|\mathbf{u}|^{2})\kappa\), the bending force, and the osmotic pressure \(p^{\mathrm{ext}}\). In turn, the effective membrane tension can be split into the 'static' term \(p_{0}\kappa\) and the 'dynamic' term \(\frac{\rho}{2}|\mathbf{u}|^{2}\kappa\).
Summarizing, the problem of finding dynamic equilibrium of a fluid-elastic membrane can be formulated as follows: For the given density \(\rho\), bending rigidity \(c_{\kappa}\), osmotic pressure \(p^{\mathrm{ext}}\), and surface area \(A=\mathrm{area}(\Gamma)\) find a shape \(\Gamma\), tangential flow \(\mathbf{u}\) and parameter \(p_{0}\) that solve (11) and (15). Alternatively, one may ask to find \(\Gamma\), \(\mathbf{u}\), \(p_{0}\), and \(p^{\mathrm{ext}}\) such that (11) and (15) hold with given \(\rho\), \(\kappa\), \(A=\mathrm{area}(\Gamma)\)_and_\(V=\mathrm{vol}(\Gamma)\).
Any surface of revolution supports a non-zero Killing field. Moreover, it looks plausible that, among closed compact smooth surfaces isometrically embedded in \(\mathbb{R}^{3}\), only surfaces of revolution support non-zero Killing fields; see the discussion following (11). This motivates us to restrict further considerations to such surfaces. Without loss of generality, we let \(Oz\) be the axis of symmetry for \(\Gamma\). Then a tangential \(\mathbf{u}\) satisfying (11) is a field of rigid rotations given by
\[\mathbf{u}(\mathbf{x})=w\,\mathbf{e}_{z}\times\mathbf{x},\quad\mathbf{x}\in\Gamma, \tag{16}\]
with the angular velocity \(w\,\mathbf{e}_{z}\). With the exception of a sphere, the functions \(\mathbf{u}\) in (16) represent the entire family of Killing fields on \(\Gamma\). Henceforth, we consider only \(\mathbf{u}\) given by (16). It holds
\[|\mathbf{u}(\mathbf{x})|=|w|\,r,\quad\text{with }r=\mathrm{dist}(\mathbf{x},Oz).\]
For an axisymmetric surface, the first principal direction is tangential to the generating curve, and the second one is the azimuthal direction, which coincides with the direction of \(\mathbf{u}\). Since the principal directions are given by the eigenvectors of \(\mathbf{H}\), the latter observation implies \(\mathbf{u}^{T}\mathbf{H}\mathbf{u}=\kappa_{2}|\mathbf{u}|^{2}\). Now (15) yields the _shape equation for an axisymmetric surface_:
\[-\rho\Big{(}\kappa_{2}+\frac{\kappa}{2}\Big{)}(w\,r)^{2}\\ =p_{0}\kappa+c_{\kappa}(\Delta_{\Gamma}\kappa+\frac{1}{2}\kappa^ {3}-2K\kappa)+p^{\mathrm{ext}}, \tag{17}\]
with some \(p_{0}=\mathrm{const}\). Thus, further in the paper we are interested in the following problem: _Find an axisymmetric \(\Gamma\), \(p_{0}\), and \(p^{\mathrm{ext}}\) such that (17) holds with given \(\rho\), \(c_{\kappa}\), \(|w|\), \(A=\mathrm{area}(\Gamma)\) and \(V=\mathrm{vol}(\Gamma)\)._ We remark that instead of prescribing \(w\) one may consider the prescribed angular momentum (a conserved quantity). In such a formulation, \(w\) should be treated as unknown.
**Remark 2** For \(\mathbf{u}=0\) eqs. (15) and (17) naturally simplify to (9), which is the optimality condition for constrained minimization of the energy functional (5) with conserved surface area and enclosed volume. However, for the general case of \(\mathbf{u}\neq 0\), it is not clear how the shape equation can be related to an energy minimization problem. A recent work by Krause et al. [19] deduced a variant of (17) by recognizing Killing fields in (16) as equilibrium solutions of the surface Navier-Stokes equations on axisymmetric surfaces. The shape equation in Ref. [19], however, uses a generic surface pressure variable (\(p_{0}\) and dynamic pressure do not appear). This makes the problem much harder to address numerically or relate to the classical constrained minimization problem for the ceasing lateral flow.
**Remark 3 (scaling)**: A scaling property well-known for (9) extends to (17): If a triple \(\{\Gamma,p_{0},p^{\mathrm{ext}}\}\) solves (17), then for any \(R>0\) the triple \(\{R^{-1}\Gamma,R^{2}p_{0},R^{3}p^{\mathrm{ext}}\}\) solves (17) with \(w\to R^{2}w\). Choosing a representative solution with \(\mathrm{area}(\Gamma)=4\pi\), it is therefore convenient to parameterize solutions by their reduced volume
\[\widehat{V}=\widehat{V}(\Gamma):=3\mathrm{vol}(\Gamma)/(4\pi),\quad\widehat{V }(\Gamma)\in(0,1], \tag{18}\]
where \(\widehat{V}=1\) corresponds to the unit sphere, a trivial solution of (17) for \(w=0\) and \(p_{0}\), \(p^{\mathrm{ext}}\) satisfying \(2p_{0}+Rp^{\mathrm{ext}}=0\) (with \(R\) the sphere radius, here \(R=1\)). The same scaling argument holds for solutions of (11), (15).
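The trivial sphere solution in Remark 3 can be verified with a few lines of computer algebra (a minimal SymPy sketch added for illustration; it is not part of the paper's Matlab code):

```python
import sympy as sp

# On a sphere of radius R: kappa = 2/R, K = 1/R^2, Delta_Gamma kappa = 0,
# so the elastic term of (17) vanishes and, for w = 0, (17) reduces to
# p0*kappa + pext = 0, i.e. 2*p0 + R*pext = 0.
R = sp.symbols('R', positive=True)
p0, pext = sp.symbols('p0 pext', real=True)
kappa, K = 2/R, 1/R**2
print(sp.simplify(kappa**3/2 - 2*K*kappa))            # elastic term -> 0
sol = sp.solve(sp.Eq(p0*kappa + pext, 0), pext)[0]    # pext = -2*p0/R
print(sp.simplify(2*p0 + R*sol))                      # -> 0
```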
## III Parametrization of the shape equation and a numerical solver
An axisymmetric \(\Gamma\) can be described by its profile curve
\[s\to(r(s),z(s)),\]
so that \(\Gamma\) is generated by rotating the profile curve around the \(z\)-axis in \(\mathbb{R}^{3}\). Assuming \(s\) is the arc-length parameter, one computes (cf. Section 3C in Ref. [42]) the principal curvatures to be \(\kappa_{1}=-r_{ss}z_{s}+r_{s}z_{ss}\), \(\kappa_{2}=\frac{z_{s}}{r}\). It is convenient to introduce the tilt angle \(\psi(s)\) (the angle between the \(Or\)-axis and the tangent vector to the profile curve). Writing geometric quantities in terms of \(\psi\), one gets
\[r_{s}=\cos\psi,\quad z_{s}=\sin\psi,\qquad\kappa_{1}=\psi_{s},\quad\kappa_{2} =\tfrac{\sin\psi}{r}.\]
One also computes \(\Delta_{\Gamma}\kappa=\frac{1}{r}(r\kappa_{s})_{s}\). Denote the length of the profile curve by \(L\). Then the boundary conditions at \(s=0\) and \(s=L\) are obviously \(r(0)=0\), \(r(L)=0\), \(\psi(0)=0\), \(\psi(L)=\pi\). The area and volume of the surface \(\Gamma\) can be computed as \(2\pi\int_{0}^{L}r\,ds\) and \(\pi\int_{0}^{L}r^{2}\sin\psi\,ds\), respectively. Now, we can formulate the problem of finding a stationary shape as follows:
Given an angular velocity \(w\geq 0\), surface area \(A>0\) and volume \(V>0\) (satisfying the isoperimetric inequality \(V\leq A^{\frac{3}{2}}/(6\sqrt{\pi})\), i.e. the necessary condition for a surface to exist), find \(L\in\mathbb{R}_{+}\), \(\psi(s),r(s):[0,L]\to\mathbb{R}\), \(p_{0},p^{\mathrm{ext}}\in\mathbb{R}\) satisfying the following system of ODEs, integral and boundary conditions:
\[-\rho w^{2}r\big{(}\tfrac{1}{2}r\psi_{s}+\tfrac{3}{2}\sin\psi\big{)}=p_{0}\kappa+c_{\kappa}\big{(}r^{-1}(r\kappa_{s})_{s}+\tfrac{1}{2}\kappa^{3}-2K\kappa\big{)}+p^{\mathrm{ext}}, \tag{19}\]
\[r_{s}=\cos\psi, \tag{20}\]
\[2\pi\int_{0}^{L}r\,ds=A,\qquad\pi\int_{0}^{L}r^{2}\sin\psi\,ds=V, \tag{21}\]
\[r(0)=0,\quad r(L)=0,\quad\psi(0)=0,\quad\psi(L)=\pi, \tag{22}\]
with \(\kappa=(\psi_{s}+\tfrac{\sin\psi}{r}),\ K=\tfrac{\psi_{s}\sin\psi}{r}\).
The system (19)-(22) is further discretized using a staggered grid for \(\psi\) and \(r\) with a uniform mesh step \(\Delta s=L/N\). We prescribe \(r\)-unknowns to nodes \(x_{i}=i\Delta s\), \(i=0,\ldots,N\) and \(\psi\)-unknowns to nodes \(\hat{x}_{j}=(j-\tfrac{1}{2})\Delta s\), \(j=0,\ldots,N+1\). Then equations (19)-(20) are discretized (using standard finite differences) in the inner \(\psi\)-nodes, and the integrals (21) are computed with the composite trapezoid and rectangle rules (using averaging for the \(r\) unknowns), respectively. After we approximate the boundary conditions in (22) by \(r(x_{0})=r(x_{N})=0\), \(\psi(\hat{x}_{0})+\psi(\hat{x}_{1})=0\), and \(\psi(\hat{x}_{N})+\psi(\hat{x}_{N+1})=2\pi\), we obtain a non-linear system of \(2N+6\) algebraic equations for \(2N+6\) unknowns: \(L\), \(p_{0}\), \(p^{\mathrm{ext}}\), \(r(x_{i})\), \(i=0,\ldots,N\), and \(\psi(\hat{x}_{j})\), \(j=0,\ldots,N+1\). The system of algebraic equations is solved using a non-linear least-squares method with the trust-region-dogleg algorithm, as implemented in Matlab's 'fsolve' procedure. To verify the convergence of the numerical method, solutions were computed for a sequence of refined meshes with \(N\in\{40,80,160,320,640\}\). The finest grid solution was taken as the reference, and the error was computed as the \(\ell_{\infty}\) norm of the difference between the solutions for \(N\in\{40,80,160,320\}\) and the finest grid solution. The method demonstrates second-order convergence, as shown in Fig. 1, for two examples of shapes, prolate and oblate.
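For concreteness, the following is a schematic Python re-implementation of the scheme (the original code is in Matlab; the indexing, the pole regularization \(r\kappa_{s}\to 0\) at \(r=0\), and the solver settings below are our assumptions, and the normalization of Sec. IV is used). It targets \(\widehat{V}=0.95\) with \(w=0\), starting from the unit sphere; for smaller reduced volumes, continuation in \(\widehat{V}\) is advisable:

```python
import numpy as np
from scipy.optimize import fsolve

rho, ck, w = 2.0, 1.0, 0.0      # rho/2 = 1 and c_kappa = 1 (assumed normalization)
A = 4*np.pi                     # prescribed area; unit sphere has Vhat = 1
V = 0.95*(4*np.pi/3)            # target reduced volume Vhat = 0.95
N = 60

def residual(q):
    r, psi = q[:N+1], q[N+1:2*N+3]
    L, p0, pext = q[2*N+3], q[2*N+4], q[2*N+5]
    ds = L/N
    j = np.arange(1, N+1)                        # inner psi-nodes
    rbar  = 0.5*(r[j-1] + r[j])                  # r averaged to psi-nodes
    psi_s = (psi[j+1] - psi[j-1])/(2*ds)
    kap   = psi_s + np.sin(psi[j])/rbar          # kappa at psi-nodes
    K     = psi_s*np.sin(psi[j])/rbar            # Gauss curvature at psi-nodes
    rks = np.zeros(N+1)                          # r*kappa_s at r-nodes,
    rks[1:N] = r[1:N]*(kap[1:] - kap[:-1])/ds    # with r*kappa_s = 0 at the poles
    lap = (rks[1:] - rks[:-1])/(ds*rbar)         # Delta_Gamma kappa
    res_shape = (-rho*w**2*rbar*(0.5*rbar*psi_s + 1.5*np.sin(psi[j]))
                 - (p0*kap + ck*(lap + 0.5*kap**3 - 2*K*kap) + pext))   # Eq. (19)
    res_rs   = (r[j] - r[j-1])/ds - np.cos(psi[j])                      # Eq. (20)
    res_area = 2*np.pi*ds*(0.5*r[0] + r[1:N].sum() + 0.5*r[N]) - A      # Eq. (21)
    res_vol  = np.pi*ds*(rbar**2*np.sin(psi[j])).sum() - V              # Eq. (21)
    res_bc   = [r[0], r[N], psi[0] + psi[1], psi[N] + psi[N+1] - 2*np.pi]  # (22)
    return np.concatenate([res_shape, res_rs, [res_area, res_vol], res_bc])

# initial guess: the unit sphere (an exact solution for Vhat = 1 and w = 0)
L0 = np.pi
x  = np.linspace(0.0, L0, N+1)
xh = (np.arange(N+2) - 0.5)*L0/N
q0 = np.concatenate([np.sin(x), xh, [L0, 1.0, -2.0]])
q, info, ier, msg = fsolve(residual, q0, full_output=True)
print(msg, "; L =", q[2*N+3], ", p0 =", q[2*N+4], ", pext =", q[2*N+5])
```

Convergence of the root finder from the sphere guess is not guaranteed for strongly deformed shapes; this sketch only mirrors the structure of the discrete system.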
Assuming axial symmetry is a common approach to simplify the numerical study of minimal energy shapes. In particular, a shape parametrization using \(\psi\) and \(r\) was employed in, e.g., Refs. [3; 4; 30; 43; 44]. However, we believe that the numerical scheme presented in this work is novel.
Figure 1: Convergence of the numerical solutions for refined meshes.
## IV Stationary shapes
To minimize the number of parameters we let
\[c_{\kappa}\in\{0,1\},\quad\rho/2=1,\quad A=4\pi.\]
This can always be ensured by a proper re-scaling of \(w\), \(p_{0}\), and \(p^{\rm ext}\). We then vary \(w\) and \(\widehat{V}\) and solve (17) to find \(\Gamma\), \(p_{0}\), \(p^{\rm ext}\).
### Case \(w=0\), \(c_{\kappa}=1\).
Setting \(w=0\) (pure elasticity, no fluidity) results in two branches of solutions to (9), consisting of oblate and prolate shapes, as shown in Fig. 2. To initiate each branch, we perturb the unit sphere by the second spherical harmonic as an initial guess for our nonlinear solver. The branch of oblate shapes continues with biconcave discocytes until approximately \(\widehat{V}\simeq 0.51\), while the branch of prolate shapes continues with increasingly elongated dumbbell forms. The resulting shapes and the corresponding \(p_{0}\) and \(p^{\rm ext}\) are in perfect agreement with results known in the literature [2; 4; 6].
### Case \(w=4\), \(c_{\kappa}=1\).
We now set \(w=4\) to investigate how the equilibrium state is affected by the balance between bending forces and forces generated by fluid motion. Starting from an oblate perturbation of the unit sphere, we find a branch of oblate ellipsoids that continues with biconcave shapes as the reduced volume \(\widehat{V}\) decreases; see Fig. 3. However, the transition to biconcave forms occurs later than for \(w=0\). The surface begins to self-intersect for \(\widehat{V}\lesssim 0.41\). Similar oblate ellipsoidal shapes were reported in Ref. [19] as limit equilibrium solutions to the full system (1)-(2) with \({\bf b}^{\rm ext}=0\) and \({\bf b}^{\rm elst}\) as in (7). These solutions were obtained as stationary limits of 3D numerical solutions that start from a spherical shape with a Killing field as the initial condition.
Starting with a prolate perturbation of the sphere, we were unable to find a branch of prolate ellipsoidal shapes in the vicinity of the unit sphere. Instead, we discovered two branches of oblique forms, as shown in Figs. 4 and 5. We were unable to compute shapes on these branches much beyond the smallest reported reduced volumes, i.e., \(\widehat{V}=0.35\) and \(\widehat{V}=0.625\), respectively. It is worth noting that the limit shape is close to pearling, a phenomenon known for pure Helfrich membranes with nonzero spontaneous curvature [4].
Figure 3: A branch of oblate–biconcave shapes for fluid–elastic membrane with \(w=4\). Top panel visualizes the 3D shape for \(\widehat{V}=0.51\).
Figure 2: Branches of oblate–biconcave and prolate–dumbbell shapes for pure elastic membrane.
Figure 4: The first branch of oblique shapes for \(w=4\). Left panel visualizes the 3D shape with \(\widehat{V}=0.66\).
Figure 5: The second branch of oblique shapes for \(w=4\). Left panel visualizes the 3D shape with \(\widehat{V}=0.66\).
Figure 6: A branch of sand watch shapes for \(w=4\). Left panel visualizes the 3D shape with \(\widehat{V}=0.66\).
Another branch of solutions was found for reduced volumes \(\widehat{V}\in[0.62,0.85]\). The branch consists of sand watch shapes, as shown in Fig. 6. For \(\widehat{V}<0.62\) the neck of the shape is closing. The non-linear solver also failed to converge to any solution for reduced volumes larger than \(\widehat{V}\approx 0.85\).
Two branches of dumbbell shapes were found for reduced volumes \(\widehat{V}\leq 0.6\), as shown in Figs. 7 and 8. The first branch somewhat resembles the dumbbell shapes found for \(w=0\); compare the shape profiles in Fig. 7 and Fig. 2. In the second branch, the dumbbell surfaces are distinctly different, featuring flatter concave discs as the reduced volume decreases. Our results suggest that around \(\widehat{V}\simeq 0.61\), there may be transition points where the oblique II and sand watch shapes yield two branches of dumbbell shape solutions.
In the present study, we do not address an important question regarding the stability of the newly found equilibrium states. It should be noted that the existing stability analysis of shapes of minimal bending energy, as presented in, for example, Refs. [45; 46], does not directly apply to dynamic equilibrium since the latter is not known to minimize an energy functional. In particular, when the membrane relaxes from any non-axisymmetric perturbation of an equilibrium shape with \(w\neq 0\), it must dissipate kinetic energy. Therefore, under general shape perturbations, the system cannot relax to the same state and may find a close stationary state, transit to another branch, or relax to complete rest with \(w=0\). A numerical illustration of the fluid deformable surface evolution from an oblate-biconcave shape towards prolate-dumbbell shapes with different symmetry axes can be found in Ref. [19].
**Remark 4**: The shape branches reported above were computed by trying different initial guesses in the algebraic solver. For example, the shapes in Fig. 4 resulted from setting the initial guess to be the perturbation of the unit sphere by \(\tfrac{1}{10}S_{2}\), where \(S_{2}\) is the 2nd spherical harmonic, and \(\widehat{V}=0.95\). After the first shape was computed, we continued to "move" along the branch by gradually decreasing \(\widehat{V}\) until the solver failed to converge starting from the previous shape as an initial guess.
### Larger \(w\) and \(c_{\kappa}=0\) cases.
To conclude this section, we will examine some shape transformations that occur when fluid inertia forces dominate over bending forces. Figure 9 depicts a branch of disc-like shapes for a reduced volume of \(\widehat{V}=0.51\). The branch begins with a biconcave shape that solves equation (17) for \(w=0\), and continues with solutions for a sequence of increasing \(w\). As \(w\) becomes larger, we observe that the discs become less concave and eventually converge to a shape resembling an oblate ellipsoid.
If the limit (for \(w\to\infty\)) smooth surface exists, it solves the shape equation (17) for the "pure fluid" case, in which the elastic forces are neglected by setting \(c_{\kappa}=0\). Mechanically this models a "heavy" fluid membrane such that inertia dominates over elasticity. We solved (17) for \(c_{\kappa}=0\) and found two branches of solutions consisting of oblate and prolate shapes, as shown in Fig. 10. In this limit case, we did not find any equilibrium states with concave or saddle shapes, as both principal curvatures were always positive. Additionally, we did not find any other solutions besides the two branches illustrated in Fig. 10. The coefficients \(p_{0}\) and \(p^{\rm ext}\) corresponding to the shapes in Fig. 10 are reported in Table 1.
When seeking solutions to (17) using the non-linear solver, we always used initial guesses that were symmetric with respect to the \(xy\)-plane. Therefore, asymmetric solutions are not reported in this study. However, we note that for the pure elastic model, asymmetric solutions are known to bifurcate from branches of symmetric shapes; see, for example, Refs. [2; 6], or the discussion in Sections 3.1.4, 3.4 of Ref. [6]. Therefore, asymmetric stationary solutions may also exist for \(w\neq 0\).
## V Conclusions
The mechanical equilibrium of a fluid inextensible membrane with non-negligible mass is achieved through steady-state solutions of the surface Navier-Stokes equations coupled with an out-of-plane elasticity model. Assuming the Boussinesq-Scriven constitutive law for viscous stresses, we derived three conditions for membrane equilibrium, namely (8), (11), and (12), which are independent of the elasticity model.
The second condition implies that there are only two possible scenarios: either the lateral motions of the membrane completely cease, or the equilibrium shape supporting a stationary flow is the surface with a Killing field. Accounting for a specific elasticity model leads to the shape equation. For elasticity models with an energy functional, the shape equation under the first scenario reduces to the optimality condition for the functional with area and volume constraints. For the second scenario, the shape equation represents a balance between the normal components of centrifugal, elastic, tension, and external forces. For axisymmetric surfaces, the equation can be efficiently parameterized and solved numerically.
Numerical studies using the simplest Helfrich elasticity model show that the equilibrium shapes depend on lateral motions and may differ significantly from those known for \(w=0\). In particular, new branches of solutions appear. We also found some equilibrium states for a pure fluid membrane, which correspond to stationary solutions of the evolving-surface Navier-Stokes equations with no elastic forces and external forces given by a constant force acting in the normal direction (i.e., the constant osmotic pressure). Determining which of the computed equilibrium states are stable is an important open question that we leave for future research.
Figure 8: The second branch of dumbbell shapes at \(w=4\). Left panel visualizes the 3D shape with \(\widehat{V}=0.48\).
Figure 7: The first branch of dumbbell shapes for \(w=4\). Left panel visualizes the 3D shape with \(\widehat{V}=0.48\).
###### Acknowledgements.
The author was supported in part by the U.S. National Science Foundation under awards DMS-2011444 and DMS-1953535. It is a pleasure to thank Robert Bryant and Gordon Heier for their help in understanding surfaces with Killing fields.
## Data Availability Statement
A Matlab script used to generate the results presented in the paper can be obtained from the author upon a reasonable request.
## Appendix A
Consider a smooth closed \(\Gamma\) embedded in \(\mathbb{R}^{3}\). For a smooth vector field \(\mathbf{v}:\mathcal{O}(\Gamma)\to\mathbb{R}^{3}\) define \(\Gamma(t)=\{\mathbf{y}\in\mathbb{R}^{3}\mid\mathbf{y}=\mathbf{x}(t,\mathbf{z} ),\;\mathbf{z}\in\Gamma\}\), \(t\in[0,\varepsilon)\), where the trajectories \(\mathbf{x}(t,\mathbf{z})\) solve the Cauchy problem \(\frac{d\mathbf{x}}{dt}=\mathbf{v}(\mathbf{x}),\mathbf{x}(0,\mathbf{z})= \mathbf{z}\in\Gamma\), and a small \(\varepsilon>0\) such that \(\Gamma(t)\in\mathcal{O}(\Gamma)\) for all \(t\in[0,\varepsilon)\). Obviously, we have \(\Gamma(0)=\Gamma\). Applying the surface Reynolds transport theorem, one computes
\[\begin{split}\frac{dH}{d\Gamma}\bigg{|}_{\mathbf{v}}&=\left.\left(\frac{d}{dt}\frac{c_{\kappa}}{2}\int_{\Gamma(t)}\kappa^{2}\,ds\right)\right|_{t=0}\\ &=\frac{c_{\kappa}}{2}\int_{\Gamma}\left((\kappa^{2})^{\boldsymbol{\cdot}}+\kappa^{2}\,\mathrm{div}_{\Gamma}\mathbf{v}\right)ds\\ &=\frac{c_{\kappa}}{2}\int_{\Gamma}\left(2\kappa\dot{\kappa}+\kappa^{2}\,\mathrm{div}_{\Gamma}\mathbf{v}\right)ds.\end{split} \tag{A1}\]
Let \(d=d(t):\mathcal{O}(\Gamma)\to\mathbb{R}\) be a signed distance function for \(\Gamma(t)\) which is smooth in the sufficiently small neighborhood \(\mathcal{O}(\Gamma)\). Then \(\mathbf{n}=\nabla d\), \(\kappa=\mathrm{div}_{\Gamma}\,\mathbf{n}\) are extensions of the normal vector field and mean curvature to \(\mathcal{O}(\Gamma)\). We split \(\mathbf{v}\) into tangential and normal components:
\[\mathbf{v}=\mathbf{v}_{T}+v_{N}\mathbf{n}.\]
Since \(\kappa\) is defined in a neighborhood of \(\Gamma\), we can expand
\[\dot{\kappa}=\frac{\partial\kappa}{\partial t}+\mathbf{v}\cdot\nabla\kappa=\frac{\partial\kappa}{\partial t}+\mathbf{v}_{T}\cdot\nabla_{\Gamma}\kappa+v_{N}(\mathbf{n}\cdot\nabla)\kappa\quad\mathrm{on}\;\Gamma. \tag{A2}\]
Integration by parts along \(\Gamma\) proves the identity
\[2\int_{\Gamma}\kappa\mathbf{v}_{T}\cdot\nabla_{\Gamma}\kappa\,ds=-\int_{\Gamma}\kappa^{2}\,\mathrm{div}_{\Gamma}\mathbf{v}_{T}\,ds. \tag{A3}\]
\begin{table}
\begin{tabular}{l|c c c c c c|c c c c c c c} & \multicolumn{8}{c}{Oblate shapes} & \multicolumn{8}{c}{Prolate shapes} \\ \(\hat{V}\) & 0.51 & 0.59 & 0.67 & 0.75 & 0.83 & 0.91 & 0.99 & 0.40 & 0.498 & 0.597 & 0.695 & 0.793 & 0.892 & 0.99 \\ \hline \multicolumn{11}{c}{\(w=0\), \(c_{\kappa}=1\)} \\ \(p_{0}\) & 4.11 & 4.64 & 5.12 & 5.55 & 5.91 & 6.17 & 6.17 & 21.1 & 13.6 & 9.46 & 6.93 & 5.15 & 5.08 & 5.74 \\ \(p^{\mathrm{ext}}\) & -16.1 & -15.7 & -15.3 & -14.8 & -14.2 & -13.6 & -12.5 & -105 & -54.5 & -31.7 & -19.9 & -13.0 & -11.4 & -11.6 \\ \hline \multicolumn{11}{c}{\(w=1\), \(c_{\kappa}=0\)} \\ \(p_{0}\) & -1.02 & -1.04 & -1.07 & -1.11 & -1.19 & -1.39 & -3.06 & 2.17 & 0.424 & 0.232 & 0.147 & 0.097 & 0.063 & 0.038 \\ \(p^{\mathrm{ext}}\) & 0.19 & 0.26 & 0.36 & 0.50 & 0.72 & 1.21 & 4.70 & -5.57 & -1.879 & -1.367 & -1.086 & -0.881 & -0.709 & -0.555 \\ \end{tabular}
\end{table}
Table 1: Surface tension coefficient \(p_{0}\) and osmotic pressure \(p^{\mathrm{ext}}\) recovered for branches of oblate and prolate shapes for pure elastic and pure fluid shapes.
Figure 10: Two branches of shapes for a pure fluid membrane, \(c_{\kappa}=0\).
The identity \(\operatorname{div}_{\Gamma}\mathbf{n}=\kappa\) yields
\[\begin{split}\operatorname{div}_{\Gamma}\mathbf{v}&=\operatorname{div}_{\Gamma}\mathbf{v}_{T}+\operatorname{div}_{\Gamma}(v_{N}\mathbf{n})\\ &=\operatorname{div}_{\Gamma}\mathbf{v}_{T}+\mathbf{n}\cdot\nabla_{\Gamma}v_{N}+v_{N}\kappa\\ &=\operatorname{div}_{\Gamma}\mathbf{v}_{T}+v_{N}\kappa,\end{split} \tag{A4}\]

where the middle term vanishes since \(\nabla_{\Gamma}v_{N}\) is tangential.
Using (A2), (A3) and (A4) in (A1) gives
\[\frac{dH}{d\Gamma}\bigg{|}_{\mathbf{v}}=\frac{c_{\kappa}}{2}\int_{\Gamma}\left(2\kappa\Big{(}\frac{\partial\kappa}{\partial t}+v_{N}(\mathbf{n}\cdot\nabla)\kappa\Big{)}+\kappa^{3}v_{N}\right)ds. \tag{A5}\]
We assume that the neighborhood \(\mathcal{O}(\Gamma)\) is sufficiently small such that the closest point projection \(p:\mathcal{O}(\Gamma)\to\Gamma(t)\), \(p(x,t)=x-d\mathbf{n}\) is well defined. We then have
\[\frac{\partial d}{\partial t}=-v_{N}^{e}\quad\text{in }\mathcal{O}(\Gamma), \tag{A6}\]
where \(v_{N}^{e}(x,t)=v_{N}(p(x,t),t)\). With the help of (A6) and \(\kappa=\operatorname{div}_{\Gamma}\mathbf{n}=\operatorname{div}\mathbf{n}=\Delta d\), we compute
\[\begin{split}\frac{\partial\kappa}{\partial t}&+v_{N}(\mathbf{n}\cdot\nabla)\kappa\\ &=\Delta\frac{\partial d}{\partial t}+v_{N}\mathbf{n}\cdot\nabla\operatorname{div}\mathbf{n}\\ &=-\Delta v_{N}^{e}+v_{N}\mathbf{n}\cdot\nabla\operatorname{div}\mathbf{n}\\ &=-\Delta_{\Gamma}v_{N}+v_{N}\mathbf{n}\cdot\nabla\operatorname{div}\mathbf{n}\end{split}\quad\text{on }\Gamma(t). \tag{A7}\]
Taking the divergence of the identity \(\nabla|\mathbf{n}|^{2}=0\), we get \(0=\mathbf{n}\cdot\Delta\mathbf{n}+\nabla\mathbf{n}:\nabla\mathbf{n}\), implying that \(-\mathbf{n}\cdot\Delta\mathbf{n}=\operatorname{tr}((\nabla\mathbf{n})^{2})=\operatorname{tr}(\mathbf{H}^{2})=\kappa_{1}^{2}+\kappa_{2}^{2}\). We use this and \(\Delta=\nabla\operatorname{div}-\nabla\times\nabla\times\) to handle the last term in the right-hand side of (A7):
\[\begin{split}\mathbf{n}\cdot\nabla\operatorname{div}\mathbf{n}&=\mathbf{n}\cdot\Delta\mathbf{n}+\mathbf{n}\cdot(\nabla\times\nabla\times\mathbf{n})\\ &=-(\kappa_{1}^{2}+\kappa_{2}^{2})+\mathbf{n}\cdot(\nabla\times\nabla\times\mathbf{n})\\ &=-(\kappa_{1}^{2}+\kappa_{2}^{2}).\end{split} \tag{A8}\]
For the last equality we used \(\nabla\times\mathbf{n}=\nabla\times(\nabla d)=0\). Substituting (A7)-(A8) in (A5) we obtain
\[\begin{split}\frac{dH}{d\Gamma}\bigg{|}_{\mathbf{v}}& =\frac{c_{\kappa}}{2}\int_{\Gamma}2\kappa(-\Delta_{\Gamma}v_{N}-( \kappa_{1}^{2}+\kappa_{2}^{2})v_{N})+\kappa^{3}v_{N}\,ds\\ &=\frac{c_{\kappa}}{2}\int_{\Gamma}2\kappa(-\Delta_{\Gamma}v_{N} -(\kappa^{2}-2K)v_{N})+\kappa^{3}v_{N}\,ds.\end{split}\]
Integration by parts yields the result in (7).
|
2302.10102 | Revisiting the Gamow Factor of Reactions on Light Nuclei | This study provides an improved understanding of the penetration
probabilities (PPs) in nuclear reactions of light nuclei by correcting the
assumptions used in the conventional Gamow factor. The Gamow factor effectively
describes the PP in nuclear reactions based on two assumptions: low particle
energy than the Coulomb barrier and neglecting the dependence of nuclear
interaction potential. However, we find that the assumptions are not valid for
light nuclei. As a result of a calculation that excludes the assumptions, we
obtain the PP that depends on the nuclear interaction potential depth for the
light nuclei. For the potential depth fitted by the experimental fusion
cross-section, we present that PPs of light nuclei (D+D, D+T, D+$^3$He, p+D,
p+$^6$Li, and p+$^7$Li) become higher than the conventional one near the
Coulomb barrier. We also discuss the implications of the modified PP, such as
changes in the Gamow peak energy, which determine the measurement of the energy
range of the nuclear cross-section in experiments, and the electron screening
effect. | Eunseok Hwang, Heamin Ko, Kyoungsu Heo, Myung-Ki Cheoun, Dukjae Jang | 2023-02-20T17:04:07Z | http://arxiv.org/abs/2302.10102v2 | # Revisiting the Gamow Factor of Reactions on Light Nuclei
###### Abstract
This study provides an improved understanding of the penetration probabilities (PPs) in nuclear reactions of light nuclei by correcting the assumptions used in the conventional Gamow factor. The Gamow factor effectively describes the PP in nuclear reactions based on two assumptions: low particle energy than the Coulomb barrier and neglecting the dependence of nuclear interaction potential. However, we find that the assumptions are not valid for light nuclei. As a result of a calculation that excludes the assumptions, we obtain the PP that depends on the nuclear interaction potential depth for the light nuclei. For the potential depth fitted by the experimental fusion cross-section, we present that PPs of light nuclei (D+D, D+T, D+\({}^{3}\)He, p+D, p+\({}^{6}\)Li, and p+\({}^{7}\)Li) become higher than the conventional one near the Coulomb barrier. We also discuss the implications of the modified PP, such as changes in the Gamow peak energy, which determine the measurement of energy range of the nuclear cross-section in experiments, and the electron screening effect.
## I Introduction
One of the ultimate goals of nuclear astrophysics is to elucidate the origin of the elements in the universe. For this purpose, astrophysical processes such as the \(r\)-, \(p\)-, and \(s\)-processes [1; 2; 3; 4; 5] have been well established, and recently, the \(rp\)- [6] and \(\nu\)-induced processes [7; 8; 9; 10] have been developed to supplement the conventional astrophysical processes. Such studies of nucleosynthesis are commonly evaluated by network calculations, in which the thermonuclear reaction rate plays a crucial role as the main building block.
For the two-body reaction of \(i+j\to k+l\), the thermonuclear reaction rate per pair of particles is generally given as1
Footnote 1: We adopt the natural unit of \(\hbar=c=k_{B}\equiv 1\) for all equations in this paper.
\[\langle\sigma v\rangle_{ij}=\sqrt{\frac{8}{\pi\mu_{ij}T^{3}}}\int Ee^{-E/T} \sigma(E)dE, \tag{1}\]
where \(\sigma(E)\) is the cross-section for the given nuclear reaction, \(v\) is the relative velocity between species \(i\) and \(j\), \(\mu_{ij}\) is the reduced mass of \(i\) and \(j\), \(T\) is the temperature, and \(E=\mu_{ij}v^{2}/2\) is the relative energy of \(i+j\) particles. In the typical temperature range of nucleosynthesis environments, the thermonuclear reaction rate in Eq. (1) is characterized by the cross-section in the energy region from sub-keV to a few MeV. However, it still remains a challenge to measure the low energy cross-section of nuclear reactions as these energy regions are near or below the Coulomb barrier.
The low energy cross-section can be explained by factorizing \(\sigma(E)\) into the contributions by nuclear and Coulomb interaction as follows:
\[\sigma(E)=\sigma_{\rm nuc}(E)\hat{P}(E) \tag{2}\]
where \(\sigma_{\rm nuc}(E)\) represents the cross-section by nuclear interaction, and \(\hat{P}(E)\) is the penetration probability (PP) for the Coulomb barrier. For the nuclear interaction part, the \(\sigma_{\rm nuc}(E)\) can be expressed as \(\sigma_{\rm nuc}(E)=S(E)/E\) using the astrophysical S-factor, \(S(E)\). For the Coulomb interaction part, the \(\hat{P}(E)\) is given as
\[\hat{P}(E)=e^{-2\pi\eta}, \tag{3}\]
where \(\eta\) is the Sommerfeld parameter defined as \(\eta\equiv Z_{i}Z_{j}e^{2}/v\) with nuclear charges \(eZ_{i}\) and \(eZ_{j}\). The PP in Eq. (3) is known as the Gamow factor, which effectively describes the probability of the charged nuclei penetrating the given Coulomb barrier. Then, using \(S(E)\) and \(\hat{P}(E)\), the nuclear cross-section is rewritten as
\[\sigma(E)=\frac{S(E)}{E}e^{-2\pi\eta}, \tag{4}\]
When a low-lying resonance is absent, the \(S(E)\) is monotonic at the low energy region. To determine the \(S(E)\) precisely at these low energies, the R-matrix theory [11; 12] has been utilized, providing a more precise evaluation of \(S(E)\). Therefore, even with the lack of low energy cross-section data, the nuclear cross-section for nucleosynthesis can be evaluated using the extrapolated \(S(E)\) and the Gamow factor, a widely adopted method in nuclear astrophysics [13; 14].
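As a quick illustration of how the Boltzmann factor in Eq. (1) and the Gamow factor in Eq. (4) select a narrow energy window, the sketch below locates the maximum of the rate integrand for a slowly varying \(S(E)\); the D+D parameters (reduced mass, charge product) are rough values assumed for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha = 1/137.036              # fine-structure constant
mu, Z1Z2 = 937.8, 1.0          # MeV: approximate D+D reduced mass and charge product
T = 0.01                       # temperature in MeV (about 10 keV)

b = 2*np.pi*alpha*Z1Z2*np.sqrt(mu/2.0)   # so that 2*pi*eta = b/sqrt(E), E in MeV
f = lambda E: E/T + b/np.sqrt(E)         # negative exponent of the rate integrand
E0_num = minimize_scalar(f, bounds=(1e-5, 1.0), method='bounded').x
E0_formula = (b*T/2.0)**(2.0/3.0)        # stationary point of f (Gamow peak energy)
print(E0_num, E0_formula)                # both close to 0.029 MeV for these inputs
```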
However, it is questionable whether the conventional Gamow factor is valid for reactions of light nuclei. The Gamow factor in Eq. (3) is derived from the Wentzel-Kramers-Brillouin (WKB) approximation in the low energy limit, with the assumption that the particle energy is much lower than the Coulomb barrier. This WKB approximation results in the Gamow factor that depends only on the nuclear charge and energy. While the low energy assumption is suitable for reactions involving heavy nuclides, the Coulomb barriers for light nuclei could be comparable to the thermal kinetic energy in specific astrophysical environments. For instance, in Big-Bang nucleosynthesis or supernova explosions, the temperature can reach a few MeV, which is not significantly lower than the Coulomb barriers (\(V_{1}=Z_{1}Z_{2}e^{2}/R_{0}\), where \(R_{0}=r_{0}(A_{1}^{1/3}+A_{2}^{1/3})\) with \(r_{0}\) in [15]) tabulated in Table 1. This implies that the assumption used to derive the conventional Gamow factor may not be proper.
Hence, this paper aims to numerically investigate the penetration probability (PP) for charged nuclei without the assumptions adopted for the Gamow factor.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**reaction** & **Coulomb barrier (MeV)** \\ \hline
D + D & 0.206 \\
D + T & 0.338 \\
D + \({}^{3}\)He & 0.320 \\
p + D & 0.541 \\
p + \({}^{6}\)Li & 1.300 \\
p + \({}^{7}\)Li & 1.115 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coulomb barriers for reactions of light nuclei.
Our findings present a PP that depends on the nuclear interaction potential depth for the reactions listed in Table 1. We compare the fully calculated PP to the conventional Gamow factor, using a potential depth fitted by the fusion cross-section data. Furthermore, we discuss the implications of the modified PP for the Gamow peak energy, the measurement of the nuclear cross-section in experiments, and the electron screening effect.
This paper is organized as follows: In Section II, we introduce the modified PP that depends on the potential depth and evaluate the validity of the assumptions behind the conventional Gamow factor. In Section III, we explore the modified Gamow peak energy resulting from the change in the PP. In Section IV, we discuss possible changes in the electron screening effect due to the modified PP. Section V provides the conclusion of this paper.
## II Modified Gamow Factor
In a reaction between two charged nuclides, we should take into account two kinds of potentials, for the nuclear and Coulomb interactions. The description of the nuclear interaction potential is model-dependent, so various effective potentials are used to describe low-energy reactions, such as the optical potential [16], the folding potential [17], the Akyuz-Winther potential [18], and so on. However, it is challenging to obtain a precise bare potential for light nuclei from these models [19]. In this study, we adopt the square well potential commonly used for reactions of light nuclei, which has been shown to describe fusion cross-sections consistently with results based on complex potentials [20; 21; 22]. Using the square well potential with a real depth, we derive the penetration probability. On the other hand, the Coulomb potential has the exact \(1/r\) form, so we can treat it as a summation of infinitesimal square barriers.
For the square well and infinitesimal barrier potentials, we can derive the respective PPs for the reaction between nuclei \(i\) and \(j\) [23],
\[\hat{P}_{W}(E,V_{0})=\frac{4\sqrt{(E+V_{0})E}}{\left[\sqrt{(E+V_{0})}+\sqrt{E }\right]^{2}}, \tag{5}\]
\[\hat{P}_{B}(E,V_{1})=\frac{4E}{\left[4E+\left(E+V_{1}+\frac{E^{2}}{V_{1}-E} \right)\sinh^{2}(\sqrt{2\mu_{ij}(V_{1}-E)}\Delta)\right]}, \tag{6}\]
where \(\hat{P}_{W}(E,V_{0})\) and \(\hat{P}_{B}(E,V_{1})\) represent the PP for the square well and infinitesimal square
barrier, respectively. \(\hat{P}_{W}\) depends on the particle energy \(E\) and the depth of the well \(V_{0}\), while \(\hat{P}_{B}(E,V_{1})\) depends on the height of the barrier \(V_{1}\), the reduced mass \(\mu_{ij}\), and the width \(\Delta\) of the square barrier, as well as \(E\). Then, combining both potentials, we can obtain the PP as [23]
\[\hat{P}_{WB}(E,V_{0},V_{1}) = 4\sqrt{E(E+V_{0})}\left[2E+V_{0}+2\sqrt{E(E+V_{0})}\right. \tag{7}\] \[\left.+\left(E+V_{0}+V_{1}+\frac{E(E+V_{0})}{V_{1}-E}\right) \sinh^{2}(\sqrt{2\mu_{ij}(V_{1}-E)}\Delta)\right]^{-1}.\]
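A direct implementation of Eqs. (5)-(7) may clarify the unit handling; below is a minimal sketch with \(\hbar c=197.327\) MeV fm restored explicitly, where the well depth \(V_0=30\) MeV and slice width \(\Delta=0.1\) fm are illustrative placeholders rather than fitted values:

```python
import numpy as np

HBARC = 197.327  # MeV fm
AMU = 931.494    # MeV/c^2

def p_well(E, V0):
    """Eq. (5): PP of the square well of depth V0 (both in MeV)."""
    return 4.0 * np.sqrt((E + V0) * E) / (np.sqrt(E + V0) + np.sqrt(E)) ** 2

def p_barrier(E, V1, mu_amu, delta_fm):
    """Eq. (6): PP of one thin square barrier with V1 > E and width delta_fm."""
    kappa = np.sqrt(2.0 * mu_amu * AMU * (V1 - E)) / HBARC  # fm^-1
    s2 = np.sinh(kappa * delta_fm) ** 2
    return 4.0 * E / (4.0 * E + (E + V1 + E ** 2 / (V1 - E)) * s2)

def p_well_barrier(E, V0, V1, mu_amu, delta_fm):
    """Eq. (7): PP of the innermost slice combining well and barrier."""
    kappa = np.sqrt(2.0 * mu_amu * AMU * (V1 - E)) / HBARC
    s2 = np.sinh(kappa * delta_fm) ** 2
    root = np.sqrt(E * (E + V0))
    denom = (2.0 * E + V0 + 2.0 * root
             + (E + V0 + V1 + E * (E + V0) / (V1 - E)) * s2)
    return 4.0 * root / denom

# D+D near the barrier; V0 and delta are placeholders, not fitted values
print(p_well_barrier(E=0.05, V0=30.0, V1=0.206, mu_amu=1.0, delta_fm=0.1))
```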
To derive the conventional Gamow factor, two approximations are made based on the following assumptions. First, the particle energy is much lower than the Coulomb barrier, i.e., \(E\ll V_{1}\), which reduces Eq. (7) to
\[\hat{P}_{WB}(E,V_{0},V_{1})\approx\frac{16\sqrt{E(E+V_{0})}(V_{1}-E)}{V_{1}(V _{0}+V_{1})}\exp\left(-2\sqrt{2\mu_{ij}(V_{1}-E)}\Delta\right). \tag{8}\]
Second, the coefficient of the exponential factor in Eq. (8) is assumed to be of order unity for reasonable physical values of \(E\), \(V_{0}\), and \(V_{1}\), which results in
\[\hat{P}_{WB}(E,V_{1})\approx\exp\left(-2\sqrt{2\mu_{ij}(V_{1}-E)}\Delta\right). \tag{9}\]
We note that the last approximation removes the dependence of the PP on the nuclear interaction potential depth \(V_{0}\). Since the PP in Eq. (9) includes only the infinitesimal barrier, the total PP is obtained by accumulating each PP from the classical turning point (\(R_{C}\)) to the radius of the potential well (\(R_{0}\)). This product of exponentials can be expressed through the summation of the exponents. For infinitesimal width \(\Delta\ll 1\), we can convert the total PP to the following integral form:
\[\hat{P}_{G}(E)\approx\exp\left[-2\int_{R_{0}}^{R_{C}}\sqrt{2\mu_{ij}\left( \frac{Z_{i}Z_{j}e^{2}}{r}-E\right)}dr\right]. \tag{10}\]
The leading term of the integration in Eq. (10) is equivalent to Eq. (3), the conventional Gamow factor.
Although the assumptions used in Eqs. (8) and (9) are mostly valid, especially for heavy nuclei, the Coulomb barriers are not significantly high for reactions of light nuclei, as shown in Table 1. Thus, we modify the conventional PP using Eqs. (5) and (6) without the aforementioned two approximations; the resulting formula is
\[\hat{P}_{mod}(E,V_{0},V_{1})=\hat{P}_{WB,1}(E,V_{0},V_{1})\prod_{m=2}^{n}\hat {P}_{B,m}(E,V_{1,m}), \tag{11}\]
where \(\hat{P}_{mod}(E,V_{0},V_{1})\) represents the modified PP and the subscript \(m\) indicates the \(m\)-th infinitesimal-barrier PP, counted from the innermost barrier. Each barrier in Eq. (11) depends on the height of the \(m\)-th barrier, \(V_{1,m}\), at its position, \(R_{m}\). These quantities are respectively given as:
\[R_{m} = \left(\frac{R_{c}-R_{0}}{n}\right)(m-1)+R_{0}, \tag{12}\] \[V_{1,m} = \frac{Z_{1}Z_{2}e^{2}}{R_{m}}. \tag{13}\]
For the innermost barrier, i.e., \(m=1\), the PP includes both the barrier and the square well potential. Since we take the potential depth into account, the general form of \(\hat{P}_{mod}(E,V_{0},V_{1})\) in Eq. (11) depends on the nuclear interaction potential depth \(V_{0}\), the barrier height \(V_{1}\), and the energy \(E\).
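The product in Eq. (11) can be evaluated numerically by slicing the Coulomb tail into \(n\) thin barriers between \(R_{0}\) and \(R_{C}=Z_{i}Z_{j}e^{2}/E\). A minimal sketch follows, with \(e^{2}=\alpha\hbar c\approx 1.440\) MeV fm; the well parameters here are placeholders, and the fitted \(V_{0}\), \(r_{0}\) of Ref. [15] should be used in practice:

```python
import numpy as np

HBARC, AMU, E2 = 197.327, 931.494, 1.440  # MeV fm; e^2 = alpha * hbar * c

def p_mod(E, V0, Z1, Z2, mu_amu, r0_fm, A1, A2, n=2000):
    """Modified PP of Eqs. (11)-(13): product of n thin-barrier PPs."""
    R0 = r0_fm * (A1 ** (1 / 3) + A2 ** (1 / 3))  # well radius
    Rc = Z1 * Z2 * E2 / E                         # classical turning point
    delta = (Rc - R0) / n                         # slice width
    Rm = R0 + np.arange(n) * delta                # Eq. (12), m = 1..n
    V1m = Z1 * Z2 * E2 / Rm                       # Eq. (13); V1m > E for all m
    kappa = np.sqrt(2.0 * mu_amu * AMU * (V1m - E)) / HBARC
    s2 = np.sinh(kappa * delta) ** 2
    # innermost slice (Eq. 7) includes the square well of depth V0
    root = np.sqrt(E * (E + V0))
    p0 = 4 * root / (2 * E + V0 + 2 * root
                     + (E + V0 + V1m[0] + E * (E + V0) / (V1m[0] - E)) * s2[0])
    # remaining slices are pure thin barriers (Eq. 6)
    pb = 4 * E / (4 * E + (E + V1m[1:] + E ** 2 / (V1m[1:] - E)) * s2[1:])
    return p0 * np.prod(pb)

# D+D with placeholder square-well parameters
print(p_mod(E=0.05, V0=30.0, Z1=1, Z2=1, mu_amu=1.0, r0_fm=1.2, A1=2, A2=2))
```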
Figure 1 shows the PP for the D+D reaction as a function of \(E\) and \(V_{0}\). Both \(\hat{P}_{G}(E)\) and \(\hat{P}_{mod}(E,V_{0},V_{1})\) increase with particle energy \(E\), indicating that the higher the energy of a particle, the higher its probability of penetrating the Coulomb barrier. However, the difference between \(\hat{P}_{G}(E)\) and \(\hat{P}_{mod}(E,V_{0},V_{1})\) is the dependence on \(V_{0}\). While \(\hat{P}_{G}(E)\) does not depend on \(V_{0}\), \(\hat{P}_{mod}(E,V_{0},V_{1})\) increases as \(V_{0}\) decreases. This is because \(\hat{P}_{mod}(E,V_{0},V_{1})\) behaves like \(\hat{P}_{W}(E,V_{0})\), which decreases as \(V_{0}\) increases, near the Coulomb barrier where \(E\simeq V_{1}\).
For the square well potential, the parameters \(V_{0}\) and \(r_{0}\) can be fitted to fusion cross-section data using the complex potential, as reported in Ref. [15]. The previous literature modeled the square well potential with a complex depth, providing real and imaginary parts. Although the complex potential results in a different formula for the penetration probability, the formula reduces to Eq. (11) because the imaginary potential is much smaller than the real one. We confirm that the given imaginary potential affects the PP by less than 2%. Therefore, it is valid to use only the real \(V_{0}\) with radius \(r_{0}\) from Ref. [15]. Taking those parameters for the reactions in Table 1, we show \(\hat{P}_{G}(E)\) and \(\hat{P}_{mod}(E,V_{0},V_{1})\) in Fig. 2 as a function of \(E\). Near the Coulomb barrier, the given \(V_{0}\) results in a higher \(\hat{P}_{mod}(E,V_{0},V_{1})\) than \(\hat{P}_{G}(E)\), except for the p+D reaction.
The approximation is valid only for the p+D reaction, owing to its small reduced mass and Coulomb barrier. Near the Coulomb barrier, since \(\hat{P}_{mod}(E,V_{0},V_{1})\) is approximately equal
to \(\hat{P}_{W}(E,V_{0})\), the ratio of \(\hat{P}_{mod}(E,V_{0},V_{1})\) to \(\hat{P}_{G}(E)\) can be expressed as
\[\frac{\hat{P}_{mod}(E,V_{0},V_{1})}{\hat{P}_{G}(E)}\simeq\frac{4\sqrt{(E+V_{0})E }/(\sqrt{E+V_{0}}+\sqrt{E})^{2}}{\exp\left[-2\pi Z_{i}Z_{j}e^{2}\sqrt{\frac{ \mu_{ij}}{2E}}\right]}. \tag{14}\]
For the given \(V_{0}\) and the condition \(E\simeq V_{1}\) near the Coulomb barrier, the numerator \(\hat{P}_{mod}(E,V_{0},V_{1})\) in Eq. (14) is a constant less than unity. Therefore, the ratio in Eq. (14) is determined by the exponential term. Among the three reactions D+D, D+T, and p+D (\(Z_{i}=Z_{j}=1\)), the p+D reaction has the smallest value of \(\sqrt{\mu_{ij}/(2E)}\), resulting in the smallest magnitude of the exponent. This brings the ratio in Eq. (14) close to unity. On the other hand, for the other reactions, \(\hat{P}_{mod}(E,V_{0},V_{1})\) is higher than \(\hat{P}_{G}(E)\) near the Coulomb barrier because of the large value of \(\sqrt{\mu_{ij}/(2E)}\). This inconsistency implies that the previous assumptions adopted for \(\hat{P}_{G}(E)\) oversimplify the PP.
Figure 1: The PP for the D+D reaction as a function of \(E\) and \(V_{0}\). The solid and dot-dashed lines represent \(\hat{P}_{G}(E)\) and \(\hat{P}_{mod}(E,V_{0},V_{1})\), respectively. The color bar on the right indicates the value of \(V_{0}\) for \(\hat{P}_{mod}(E,V_{0},V_{1})\). The black vertical dashed line marks the Coulomb barrier, \(V_{1}=0.206\,\)MeV, for the D+D reaction. A subplot provides a magnified view of the energy range from 0.7 keV to 13 keV.
## III Modified Gamow Energy
For the case of a non-resonant reaction, using \(S(E)\) and \(\hat{P}_{G}(E)\), the thermonuclear reaction rate in Eq. (1) is rewritten as
\[\langle\sigma v\rangle_{ij}=\sqrt{\frac{8}{\pi\mu_{ij}T^{3}}}\int S(E)e^{-E/T} \hat{P}_{G}(E)dE. \tag{15}\]
In the low-energy region of \(S(E)\), the integrand in Eq. (15) is governed by the Maxwell-Boltzmann (MB) distribution and \(\hat{P}_{G}(E)\). The combination of these two factors results in a sharp peak in energy, referred to as the Gamow peak. This peak represents the energy range where most of the thermonuclear reactions occur. The Gamow peak energy, \(E_{G}\), is determined by the extremum condition on the integrand in Eq. (15). The expression for \(E_{G}\) is given as \(E_{G}=0.1220\left(Z_{i}^{2}Z_{j}^{2}\mu_{ij}T_{9}^{2}\right)^{1/3}(\text{MeV})\), where \(T_{9}\) is the temperature in GK. This value of \(E_{G}\) can be used to estimate the thermonuclear reaction rate in astrophysical environments or to determine the energy range of interest for measuring nuclear cross-sections in experiments [24; 25; 26].
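The quoted coefficient can be checked numerically: for a slowly varying \(S(E)\), maximizing \(e^{-E/T}e^{-2\pi\eta}\) on a fine grid reproduces the closed-form \(E_{G}\). A minimal sketch, assuming \(k_{B}=8.617\times 10^{-2}\) MeV/GK and the reduced mass in amu:

```python
import numpy as np

ALPHA, AMU, KB = 1.0 / 137.036, 931.494, 8.617e-2  # KB in MeV per GK

def gamow_peak_numeric(Z1, Z2, mu_amu, T9):
    """Locate the maximum of exp(-E/T - 2*pi*eta) on a fine energy grid."""
    T = KB * T9  # temperature in MeV
    E = np.linspace(1e-4, 5.0, 200_000)  # MeV
    two_pi_eta = 2 * np.pi * ALPHA * Z1 * Z2 * np.sqrt(mu_amu * AMU / (2.0 * E))
    return E[np.argmax(-E / T - two_pi_eta)]

def gamow_peak_formula(Z1, Z2, mu_amu, T9):
    return 0.1220 * (Z1 ** 2 * Z2 ** 2 * mu_amu * T9 ** 2) ** (1 / 3)  # MeV

# D+D at T9 = 1 GK: the two values agree to better than 0.1%
print(gamow_peak_numeric(1, 1, 1.0, 1.0), gamow_peak_formula(1, 1, 1.0, 1.0))
```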
Since \(\hat{P}_{mod}(E,V_{0},V_{1})\) differs from \(\hat{P}_{G}(E)\), \(E_{G}\) also changes. Replacing \(\hat{P}_{G}(E)\) with \(\hat{P}_{mod}(E,V_{0},V_{1})\) in Eq. (15), the extremum condition is
\[\frac{d}{dE}\left(\hat{P}_{mod}(E,V_{0},V_{1})\exp(-E/T)\right)_{E=E_{G}^{ \prime}}=0, \tag{16}\]
where \(E_{G}^{\prime}\) is the modified Gamow peak energy. We note that \(E_{G}^{\prime}\) depends on \(V_{0}\) and \(r_{0}\) as well as \(T\). Adopting the same potential parameters as in Fig. 2, we show \(E_{G}\) and \(E_{G}^{\prime}\) as functions of \(T\) in Fig. 3 for the reactions in Table 1. In most temperature regions, \(E_{G}^{\prime}\) is lower than \(E_{G}\). This is because the increased PP intersects the MB distribution in a lower energy region. As a result, we find that \(E_{G}^{\prime}\) is up to about 5.3 times smaller than \(E_{G}\) in the sub-keV region. This implies that the estimated energy region of interest in experiments or astrophysical environments could be lower than when \(E_{G}\) is adopted.
## IV Discussion: Electron Screening Effect
We also discuss the effects of the change in PP on electron screening. In a plasma, the Coulomb potential of the charged nuclei is screened by electrons. As a result, the Coulomb potential
between \(i\) and \(j\) is replaced by
\[V_{s}(r)=\frac{Z_{i}Z_{j}e^{2}}{r}e^{-r/\lambda_{D}}, \tag{17}\]
where the subscript \(s\) indicates 'screened' and \(\lambda_{D}\) is the Debye length. The Debye screening increases the thermonuclear reaction rate by lowering the Coulomb barrier.
For the conventional Gamow factor, the enhancement factor is defined as
\[f_{g,s}(\rho,T)\equiv\frac{\left<\sigma v\right>_{s}}{\left<\sigma v\right>_{b} }\simeq\frac{\int e^{-E/T}\hat{P}_{G,s}(E,\rho,T)dE}{\int e^{-E/T}\hat{P}_{G,b} (E)dE}, \tag{18}\]
where the subscript \(b\) indicates 'bare'. The PP for the bare potential is equal to the Gamow factor, i.e., \(\hat{P}_{G,b}(E)=\hat{P}_{G}(E)\), and the screened PP is given as
\[\hat{P}_{G,s}(E,\rho,T)=\exp\left[-2\int_{R_{0}}^{R_{C}}\sqrt{2\mu_{ij}\left(\frac{Z_{i}Z_{j}e^{2}}{r}e^{-r/\lambda_{D}}-E\right)}dr\right]\approx\exp\left[-2\int_{R_{0}}^{R_{C}}\sqrt{2\mu_{ij}\left(\frac{Z_{i}Z_{j}e^{2}}{r}-\frac{Z_{i}Z_{j}e^{2}}{\lambda_{D}}-E\right)}dr\right], \tag{19}\]
whose effect on nucleosynthesis has been extensively studied [27; 28; 29; 30; 31; 32; 33; 34]. On the other hand, for \(\hat{P}_{mod}(E,V_{0},V_{1})\), we can write the enhancement factor as follows:
\[f_{mod,s}(\rho,T,V_{0},V_{1})\simeq\frac{\int e^{-E/T}\hat{P}_{mod,s}(E,\rho,T,V_{0},V_{1})dE}{\int e^{-E/T}\hat{P}_{mod}(E,V_{0},V_{1})dE}, \tag{20}\]
where \(\hat{P}_{mod,s}(E,\rho,T,V_{0},V_{1})\) includes the barrier for the Debye potential. Therefore, the enhancement factor in Eq. (20) depends on \(V_{0}\) and \(V_{1}\) as well as \(T\) and \(\rho\).
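For the conventional factor, the shifted-barrier form in Eq. (19) is equivalent to evaluating the bare PP at \(E+U\) with \(U=Z_{i}Z_{j}e^{2}/\lambda_{D}\), so \(f_{g,s}\) can be estimated with a one-dimensional quadrature; in the weak-screening limit the ratio approaches \(e^{U/T}\). A minimal sketch with an illustrative screening energy \(U\) (not a computed Debye value):

```python
import numpy as np

ALPHA, AMU, KB = 1.0 / 137.036, 931.494, 8.617e-2  # KB in MeV per GK

def f_screened(Z1, Z2, mu_amu, T9, U_MeV):
    """Enhancement factor of Eq. (18) with P_{G,s}(E) ~ P_G(E + U), Eq. (19)."""
    T = KB * T9
    E = np.linspace(1e-4, 3.0, 100_000)
    dE = E[1] - E[0]
    pg = lambda e: np.exp(-2 * np.pi * ALPHA * Z1 * Z2
                          * np.sqrt(mu_amu * AMU / (2.0 * e)))
    num = np.sum(np.exp(-E / T) * pg(E + U_MeV)) * dE
    den = np.sum(np.exp(-E / T) * pg(E)) * dE
    return num / den

# Roughly solar central temperature; U = 0.05 keV is illustrative only.
T9, U = 0.0155, 5.0e-5
print(f_screened(1, 1, 1.0, T9, U), np.exp(U / (KB * T9)))  # ~ e^{U/T}
```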
Figure 4 shows the ratio of the two enhancement factors, \(f_{mod,s}(\rho,T,V_{0},V_{1})/f_{g,s}(\rho,T)\). Since plasma effects are significant at low temperatures and high densities, both enhancement factors are most meaningful in this region. According to Fig. 4, \(f_{mod,s}(\rho,T,V_{0},V_{1})/f_{g,s}(\rho,T)\) is less than unity over the entire parameter space. This is because the increase in PP lowers the Gamow peak energy, the region that dominantly determines the enhancement factor. For the central conditions of the Sun in the standard solar model [35], Table 2 shows \(f_{mod,s}(\rho_{c},T_{c},V_{0},V_{1})/f_{g,s}(\rho_{c},T_{c})\), where \(\rho_{c}\) and \(T_{c}\) are the central density and temperature of the Sun, as an example. The difference between \(f_{mod,s}(\rho,T,V_{0},V_{1})\) and \(f_{g,s}(\rho_{c},T_{c})\) implies that the electron screening enhancement factor in the Sun (or other astrophysical environments) should be reconsidered with the present prescription. We leave this for future work.
## V Conclusion
In conclusion, in this study we present a modification to the conventional Gamow factor for reactions of light nuclei, discarding two previously adopted assumptions: 1) neglecting the nuclear interaction potential term and 2) assuming that the Coulomb barrier is much higher than the particle energy. Our results indicate that the PP is sensitive to the potential depth \(V_{0}\), which was not taken into account in the conventional Gamow factor. For the potential parameters fitted to fusion cross-section data, we also show that the modified PP is higher than the conventional one near the Coulomb barrier. This implies that the previous approximations for the Gamow factor oversimplify the PP for reactions of light nuclei. Furthermore, the increase in PP results in a lower Gamow peak energy, which also reduces the enhancement factor of the thermonuclear reaction rate due to the electron screening effect, as summarized in Table 2. Therefore, the previously estimated energy region of interest in experiments or astrophysical environments could decrease, which might be revisited in other studies, such as experiments on reactions of light nuclei, stellar nucleosynthesis, and explosive nucleosynthesis.
## Acknowledgement
The work of E.H., H.K., K.H. and M.K.C. is supported by the National Research Foundation of Korea (Grant Nos. NRF-2021R1A6A1A03043957 and NRF-2020R1A2C3006177).
D.J. was supported by the Institute for Basic Science under IBS-R012-D1.
|
2308.05232 | SegMatch: A semi-supervised learning method for surgical instrument
segmentation | Surgical instrument segmentation is recognised as a key enabler to provide
advanced surgical assistance and improve computer-assisted interventions. In
this work, we propose SegMatch, a semi-supervised learning method to reduce the
need for expensive annotation for laparoscopic and robotic surgical images.
SegMatch builds on FixMatch, a widespread semi-supervised classification
pipeline combining consistency regularization and pseudo-labelling, and adapts
it for the purpose of segmentation. In our proposed SegMatch, the unlabelled
images are weakly augmented and fed into the segmentation model to generate a
pseudo-label to enforce the unsupervised loss against the output of the model
for the adversarially augmented image on the pixels with a high confidence score.
Our adaptation for segmentation tasks includes carefully considering the
equivariance and invariance properties of the augmentation functions we rely
on. To increase the relevance of our augmentations, we depart from using only
handcrafted augmentations and introduce a trainable adversarial augmentation
strategy. Our algorithm was evaluated on the MICCAI Instrument Segmentation
Challenge datasets Robust-MIS 2019 and EndoVis 2017. Our results demonstrate
that adding unlabelled data for training purposes allows us to surpass the
performance of fully supervised approaches which are limited by the
availability of training data in these challenges. SegMatch also outperforms a
range of state-of-the-art semi-supervised learning semantic segmentation models
in different labelled to unlabelled data ratios. | Meng Wei, Charlie Budd, Luis C. Garcia-Peraza-Herrera, Reuben Dorent, Miaojing Shi, Tom Vercauteren | 2023-08-09T21:30:18Z | http://arxiv.org/abs/2308.05232v1 | # SegMatch: A semi-supervised learning method for surgical instrument segmentation
###### Abstract
Surgical instrument segmentation is recognised as a key enabler to provide advanced surgical assistance and improve computer-assisted interventions. In this work, we propose SegMatch, a semi-supervised learning method to reduce the need for expensive annotation for laparoscopic and robotic surgical images. SegMatch builds on FixMatch, a widespread semi-supervised _classification_ pipeline combining consistency regularization and pseudo-labelling, and adapts it for the purpose of _segmentation_. In our proposed SegMatch, the unlabelled images are weakly augmented and fed into the segmentation model to generate a pseudo-label to enforce the unsupervised loss against the model's output for the adversarially augmented image on the pixels with a high confidence score. Our adaptation for segmentation tasks includes carefully considering the equivariance and invariance properties of the augmentation functions we rely on. To increase the relevance of our augmentations, we depart from using only handcrafted augmentations and introduce a trainable adversarial augmentation strategy. Our algorithm was evaluated on the MICCAI Instrument Segmentation Challenge datasets Robust-MIS 2019 and EndoVis 2017. Our results demonstrate that adding unlabelled data for training purposes allows us to surpass the performance of fully supervised approaches which are limited by the availability of training data in these challenges. SegMatch also outperforms a range of state-of-the-art semi-supervised learning semantic segmentation models in different labelled to unlabelled data ratios.
## 1 Introduction
Automatic visual understanding in laparoscopic and robotic surgical videos is crucial for enabling autonomous surgery and providing advanced surgical support to clinical teams. Within this field, instrument segmentation, as shown in Figure 1, is a fundamental building block. Example use cases include automatic surgical skill assessment, placing informative overlays on the display, performing augmented reality without occluding instruments, intraoperative guidance systems, surgical workflow analysis, visual servoing, and surgical task automation [37]. Surgical instrument segmentation has advanced rapidly from traditional methods [45, 35], to modern deep learning-based methods [22, 15, 18, 28, 17]. Given the significance of this task, open datasets have been released, in particular, through the organisation of computational challenges such as Robust-MIS 2019 [37].
Most studies in this area exploit fully supervised learning, the performance of which scales with the amount of labelled training data. However, given the required expertise to provide accurate manual segmentation, the collection of annotations is costly and time-consuming. It is thus unsurprising that in comparison to industry standards for natural images, no large-scale annotated datasets for surgical tool
Figure 1: Representative sample images from Robust-MIS 2019 of laparoscopic surgery (left) and state-of-the-art instrument segmentation results (right). True positive (yellow), true negative (black), false positive (purple), and false negative (red).
segmentation currently exist. This leads to significant barriers in establishing the robustness and precision needed to deploy surgical instrument segmentation in the clinic. To tackle this challenge, a number of weak supervision approaches [39, 38, 25, 41, 50] have been proposed which take advantage of unlabelled data or image-level labels as these are easier to capture. While interesting conceptually, weak supervision for surgical instrument segmentation has not yet been demonstrated in practice to generalize to large-scale datasets and achieve the required accuracy for deployment in the operating theatre.
In this work, we propose a new semi-supervised surgical instrument segmentation framework, termed SegMatch, building upon the state-of-the-art semi-supervised image _classification_ pipeline, FixMatch [43]. During training, FixMatch processes unlabelled input images through two concurrent paths which implement weak (image flip and rotation) and strong (significant photometric changes) augmentations, respectively. Augmented images from both paths are then fed to a shared backbone prediction network. For regions of high confidence, the prediction from the weakly augmented image serves as a pseudo-ground-truth against which the strongly augmented image is compared.
In order to adapt FixMatch to the segmentation task, we make a seemingly small but critical first contribution by changing the augmentation paths in SegMatch. For classification tasks, networks are expected to be invariant with respect to all types of augmentation within a certain range tailored to the specific requirements of the task. In contrast, for segmentation tasks, networks are expected to be invariant with respect to photometric transformations (contrast, brightness, hue changes) but equivariant with respect to spatial transformations (rotations, translations, flips). In SegMatch, spatial transformations that are used as augmentations are inverted after the network prediction.
Our second and main contribution to SegMatch lies in the inclusion of a learned strong augmentation strategy. Promoting prediction consistency between the weakly augmented and strongly augmented branches is what helps FixMatch and SegMatch learn from unlabelled data and generalise better. Yet, there is no guarantee that the fixed, hand-crafted set of strong augmentation types suggested in [43] is optimal. In fact, once the network learns to be sufficiently invariant/equivariant with respect to the fixed set of augmentations, the information learned from the unlabelled data would have saturated. In order to guide the model to learn continuously as the training progresses, we introduce an adversarial augmentation scheme to generate strongly augmented data. We rely on the established iterative fast gradient sign method (I-FGSM) [21] and integrate it into our training loop to dynamically adapt the strong augmentations as we learn from the data.
We conduct extensive experiments on both the Robust-MIS 2019 [37] and EndoVis 2017 datasets [1]. Our study demonstrated that using a ratio of 1:9 or 3:7 of labelled data to unlabelled data within a dataset allowed our model to achieve a considerably higher mean Dice score compared to state-of-the-art semi-supervised methods. Our method shows a significant improvement in surgical instrument segmentation compared to existing fully supervised methods by utilizing a selective set of 17K unlabelled images available in the Robust-MIS 2019 dataset in addition to the annotated subset which most competing methods have exploited.
## 2 Related work
### Semi-supervised learning
Pseudo-labelling [49, 32] is a representative approach for semi-supervised learning. A model trained on labelled data is utilized to predict pseudo-labels for unlabelled data. This in turn provides an extended dataset of labelled and pseudo-labelled data for training. Consistency regularization [47, 34] is also a widespread technique in semi-supervised learning. There, an auxiliary objective function is used during training to promote consistency between several model predictions, where model variability arises from techniques such as weight smoothing or noise injection.
Berthelot [4] introduced MixMatch to incorporate both consistency regularization and the Entropy Minimization strategy of Grandvalet and Bengio [14] into a unified loss function for semi-supervised image classification. Aiming at providing a simple yet strong baseline, Sohn [43] introduced FixMatch to combine consistency regularization and pseudo-labelling and achieve state-of-the-art performance on various semi-supervised learning benchmarks.
**Semi-supervised semantic segmentation:** Adapting semi-supervised learning for image segmentation tasks requires dedicated strategies that account for the fact that labels are provided at the pixel level. Previous works explored the adaptation to semantic segmentation of classical semi-supervised learning including pseudo-labelling [20, 24], and consistency regularization [10, 6]. Ouali [30] proposed cross-consistency training (CCT) to force the consistency between segmentation predictions of unlabelled data obtained from a main decoder and those from auxiliary decoders. Similarly, Chen [7] exploited a novel consistency regularization approach called cross pseudo-supervision such that segmentation results of variously initialised models with the same input image are required to have high similarity. Previous work also investigated the use of generative models to broaden the set of unlabelled data from which the segmentation model can learn [44].
Despite such advances, current semi-supervised semantic segmentation models derived from classification models
have not yet demonstrated the performance gains observed with FixMatch for classification. We hypothesise that an underpinning reason is that they do not adequately address the issue of transformation equivariance and invariance and do not exploit modern augmentation strategies as efficiently as FixMatch.
### Surgical instrument segmentation
The majority of surgical instrument segmentation works are supervised methods [18, 12, 28, 33, 19, 42]. Numerous studies have explored different methods to improve the accuracy of surgical instrument segmentation. For instance, Islam [18] took advantage of task-aware saliency maps and the scan path of instruments in their multitask learning model for robotic instrument segmentation. Using optical flow, Jin [19] derived a temporal prior and incorporated it into an attention pyramid network to improve segmentation accuracy. Recently, Gonzalez [12] proposed an instance-based surgical instrument segmentation network (ISINet) with a temporal consistency module that takes into account the instance's predictions across the frames in a sequence. Seenivasan [42] exploited global relational reasoning for multi-task surgical scene understanding, which enables instrument segmentation and tool-tissue interaction detection. Yet, the use of unlabelled data for this task remains relatively untapped.
**Surgical instrument segmentation with limited supervision:** A relatively small number of works have exploited surgical instrument segmentation with limited supervision. Jin [19] transferred predictions of unlabelled frames to those of their adjacent frames in a temporal prior propagation-based model. Liu [25] proposed an unsupervised approach that relies on handcrafted cues including color, object-ness, and location to generate pseudo-labels for background tissues and surgical instruments, respectively. More recently, Sanchez [41] used image-level labels in a multi-task learning framework to jointly detect and segment surgical instruments. Sahu [39] relies on inherently annotated simulation data and unlabelled real data to train a teacher-student model for simulation-to-real domain adaptation in endoscopic image segmentation. Liu [26] introduced a graph-based unsupervised method for adapting a surgical instrument segmentation model to a new domain with only unlabelled data.
We note that most of the existing works with limited supervision focus on exploring domain adaptation or generating different types of pseudo-labels for surgical tool segmentation models and do not exploit a FixMatch-style semi-supervised learning.
### Adversarial learning for improved generalisation
Deep neural networks (DNNs) have been found to be vulnerable to some well-designed input samples, known as adversarial samples. Adversarial perturbations are hard to perceive for human observers, but they can easily fool DNNs into making wrong predictions. The study of adversarial learning concerns two aspects: 1) how to generate effective adversarial examples to attack the model [13]; 2) how to develop efficient defence techniques to protect the model against adversarial examples [16]. A model which is robust to adversarial attacks is also more likely to generalise better [2]. As such, we hypothesise that adversarial methods may be of relevance for semi-supervised learning.
For adversarial attacks, the earliest methods [46, 13] rely on the gradient of the loss with respect to the input image to generate adversarial perturbations. For instance, the fast gradient sign method (FGSM) [46] perturbs the input along the gradient direction of the loss function to generate adversarial examples. Tramer [48] improved FGSM by adding a randomization step to escape the non-smooth vicinity of the data point for better attacking. The basic iterative method (BIM) [21], which is also referred to as iterative FGSM (I-FGSM), improves FGSM by performing multiple smaller steps to increase the attack success rate. Carlini [5] introduced a method now referred to as C&W, which solves a constrained optimisation problem minimising the size of the perturbation while ensuring a wrong prediction after perturbation. Recently, Rony [36] proposed a more efficient approach to generate gradient-based attacks with low L2 norm by decoupling the direction and norm (DDN) of the adversarial perturbation. For adversarial defence, adversarial training [27] is a seminal method which generates adversarial examples on the fly and trains the model against them to improve the model's robustness. Other defence methods include relying on generative models [40] and leveraging induced randomness to mitigate the effects of adversarial perturbations in the input domain [8].
Although adversarial attacks have the potential to enhance the performance and robustness of deep learning models, they have not yet been applied to semi-supervised learning methods for semantic segmentation. As we show later in this work, adversarial attacks can indeed be used to effectively augment unlabelled images used for consistency optimization.
## 3 Methods
### Proposed SegMatch algorithm: overview
Our proposed SegMatch algorithm adapts the state-of-the-art semi-supervised image _classification_ framework, FixMatch [43], to semi-supervised semantic _segmentation_.
Our application primarily targets the segmentation of surgical instruments but also has the potential to be utilized for various other semantic segmentation tasks. We follow the basic architecture of FixMatch as illustrated in Figure 2. During training, SegMatch uses a supervised pathway and an unsupervised pathway which share the same model parameters \(\theta\) for segmentation prediction. For the supervised pathway, the prediction from the labelled image is classically optimized against its ground truth using a standard supervised segmentation loss. For the unsupervised pathway, given an unlabelled image, we follow FixMatch and process the input image through two concurrent paths, which implement weak and strong augmentations respectively. The augmented images are then fed to the model to obtain two segmentation proposals. The output from the weak augmentation branch is designed to act as the pseudo-ground-truth for that from the strong augmentation branch.
As detailed in Section 3.2, segmentation models are typically expected to be invariant with respect to bounded photometric transformations and equivariant with respect to spatial transformations of the input images. We use simple spatial transformations for our weak augmentations and propose to apply the inverse spatial transformation after the network prediction to handle spatial transformation equivariance. We use photometric transformations for our strong augmentations and exploit a pixel-wise loss promoting consistency between the segmentation outputs of the two branches.
Given the complexity of determining the suitable range of parameters for hand-crafted strong augmentation, we propose a solution that involves a learning-based approach. As detailed in Section 3.3, in order to gradually increase the difficulty of the prediction from the strongly-augmented branch, we introduce a learnable adversarial augmentation scheme in the strong augmentation branch.
### Weak augmentations, equivariance, pseudo-labels
**Equivariance and invariance in SegMatch:** We start by introducing the notion of equivariance and invariance illustrated in Figure 3. Let us consider a function \(f_{\theta}\) (standing for the neural network, \(\theta\) being the parameters of the network) with an input \(x\) (standing for the input image), and a class of transformation functions \(\mathcal{G}\) (standing for a class of augmentations). If applying \(g\in\mathcal{G}\) (standing for a specific augmentation instance) to the input \(x\) also reflects on the output of \(f_{\theta}\), that is \(f_{\theta}(g(x))=g(f_{\theta}(x))\), then the function \(f_{\theta}\) is said to be equivariant with respect to transformations in \(\mathcal{G}\) (see Figure 3: Left). Conversely, if applying \(g\in\mathcal{G}\) to the input \(x\) does not affect the output of \(f_{\theta}\), that is \(f_{\theta}(g(x))=f_{\theta}(x)\), then the function \(f_{\theta}\) is said to be invariant with respect to transformations in \(\mathcal{G}\) (see Figure 3: Right).
For the classification task, the model is expected to be invariant to all the augmentation strategies. In contrast, given a segmentation model, spatial transformations on the input image should reflect on the output segmentation map while photometric transformations should not. Segmentation models should thus be equivariant with respect to spatial transformations and invariant with respect to photometric transformations. In FixMatch, weak augmentations only comprise simple spatial transformations such as rotation and flip, which preserve the underlying structure of the image. Meanwhile, strong augmentations only comprise photometric transformations such as posterizing and sharpness changes as provided by the RandAugment [9] algorithm, which shifts the intensity distribution of the original image.
**Inverting transformations from the weak augmentations:** Similar to FixMatch, given an input unlabelled image, we randomly select one simple spatial transformation (rotation, flip, crop, or resize) to apply to it in the weak augmentation branch. In order for our SegMatch to take advantage of a consistency loss between the weak augmentation branch where spatial transformations are used (with expected equivariance of the segmentation) and the strong augmentation branch where photometric transformations are used (with expected invariance of the segmentation), we deploy an inverse spatial transformation on the output of the segmentation model in the weak augmentation branch.
Given an unlabelled image \(x^{u}\), we denote its weak augmentation as \(\omega_{e}(x^{u})\). It is fed to the network \(f_{\theta}\) to obtain the segmentation output \(f_{\theta}(\omega_{e}(x^{u}))\). We apply the inverse transformation \(\omega_{e}^{-1}\) to \(f_{\theta}(\omega_{e}(x^{u}))\), _i.e._\(\omega_{e}^{-1}(f_{\theta}(\omega_{e}(x^{u})))\). This addresses the equivariance expectation and allows for the output of the weak augmentation branch to be easily compared with the segmentation output from the strongly-augmented image.
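A minimal PyTorch sketch of this equivariance handling, restricted to flips and 90-degree rotations (crops and resizes would additionally require storing and inverting their geometry); this is an illustrative reimplementation, not the exact training code:

```python
import torch

def weak_aug_with_inverse():
    """Sample one spatial weak augmentation and return it with its inverse."""
    k = int(torch.randint(0, 4, (1,)))       # number of 90-degree rotations
    flip = bool(torch.randint(0, 2, (1,)))   # whether to flip horizontally
    def omega(t):                            # weak augmentation on NCHW tensors
        t = torch.rot90(t, k, dims=(-2, -1))
        return torch.flip(t, dims=(-1,)) if flip else t
    def omega_inv(t):                        # exact inverse transformation
        t = torch.flip(t, dims=(-1,)) if flip else t
        return torch.rot90(t, -k, dims=(-2, -1))
    return omega, omega_inv

# p_w = omega_inv(f_theta(omega(x_u))) realises the equivariance handling
x_u = torch.randn(2, 3, 64, 64)
omega, omega_inv = weak_aug_with_inverse()
assert torch.allclose(omega_inv(omega(x_u)), x_u)
```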
**Soft pseudo-label generation:** Following the inverse transformation, we obtain a segmentation prediction in logit space \(p_{w}=\omega_{e}^{-1}(f_{\theta}(\omega_{e}(x^{u})))\). For the \(i\)-th pixel in \(p_{w}\), _i.e_\(p_{w_{i}}\), a pseudo-label is extracted by applying a sharpened softmax:
\[\widetilde{y}_{i}=\text{Sharpen}(\text{Softmax}(p_{w_{i}}),T) \tag{1}\]
where the distribution sharpening operation of [4] is defined as \(\text{Sharpen}(d,T)_{i}:=d_{i}^{\frac{1}{T}}/\sum_{j=1}^{L}d_{j}^{\frac{1}{T}}\), with \(T\) being a temperature hyper-parameter. The sharpening operation allows us to control the level of confidence in the resulting probabilities.
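In code, Eq. (1) and the confidence selection used later in Eq. (4) amount to a few lines; the values of \(T\) and \(t\) below are illustrative placeholders:

```python
import torch

def sharpen(d, T):
    """Sharpen(d, T)_i = d_i^(1/T) / sum_j d_j^(1/T), over the class dimension."""
    p = d ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def make_pseudo_labels(p_w, T=0.5, t=0.8):
    """Soft pseudo-labels (Eq. 1) and the confidence mask used in Eq. (4).

    p_w: logits of shape (N, L, H, W) from the weak branch after the
    inverse spatial transform.
    """
    probs = torch.softmax(p_w, dim=1)
    y_tilde = sharpen(probs, T)
    mask = probs.max(dim=1).values >= t  # pixels allowed to contribute to L_u
    return y_tilde, mask

p_w = torch.randn(2, 2, 64, 64)  # binary tool/background example
y_tilde, mask = make_pseudo_labels(p_w)
```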
### Trainable strong augmentations
To tackle the generalization problem typically faced by convolutional neural networks, previous work has employed strong augmentation techniques [43, 4, 3]. However, these augmentations are hand-crafted, and designing realistic strong augmentations to bridge large and complex domain gaps is challenging [11]. This challenge is further exacerbated in segmentation tasks, which are highly sensitive to pixel-level perturbations due to their pixel-level prediction nature [29]. For these reasons, despite utilizing powerful hand-crafted augmentations during training, existing methods still demonstrate limited generalization capabilities. In this section, we propose to tackle these key limitations by learning to perform data augmentation using adversarial perturbations during model training.
**Strong Augmentation Initialization:** Rather than completely replacing the strong augmentation approach in FixMatch, we build on it to serve as an initialization which will be further perturbed. We chose the strong augmentations from the operations in RandAugment [9] based on two criteria. First, we focus on photometric transformations only, as these satisfy the invariance expectation and do not require the use of an inverse transformation, as discussed in Section 3.2. Second, we select rather basic transformations that provide visually plausible augmentations, thereby leaving the more complex changes to the trainable refinement. More specifically, our initial augmentation for the strong augmentation branch is a composition of three transformations randomly chosen from a collection of handcrafted photometric augmentation strategies. These include adjustments to contrast, brightness, colour, and sharpness, as well as the addition of random noise, posterization, and solarization. The strength of the individual transformations is chosen according to predefined hyper-parameters.
**Adversarial augmentation approach:** As a simple yet powerful adversarial method, we use the iterative fast gradient sign method (I-FGSM) [21], which applies multiple gradient updates in small iterative steps.
I-FGSM is based on FGSM, which provides an adversarial perturbation to the input image in a single gradient-based operation. In FGSM, the direction of the perturbation is computed from the gradient of the loss with respect to the input image. The magnitude of the gradient is discarded by keeping only the sign along each dimension. A scaling factor is applied to keep the perturbation small. To compute
Figure 3: Equivariance (left) and invariance (right) properties for an image augmented under different types of augmentations: spatial (left) or photometric (right).
Figure 2: SegMatch training process structure. The top row is the fully supervised pathway, which follows the traditional segmentation model training process. The two bottom rows form the unsupervised learning pathway, where one branch feeds a weakly augmented image into the model to compute predictions, and the second branch obtains the model prediction for a strongly augmented version of the same image. The model parameters are shared across the two pathways. Hand-crafted photometric augmentation methods are used to initialize the strongly augmented image, which is further perturbed by an adversarial attack (I-FGSM) for \(K\) iterations.
a more refined perturbation, I-FGSM applies FGSM multiple times in smaller steps. The perturbations are clipped to make sure they stay within the \(\epsilon\)-neighbourhood of the original image. The I-FGSM equation for iteration \(k+1\) out of \(K\) is as follows:
\[x_{k+1}^{s}=\text{Clip}_{x_{0}^{s},\epsilon}\{x_{k}^{s}+\frac{\epsilon}{K}\cdot \text{Sign}(\nabla_{x_{k}^{s}}(L_{u}(f_{\theta}(x_{k}^{s}),\widetilde{y})))\} \tag{2}\]
where \(x_{0}^{s}\) represents the initial strongly-augmented image; \(\widetilde{y}\) is the pseudo-label obtained from the weak augmentation branch; \(\text{Clip}\{\cdot\}\) is the clipping function which applies to every pixel in the perturbation image to limit the difference between \(x_{K}^{s}\) and \(x_{0}^{s}\) and keep it within an \(\epsilon\)-neighbourhood; and \(L_{u}\) is the unsupervised loss function defined in Eq. (4). The magnitude of the perturbation \(\epsilon\) and the number of I-FGSM steps \(K\) are hyper-parameters to adjust the quality of the adversarial approach.
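A minimal PyTorch sketch of the update in Eq. (2), attacking the masked cross-entropy of Eq. (4); the value of \(K\) here is illustrative, and the confidence mask is assumed to be non-empty:

```python
import torch

def ifgsm_augment(model, x0, y_tilde, mask, eps=0.08, K=3):
    """I-FGSM refinement of the strongly augmented image, Eq. (2)."""
    x = x0.clone()
    for _ in range(K):
        x = x.detach().requires_grad_(True)
        logp = torch.log_softmax(model(x), dim=1)
        ce = -(y_tilde * logp).sum(dim=1)        # per-pixel CE vs. soft labels
        loss = ce[mask].mean()                   # masked unsupervised loss, Eq. (4)
        grad, = torch.autograd.grad(loss, x)
        x = x + (eps / K) * grad.sign()          # ascend the loss
        x = x0 + torch.clamp(x - x0, -eps, eps)  # Clip_{x0, eps}
    return x.detach()
```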
### Loss functions in SegMatch
In this work, the training objective for the supervised pathway is the standard pixel-wise cross-entropy loss (\(l_{CE}\)) combined with Dice loss (\(l_{DSC}=1-DSC\)), where \(DSC\) represents the Dice coefficient:
\[L_{s}=\frac{1}{|\mathcal{D}^{l}|}\sum_{x^{l}\in\mathcal{D}^{l}}\Big{(}l_{DSC} \big{(}y^{l},f_{\theta}(x^{l})\big{)}+\frac{1}{N}\sum_{i=0}^{N-1}l_{CE}\big{(} y_{i}^{l},f_{\theta}(x^{l})_{i}\big{)}\Big{)} \tag{3}\]
where \(x^{l}\) is a labelled input from the labelled set \(\mathcal{D}^{l}\); \(y^{l}\) is the corresponding ground-truth label; \(x_{i}^{l}\) and \(y_{i}^{l}\) denote the \(i^{\text{th}}\) pixel of \(x^{l}\) and \(y^{l}\), respectively; and \(N\) is the number of pixels in \(x^{l}\).
The training objective for the unsupervised pathway is a cross-entropy loss calculated between a subset of confident pixel-level pseudo-labels \(\widetilde{y}_{i}\) stemming from the weak augmentation branch and the output probability \(p_{i}\) from the strongly augmented image:
\[L_{u}=\frac{1}{|D_{u}|}\sum_{x^{u}\in D_{u}}\frac{1}{|N_{v}^{x^{u}}|}\sum_{i \in N_{v}^{x^{u}}}l_{CE}(\widetilde{y}_{i},p_{i}) \tag{4}\]
where \(x^{u}\) is an unlabelled input from the unlabelled set \(\mathcal{D}^{u}\); \(c\) denotes a specific class; and \(N_{v}^{x^{u}}\) is the set of pixel indices whose confidence score for the most confident class, \(\max_{c}(p_{w_{i}}^{c})\), is higher than or equal to a hyper-parameter threshold \(t\), i.e. \(N_{v}^{x^{u}}=\{i\,|\,\max_{c}(p_{w_{i}}^{c})\geq t\}\).
The final loss is given by:
\[L=L_{s}+w(t)L_{u} \tag{5}\]
where, following [23], \(w(t)\) is an epoch-dependent weighting function which starts from zero and ramps up along a Gaussian curve so that the supervised loss contributes to the total loss more at the beginning and the unsupervised loss increases contribution in subsequent training epochs.
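Putting the pieces together, the total objective of Eq. (5) can be assembled as below; the Gaussian ramp-up \(w(t)=w_{max}e^{-5(1-t)^{2}}\) is a common choice consistent with the description above, and its length and maximum weight are illustrative hyper-parameters, not values stated in the text:

```python
import math

def rampup_weight(epoch, rampup_epochs=200, w_max=1.0):
    """Gaussian ramp-up: w(t) = w_max * exp(-5 * (1 - t)^2), t in [0, 1]."""
    t = min(epoch, rampup_epochs) / rampup_epochs
    return w_max * math.exp(-5.0 * (1.0 - t) ** 2)

def total_loss(L_s, L_u, epoch):
    """Eq. (5): L = L_s + w(t) * L_u."""
    return L_s + rampup_weight(epoch) * L_u
```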
## 4 Experimental setup
### Dataset
**Robust-MIS 2019:** Robust-MIS 2019 is a laparoscopic instrument dataset including procedures in rectal resection, proctocolectomy, and sigmoid resection to detect, segment, and track medical instruments based on endoscopic video images [37]. The training data encompasses 10-second video snippets in the form of 250 consecutive endoscopic image frames, with the reference annotation provided only for the last frame. In total, 10,040 annotated images are available from a total of 30 surgical procedures from three different types of surgery.
As per the original challenge, the samples used for training were exclusively taken from the proctocolectomy and rectal resection procedures. These samples comprise a total of 5983 clips, with each clip having only one annotated frame while the remaining 249 frames are unannotated.
The testing set is divided into three phases as per the original challenge, with no patient overlap between the training and test datasets. _Stage 1:_ 325 images from the proctocolectomy procedure and another 338 images from the rectal resection procedure. _Stage 2:_ 225 images from the proctocolectomy procedure and 289 from the rectal resection procedure. _Stage 3:_ 2880 annotated images from sigmoid resection, an unseen type of surgery which appears only in the testing stage and not in the training stage.
**EndoVis 2017:** EndoVis 2017 is a robotic instrument image dataset captured from robotic-assisted minimally invasive surgery [1], which comprises a collection of 10 recorded sequences capturing abdominal porcine procedures. For the training phase, the first 225 frames of 8 sequences were captured at a rate of 2Hz and manually annotated with information on tool parts and types. The testing set consists of the last 75 frames from the 8 sequences used in the training data videos, along with 2 full-length sequences of 300 frames each, which have no overlap with the training phase. To prevent overlap between the training and test sets from the same surgical sequence, we followed the same split as described in ISINet [12]. This involved exclusively assigning the 2 full-length sequences for testing while keeping the training set intact with 225 frames from the remaining 8 sequences. Note that there are no additional unannotated images for EndoVis 2017.
**Dataset usage for semi-supervised learning evaluation:** Since the above challenges were designed for fully supervised benchmarking, we make some adaptations to evaluate SegMatch with respect to competing semi-supervised (and fully supervised) methods.
The Robust-MIS 2019 dataset was split into training and testing sets based on the original challenge splits. The three original challenge testing stages were merged to form a single combined testing set, comprising 4057 images. For training, we started with the full set of 5,983 labelled original challenge training images and the corresponding 17,617 unlabelled images. In our first experiments, we used only the 5,983 labelled original challenge training images and kept only 10% or 30% of the annotations. This allows for a comparison with supervised methods having access to all 5,983 labelled images. To further compare with state-of-the-art supervised methods, we also conducted experiments using all 5,983 images of the training set as the labelled set, with 17,617 additional unlabelled frames from the original videos.
For EndoVis 2017, as no additional unlabeled data is available, the original training set, which has 1800 images in total, is split into labelled and unlabelled subsets with ratios 1:9 or 3:7.
### Implementation details
During training, for each batch, the same number of images is sampled from the labelled dataset \(D_{l}\) and the unlabelled dataset \(D_{u}\). Within each batch, unlabelled samples are initialized with randomly sampled hand-crafted strong augmentations and then adversarially updated by adding the I-FGSM perturbation.
All our experiments were trained on two NVIDIA V100 GPUs (32 GB). The model was trained using an SGD optimizer with a momentum of 0.95, which we found provides a smooth convergence trajectory (unreported manual tuning involved experiments with momentum values from 0.9 to 0.99). The learning rate was initialized as \(1.0\times 10^{-2}\) and decayed with the policy \(lr_{ini}\times(1-epoch/epoch_{max})\times\eta\), where \(epoch_{max}\) is the total number of epochs and \(\eta\) is set to 0.7. The total number of training epochs is 1000, and the batch size was set to 64 considering memory constraints and training efficiency.
In the final model, adversarial augmentation was applied with a magnitude of \(\epsilon=0.08\). Additionally, two types of initial strong augmentation techniques were utilized with pre-defined minimum and maximum magnitude values, taken specifically from the photometric transformations of the RandAugment [9] method.
For the segmentation model backbone, we employed the state-of-the-art OR-Unet [17]. Our OR-Unet contains 6 downsampling and 6 upsampling blocks where the residual blocks were employed in the encoder and the sequence Conv-BN-Relu layers with kernel size \(3\times 3\) were used in the decoder.
### Evaluation metrics
We evaluated our model based on the criteria proposed in the Robust-MIS 2019 MICCAI challenge [37], which include:
* Dice Similarity Coefficient, a widely used overlap metric in segmentation challenges (a minimal computation sketch is given after this list);
* Normalized Surface Dice (NSD) [37], a distance-based performance measure which assesses the overlap of two surfaces (i.e. mask borders). In adherence to the challenge guidelines [37], we set the tolerance value for NSD to 13 pixels, which takes into account inter-rater variability. Note that the value of the tolerance was determined by comparing annotations from five annotators on 100 training images, as described in the challenge.
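As referenced above, a minimal sketch of the Dice computation for binary masks (conventions for two empty masks vary; NSD additionally requires surface-distance computations and is omitted here):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks of shape (H, W)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# toy example: two overlapping square masks
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice_score(a, b))  # 9 overlapping pixels out of 16 + 16 -> 0.5625
```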
## 5 Results and discussion
### Comparison With State-of-the-Art Models
We compare our results to the state-of-the-art on Robust-MIS 2019 and EndoVis 2017 datasets. We categorize comparisons into two groups. First, a head-to-head comparison is made with other semi-supervised methods. Second, we measure the added value of incorporating unlabelled data in addition to using the complete labelled training data in Robust-MIS 2019.
**Comparison with semi-supervised baselines:** For the first group, we adapted the representative semi-supervised classification method Mean-Teacher [47] for the segmentation task using the same backbone network and experimental setting as ours. We also conducted experiments using two existing semi-supervised semantic segmentation models: WSSL [31] and CCT [30]. WSSL is an established and representative benchmark for semi-supervised semantic segmentation. CCT ensures consistency among high-level features with different contexts, which is accomplished by aligning perturbed features with the main features. The motivation behind CCT is thus similar to ours, while CCT requires several decoders and additional computational resources. Illustrative segmentation results for the above methods on Robust-MIS 2019 are presented in Figure 4.
Table 1 shows that, within the dataset of Robust-MIS 2019, for the two labelled to unlabelled data ratios we tested, our SegMatch outperforms other methods with statistical significance (p\(<\)0.05). Comparing SegMatch to the second-best method, CCT, we observed notable improvements in performance. Specifically, when using 10% and 30% labelled data in the Robust-MIS 2019 dataset, SegMatch achieved mean Dice score improvements of 6.2 percentage points (pp) and 9.1 pp, respectively. Similar observations were made on the EndoVis 2017 dataset, where SegMatch outperformed CCT by 4 pp and 4.6 pp when utilizing 10% and 30% labelled data, respectively.
These results demonstrate the superior performance of SegMatch in both datasets and both labelled to unlabelled data ratios, highlighting the effectiveness of our proposed method in scenarios with limited labelled data. Note that,
the training dataset only consists of the proctocolectomy and rectal resection procedures, and the sigmoid resection procedure is considered a new type of data for the trained model. Qualitatively, our proposed SegMatch is able to recognize the boundaries between different tools and segment more complete shapes for each individual tool, especially in areas with high reflection.
**Added value of unlabelled data:** For the second group, we used the whole labelled training set of Robust-MIS 2019 as the labelled set and took advantage of the unlabelled video footage available from the training set of Robust-MIS 2019 to evaluate the impact of adding unlabelled data to an already large labelled dataset. We included 17,617 randomly selected unlabelled images from the training clips as the unlabelled set.
Table 2 shows the comparison between supervised approaches trained on the labelled set and SegMatch trained on the combination of the labelled and unlabelled sets. Compared to the existing model OR-Unet [17], the inclusion of additional unlabelled data in our semi-supervised pipeline SegMatch led to a 5.7 pp higher Dice score. Our model also demonstrates a noteworthy enhancement of 4.8 pp in comparison to the more recent ISINet [12], which is now commonly employed for surgical instrument segmentation. When evaluating the stage 3 testing data, which corresponds to a surgical procedure that was not seen during the training phase, our SegMatch model outperformed the official Robust-MIS 2019 challenge winner for stage 3 (haoyun team) by a margin of 3.9 pp. It is noteworthy that the performance improvement of SegMatch over fully supervised baselines is more substantial on stage 3 testing data (unseen procedure types) than on stage 1 and 2 data (procedure types represented in the training data). This indicates that our model exhibits better generalizability, which enables it to handle diverse surgical scenarios more effectively.
### Ablation and parameter sensitivity study
To evaluate the contribution of the different components of our pipeline, we conducted ablation and parameter sensitivity studies on different strong augmentation strategies and adversarial strong augmentation methods.
**Analysis of our semi-supervised method:** We evaluate the contribution of the semi-supervised learning in SegMatch by training only its fully-supervised branch, essentially turning it into OR-UNet. As discussed previously and shown in Table 2, disabling the unlabelled pathway leads to a drop in Dice and NSD scores, thereby confirming the benefits of semi-supervised learning.
We also studied the effect of varying the confidence threshold used when generating the pseudo-labels, as shown in Figure 5. We observe that when the confidence threshold approaches 1.0, the model returns the worst segmentation performance. When the threshold value is varied within the range \([0.7,0.9]\), it does not affect the model's performance significantly. However, it should be noted that further reducing the threshold value leads to a rapid decrease in the mean Dice score, which indicates that the quality of the pixels contributing to the unsupervised loss may be more important than their quantity.
**Augmentation strategy:** We also conducted an ablation study on the weak and strong augmentations in SegMatch, as tabulated in Table 3. First, we removed the adversarial augmentation, thereby keeping only handcrafted strong augmentations. Notably, we observed consistent results across different labelled data ratios. Specifically, when utilizing the 17K additional unlabelled images, we found that the Dice score decreased by 3.1 pp compared to the full proposed SegMatch model. This suggests that applying adversarial augmentations can prevent the learning saturation caused by handcrafted strong augmentations on unlabelled data, thereby enabling the model to learn continuously. When both the weak augmentation and adversarial augmentation are removed, the results drop by an additional 1.9 pp compared to only removing the adversarial augmentation, indicating that applying the weak augmentation function to the input image, which generates new and diverse examples for the training dataset, enhances the segmentation performance.
To evaluate the effectiveness of the overall strong augmentation strategies, we replaced the strongly augmented images with the original images. The resulting Dice score dropped by 5.1 pp compared to only removing the adversarial augmentation, and by 3.2 pp compared to removing both the weak and adversarial augmentation. The evidence that
Figure 4: Segmentation results on exemplar images from three different procedures in the testing set. Here, SegMatch, CCT, and WSSL were trained using the whole labelled training set of Robust-MIS 2019 as the labelled set, plus 17K additional unlabelled frames from the original videos. The fully supervised learning models (OR-UNet and ISINet) were trained using the whole labelled training set of Robust-MIS 2019 as the labelled set. The first column is the ground truth mask overlaid on the original image, and the other columns are the segmentation results of ablation models of SegMatch and state-of-the-art models. The three rows, from top to bottom, are testing image samples from the proctocolectomy procedure, the sigmoid resection procedure (unseen type), and the rectal resection procedure, respectively.
removing strong augmentation results in a greater decrease in performance than removing weak augmentation suggests that stronger perturbations are beneficial for learning with consistency regularization. Additionally, when strong augmentation is removed, the model is presented with the same input image from different views, which reduces the benefits of consistency regularization. In this scenario, pseudo-labelling becomes the primary technique for achieving better segmentation performance. Therefore, our results suggest that consistency regularization is more crucial than pseudo-labelling in our pipeline for improving segmentation performance.
**Adversarial augmentation analysis:** We further evaluated the sensitivity of our results to changes in the maximum amplitude value \(\epsilon\) of the adversarial perturbation, as shown in Figure 6 and qualitatively illustrated in Figure 7. When increasing from \(\epsilon\) = 0.0 (i.e., no perturbation), we observed a consistent pattern across different ratios of labelled data for both FGSM and I-FGSM. Initially, as \(\epsilon\) increased, the segmentation performance improved, reaching its peak at approximately \(\epsilon\) = 0.08. However, beyond this optimal point, the performance started to decline, indicating that stronger perturbations can enhance the model's performance only within a certain range. In this work, by precisely defining the acceptable range of perturbations, we restrict the perturbations within the \(\epsilon\)-neighbourhood, which ensures that the integrity of the instrument pixels is preserved while introducing subtle variations that aid the model's generalization. Comparing FGSM and I-FGSM, we found that I-FGSM showed superior performance to FGSM before reaching the optimal \(\epsilon\) value. However, after the optimal point, I-FGSM exhibited a more significant decrease in the model's performance compared to FGSM, which suggests that I-FGSM has a higher attack success rate than FGSM but becomes more harmful to the model when the attack amplitude is large. We also varied the number of I-FGSM iterations, as shown in Table 4 and qualitatively illustrated in Figure 7. Increasing the number of iterations in the I-FGSM attack shows a minor improvement, consistent with the expectation that it increases the attack success rate. However, it is important to consider that this small improvement in segmentation performance through increased iterations comes with a trade-off in computational efficiency.
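As a concrete reference for the I-FGSM procedure discussed above, the following is a minimal sketch under the assumption of image tensors scaled to \([0,1]\); the step-size schedule (`eps / steps`) and the function name are illustrative choices, not necessarily those used in SegMatch.

```python
import torch
import torch.nn.functional as F

def ifgsm_perturb(model, x, y, eps=0.08, steps=5):
    # Iterative FGSM: take small signed-gradient steps and project the
    # result back into the eps-neighbourhood of the original image x.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # epsilon-ball projection
        x_adv = x_adv.clamp(0.0, 1.0)              # keep valid pixel range
    return x_adv.detach()
```

Setting `steps=1` with step size `eps` recovers the single-step FGSM baseline, which matches the observation that the one-step and iterative variants differ mainly in attack success rate.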
We also conducted experiments using various adversarial learning methods, as shown in Table 4.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Robust-MIS 2019 (5983 training images)} & \multicolumn{4}{c}{EndoVis 2017 (5286 training images)} \\ \cline{2-9} & \multicolumn{2}{c}{Labelled: 598 (10\%)} & \multicolumn{2}{c}{Labelled: 1794 (30\%)} & \multicolumn{2}{c}{Labelled: 528 (10\%)} & \multicolumn{2}{c}{Labelled:1585 (30\%)} \\ \cline{2-9} & Mean Dice & NSD & Mean Dice & NSD & Mean Dice & NSD & Mean Dice & NSD \\ \hline Mean-teacher [47] & 62.1 & 61.8 & 70.2 & 69.0 & 51.6 & 52.7 & 60.2 & 59.9 \\ WSSL[31] & 64.3 & 62.6 & 72.1 & 73.2 & 59.2 & 60.4 & 67.5 & 70.2 \\ CCT[30] & 67.1 & 65.2 & 75.2 & 72.6 & 61.2 & 58.7 & 69.6 & 65.5 \\ \hline SegMatch & 73.3 \(\pm\) 0.42 & 71.2 \(\pm\) 0.48 & 84.3 \(\pm\) 0.61 & 80.2 \(\pm\) 0.51 & 65.2 \(\pm\) 0.26 & 66.3 \(\pm\) 0.42 & 74.2 \(\pm\) 0.33 & 71.0 \(\pm\) 0.45 \\ \hline \hline \end{tabular}
\end{table}
Table 1: State-of-the-art semi-supervised model comparisons for the Robust-MIS 2019 dataset (left) and the EndoVis 2017 dataset (right) under different labelled-to-unlabelled data ratios
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Labelled: 5983 training images (+17K unlabelled if semi-supervised)} \\ \cline{2-9} & \multicolumn{2}{c}{Whole Testing} & \multicolumn{2}{c}{Stage 1} & \multicolumn{2}{c}{Stage 2} & \multicolumn{2}{c}{Stage 3} \\ \cline{2-9} & Mean Dice & NSD & Mean Dice & NSD & Mean Dice & NSD & Mean Dice & NSD \\ \hline OR-Unet [17]* & 88.0 & 86.2 & 90.2 & 88.5 & 87.9 & 85.6 & 85.9 & 84.5 \\ Robust-MIS 2019 winner [37]* & 90.1 & 88.9 & 92.0 & 92.7 & 90.2 & 88.6 & 89.0 & 86.4 \\ ISINet [12]* & 88.9 & 86.3 & 90.9 & 87.6 & 89.6 & 86.5 & 86.2 & 84.7 \\ \hline SegMatch & 93.7 \(\pm\) 0.28 & 93.6 \(\pm\) 0.24 & 95.1 \(\pm\) 0.23 & 95.5 \(\pm\) 0.19 & 93.1 \(\pm\) 0.49 & 92.5 \(\pm\) 0.31 & 92.9 \(\pm\) 0.12 & 92.8 \(\pm\) 0.22 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison on the Robust-MIS 2019 dataset between fully supervised models and SegMatch with additional unlabelled data. (* indicates only labelled data was used)
Figure 5: Mean Dice score produced by varying the confidence threshold for pseudo-labels
Our findings indicate that the performance achieved by one-step attack methods is consistently lower than that of our iterative strategy, which suggests that when attempting to manipulate image samples to create adversarial examples, breaking the attack down into smaller steps can improve the overall success rate, as noted in previous research [21].
## 6 Conclusion
In this paper, we introduced SegMatch, a semi-supervised learning algorithm for surgical tool segmentation that achieves state-of-the-art results across two of the most commonly used datasets in this field. SegMatch was adapted from FixMatch, a simple semi-supervised classification algorithm that combines consistency regularization and pseudo-labelling. During training, SegMatch makes use of a standard labelled image pathway and an unlabelled image pathway, with training batches mixing labelled and unlabelled images. The unlabelled image pathway is composed of two concurrent branches. A weak augmentation branch is used to generate pseudo-labels against which the output of a strong augmentation branch is compared. Considering the limitations of fixed handcrafted strong augmentation techniques, we introduced adversarial augmentations to increase the effectiveness of the strongly augmented images. We also highlighted the importance of considering equivariance and invariance properties in the augmentation functions used for segmentation. Putting our work into its application context, automatic visual understanding is critical for advanced surgical assistance, but labelled data to train supporting algorithms are expensive and difficult to obtain. We believe that simple but high-performing semi-supervised segmentation learning algorithms, such as our proposed SegMatch, will accelerate the deployment of surgical instrument segmentation in the operating theatre.
|
2301.09103 | A Model for Understanding and Reducing Developer Burnout | Job burnout is a type of work-related stress associated with a state of
physical or emotional exhaustion that also involves a sense of reduced
accomplishment and loss of personal identity. Burnt out can affect one's
physical and mental health and has become a leading industry concern and can
result in high workforce turnover. Through an empirical study at Globant, a
large multi-national company, we created a theoretical model to evaluate the
complex interplay among organizational culture, work satisfaction, and team
climate, and how they impact developer burnout. We conducted a survey of
developers in software delivery teams (n=3281) to test our model and analyzed
the data using structural equation modeling, moderation, and multi-group
analysis. Our results show that Organizational Culture, Climate for Learning,
Sense of Belonging, and Inclusiveness are positively associated with Work
Satisfaction, which in turn is associated with Reduced Burnout. Our model,
generated through a large-scale survey, can guide organizations in how to reduce
workforce burnout by creating a climate for learning, inclusiveness in teams,
and a generative organizational culture where new ideas are welcome,
information is actively sought and bad news can be shared without fear. | Bianca Trinkenreich, Klaas-Jan Stol, Igor Steinmacher, Marco Gerosa, Anita Sarma, Marcelo Lara, Michael Feathers, Nicholas Ross, Kevin Bishop | 2023-01-22T11:07:11Z | http://arxiv.org/abs/2301.09103v2 | # A Model for Understanding and Reducing Developer Burnout
###### Abstract
Job burnout is a type of work-related stress associated with a state of physical or emotional exhaustion that also involves a sense of reduced accomplishment and loss of personal identity. Burnt out can affect one's physical and mental health and has become a leading industry concern and can result in high workforce turnover. Through an empirical study at Globular, a large multi-national company, we created a theoretical model to evaluate the complex interplay among organizational culture, work satisfaction, and team climate, and how they impact developer burnout. We conducted a survey of developers in software delivery teams (n=3,281) to test our model and analyzed the data using structural equation modeling, moderation, and multi-group analysis. Our results show that Organizational Culture, Climate for Learning, Sense of Belonging, and Inclusiveness are positively associated with Work Satisfaction, which in turn is associated with Reduced Burnout. Our model generated through a large-scale survey can guide organizations in how to reduce workforce burnout by creating a climate for learning, inclusiveness in teams, and a generative organizational culture where new ideas are welcome, information is actively sought and bad news can be shared without fear.
job burnout, work satisfaction, culture, belonging, inclusiveness
## I Introduction
Developers' well-being and work satisfaction have a strong influence on workforce retention [1, 2]. When organizations invest in the health and safety of their workforce, this investment is linked to organizational commitment among employees [3] and yields returns of 2x the amount invested [4, 5]. On the other hand, employee attrition has significant costs, including disruption of ongoing work in a team as well as the costs involved in recruiting and training a new team member. This is particularly important for the software industry, where 'job-hopping' is quite normal, with many developers changing jobs every few years [6]. Developer retention is, therefore, key to the long-term success of software organizations.
Prior research in other fields suggests that burnout is an important factor in employees' intention to leave their job [7]. Burnout refers to an individual's experiences of exhaustion on physical, emotional, and cognitive levels [8]. Freudenberger was among the first to explore this concept, invoking a dictionary definition as _"to fail, wear out, or become exhausted by making excessive demands on energy, strength, or resources"_[9]. While there has been considerable attention in the software engineering literature for themes such as job satisfaction [10, 11, 12], there is a surprising paucity of research on burnout. Job burnout has become increasingly relevant in today's discussions on retaining talent. The COVID-19 pandemic caused a major shift in working patterns for knowledge workers starting in March 2020. Many developers felt overwhelmed working from home while also needing to take care of family and children. Others missed human contact with colleagues and support structures available in the office. As the pandemic has started to wind down (at the time of writing), scholars have coined the term "Great Resignation" to refer to initial observations that many workers across a variety of domains are voluntarily resigning from their job; one explanation is that the pandemic has triggered people to rethink their goals and ambitions in life [13]. As in other fields [14], burnout is also likely playing a role in IT staff's decisions to leave an organization.
It is important, therefore, to understand what _causes_ burnout and factors that can mitigate it. Following prior studies [15, 16], we look at how organizational culture relates to burnout. In particular, we unpack this relationship by investigating a number of salient themes that have attracted interest in recent years, including employees' sense of belonging and work satisfaction.
Our goal is to identify the organizational and cultural antecedents that can reduce burnout. To achieve our goal, we defined the following research questions:
_RQI. How are organizational culture and burnout related?_
_RQ2. Does the relationship between organizational culture and burnout vary by gender and leadership position?_
We answer these questions within the context of software delivery teams at Globular, a large company employing 25,000 people with a global presence in 36 cities in 17 countries across five continents, which provides services in digital transformation and assists IT organizations with automation. Globular invests in continuous training of its talent pool on technical and social skills and has several initiatives in place to retain talent and avoid attrition. Globular places the well-being of its employees at the forefront, investing in research to identify and proactively implement strategies to reduce employee burnout and attrition.
To answer our research questions, we developed a theoretical model of factors associated with burnout grounded in prior literature and tested it using structural equation modeling, moderation, and multi-group analysis. We tested the model with data collected via an online questionnaire (n=3,281) for current members of software delivery teams who work on different projects at Globular. Fig. 1 summarizes the study design.
Our results show that Organizational Culture, Climate for Learning, Sense of Belonging, and Inclusiveness are positively associated with Work Satisfaction, which in turn is associated with Reduced Burnout. A Climate for Learning improves Work Satisfaction for employees who do not hold leadership positions, compared to those who do. Team inclusiveness is positively associated with work satisfaction and has a bigger impact on women: women are 2x more satisfied and less burnt out when their team is inclusive. National culture also plays a role between work satisfaction and burnout. Living in a masculine (and more competitive) culture further helps reduce burnout when men have work satisfaction; national culture does not play a role for women. An understanding of how these factors interplay can help organizations create a welcoming environment that improves developers' well-being and reduces workforce attrition.
## II Background and Hypothesis Development
We review prior work related to this study and develop a theoretical model that reflects the interests of Globular managers.
### _The Role of Organizational Culture in Sense of Belonging, Climate for Learning, and Inclusiveness_
An organization's culture affects people's daily work activities. Organizational culture has been shown to influence software delivery performance [17, 15], staff well-being, and retention [16], while also enticing software developers to support the company's business [18]. Westrum developed a typology of organizational cultures based on human factors in system safety, particularly in the context of accidents in technological domains, such as aviation and healthcare [19]. The typology defines three types of organizations in terms of information flow and psychological safety. Pathological organizations exhibit low levels of cooperation across groups and a culture of blame. Bureaucratic cultures emphasize rules and positions and compartmentalize responsibilities by departments. Generative organizations are performance-oriented, with good information flow, high levels of cooperation and trust, and bridging between teams. The generative level can be achieved by creating cross-functional teams to improve cooperation, holding blameless postmortems, sharing risks and responsibilities, breaking down silos, and encouraging bridging, experimentation, and novelty. An organizational culture where members of the team cooperate with each other and share responsibilities [19] creates feelings of membership or being part of a team [20]. This organizational culture presents the organization as an extended family, leading employees to develop a strong sense of belonging to the organization [21]. Hence, we posit:
**Hypothesis 1 (H1).** A Generative Organizational Culture has a positive association with a Sense of Belonging.
An organization that exhibits a climate for learning makes resources available for continued education and offers continuous encouragement to teams to learn by providing them space and time to acquire new knowledge and explore ideas [17]. Organizational culture fosters the process of learning [22]. By holding blameless retrospectives and encouraging out-of-the-box thinking, a generative organizational culture [23] creates a positive Climate for Learning [24]: instead of being punished, the team is trained to learn from failures. Thus, our second hypothesis is:
**Hypothesis 2 (H2).** A Generative Organizational Culture exhibits a Climate for Learning.
By welcoming new ideas, a generative culture brings a positive tone, a welcoming space, and a spirit of friendliness that lead to feelings of inclusiveness among the members of a team [20]. When engaged in such an organizational culture, team members perceive an inclusive climate that leads to increased work satisfaction [25]. This leads to our third hypothesis:
**Hypothesis 3 (H3).** A Generative Organizational Culture exhibits Inclusiveness.
Fig. 1: Methodological approach
### _The Role of Sense of Belonging, Climate for Learning and Inclusiveness in Work Satisfaction_
The need to belong is a powerful, fundamental, and pervasive force that has multiple strong effects on emotional patterns and cognitive processes, across all cultures and different types of people [26]. Maslow [27] positioned 'belonging' as a basic human need, and Hagerty et al. [28] posited that a Sense of Belonging represents a unique mental health concept. A sense of belonging is key to work satisfaction [29], and productivity [26], and can help to avoid attrition [30]. References to the importance of a sense of belonging are found throughout the psychological, health care, and education literature. On the other hand, a lack of a sense of belonging is linked to a variety of ill effects on health, adjustment, and well-being [26]. Hence, we propose our fourth hypothesis:
**Hypothesis 4 (H4).** Sense of belonging has a positive association with Work Satisfaction.
Prior research on software delivery teams has shown that learning is associated with Work Satisfaction for software delivery teams, as learning is a valuable investment into the project's future and also into the employee's own career [31]. Moreover, satisfying an employee's need for growth requires that the employee is satisfied with the opportunities to learn and advance at work [21]. Thus we propose:
**Hypothesis 5 (H5).** Climate for Learning has a positive association with Work Satisfaction.
When feeling included by the team, employees believe they are valued for their unique personal characteristics and recognized as important members of the organization [25]. A perception of being socially included improves an individual's well-being [32] and enhances their self-esteem and work satisfaction [33]. So, our sixth hypothesis is:
**Hypothesis 6 (H6).** Inclusiveness has a positive association with Work Satisfaction.
### _The Role of Work Satisfaction in Reducing Burnout_
A decline in work satisfaction could signal burnout [34]. Indeed, previous research showed that Burnout has an inverse relationship with Work Satisfaction [35, 36]. Thus, we propose:
**Hypothesis 7 (H7).** Work Satisfaction has a reverse association with Burnout.
### _The Moderating Role of National Cultural Values on the Association between Work Satisfaction and Burnout_
An individual's response to stress is embedded within cultural beliefs. Cultural values are accredited with a prominent role in various work-related predictor-outcome relationships, such as satisfaction, burnout [37], and turnover [38]. Globular has geographically distributed teams and needs to mitigate the socially-derived challenges that are inherent in cultural differences. There are various classifications attempting to quantify cultural values, such as the work by Hofstede [39], Schwartz [40], and the GLOBE study [41]. In this study, we adopt Hofstede's classification, which was previously used to analyze the culture of software engineers [42] and to investigate burnout [37]. Hofstede [39] defined a 6-D framework with the following six dimensions of culture per country, each assuming values from zero to one hundred [43]:
_Power Distance_ refers to authority and hierarchy and expresses the degree to which less powerful members of a society accept and expect that power is distributed unequally. High power distance means an acceptance of hierarchical order in which people have a determined place. Low power distance means a desire for an egalitarian distribution of power [43, 44]. In high power distance cultures, social hierarchy is established and executed clearly and without reason [45]. Hierarchy in an organization is seen as reflecting inherent inequalities, centralization is popular, subordinates expect to be told what to do, and the ideal boss is a benevolent autocrat [43, 44]. In Hofstede's classification [43], Mexico and India are examples of hierarchical societies with high Power Distance.
_Individualism_ represents the degree to which people in a society are integrated into groups. High individualism indicates people who take care of only themselves and their immediate families and should not rely (too much) on authorities for support. In contrast, low individualism (collectivism) reflects a closer integration into cohesive in-groups in which people protect each other with unquestioning loyalty [43, 44]. Collectivists mostly pursue group goals and improve group-level engagement [46]. In Hofstede's classification [43], the United States is an example of a society with high Individualism.
_Masculinity_ is defined as a preference for achievement, heroism, assertiveness, and material rewards for success. While high masculinity societies are materialist and competitive, a low masculinity culture (femininity) is more cooperative, consensus-oriented, caring for the weak, and prioritizes quality of life [43, 44]. Japan is an example of a high degree of masculinity in Hofstede's classification [43].
_Uncertainty Avoidance_ expresses the degree to which people keep away from ambiguity. Cultures high in uncertainty avoidance tend to focus on rules, structured activities, employee security, and stability. Cultures with low levels of uncertainty avoidance have a more relaxed attitude in which practice is more important than rules [43, 44]. In Hofstede's classification [43], Uruguay is an example of a high score in this dimension, while China is the opposite.
_Long Term Orientation_ measures the degree a culture will keep some links with its own past while dealing with the challenges of the present and the future. A high degree in this index indicates more pragmatic people who have perseverance and patience to prepare for the future. On the contrary, a lower value on this index (indicating short-term orientation) indicates that people have a more narrow-minded focus and sensitivity to immediate outcomes of their actions, tending to value steadfastness, and considering a societal change with suspicion [43, 44]. People in such societies have a strong concern with establishing the absolute truth; they are normative in their thinking, exhibit great respect for traditions, a relatively
small propensity to save for the future, and a focus on achieving quick results. Argentina, for example, has a high score in this dimension in Hofstede's classification [43].
_Indulgence_ is related to the degree of freedom that societal norms give citizens to fulfill human desires. A high degree indicates a society that allows relatively free gratification of basic and natural human desires related to hedonism. Conversely, low levels of indulgence (restraint) indicate a society that controls gratification of needs and regulates it using strict social norms [43, 44].
Studies from two decades ago showed that Long Term Orientation [47] and Power distance [48] could help foster organizational well-being. Subordinates who are surrounded by a high power distance cultural value evaluate abusive supervision as irrelevant to their well-being [48]. Individualism could reduce well-being [49], as unpleasant life events are not met with sufficient social support. In workplace contexts, managers face increasingly complex and subtle differences among employees that reflect cultural influences from the country's culture. Thus, we argue that country culture moderates the link between Work Satisfaction and Burnout. Conceptually, we model this as a single moderator (see Fig. 2), but we propose that each of Hofstede's six dimensions has such a moderating role. Thus, we propose:
**Hypothesis 8 (H8).** (a) Power distance, (b) individualism, (c) masculinity, (d) uncertainty avoidance, (e) long-term orientation, and (f) indulgence moderate the effect of Work Satisfaction on Burnout.
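In analysis terms, each of these moderation hypotheses corresponds to an interaction term between (standardized) Work Satisfaction and a cultural dimension. The following is a minimal illustration with statsmodels, where `survey_scores.csv` and the column names `burnout`, `satisfaction`, and `masculinity` are hypothetical placeholders; our actual analysis uses SmartPLS rather than this regression.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: file and column names are hypothetical, and the
# scores are assumed to be standardized before fitting.
df = pd.read_csv("survey_scores.csv")
model = smf.ols("burnout ~ satisfaction * masculinity", data=df).fit()
# The coefficient of the 'satisfaction:masculinity' term estimates the
# moderation (interaction) effect hypothesized in, e.g., H8.c.
print(model.summary())
```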
Fig. 2 presents the theoretical model.
## III Research Design
Management at Globular was keen to understand the relationships that we proposed in Fig. 2 and obtain an answer to RQ1, which develops an understanding of how organizational culture and burnout in software delivery teams are related. To evaluate the model, we conducted a survey among software delivery team members at Globular and analyzed the data using Partial Least Squares (PLS) Structural Equation Modeling (SEM) [50]. SEM facilitates the simultaneous analysis of relationships among constructs, each measured by one or more indicator variables. Then, to answer RQ2, we used a Multi-Group Analysis (MGA) to establish whether these relationships vary by gender and leadership position.
A recent survey of the use of PLS in software engineering (which also provides an introduction to PLS) revealed that PLS-SEM has been used to study a variety of phenomena in software engineering [51]. For example, it has previously been used to study job satisfaction and turnover intentions of software engineering teams [11] and the success factors of a large-scale Agile Software transformation process [51].
In this section, we discuss the measurement model (how each theoretical construct was measured) and the data collection and analysis.
### _Measurement model_
The theoretical model comprising the hypotheses is based on a number of theoretical concepts; some of the concepts cannot be directly observed (e.g., Climate for Learning, Organizational Culture, and Work Satisfaction)--these concepts are represented as _latent_ variables. A latent variable cannot be directly measured or observed, but instead is measured through a set of indicators or manifest variables. For the latent variables in this study, we adapted existing measurement instruments when possible. We define the constructs below and list the complete questionnaire in the replication package [52].
**Organization Culture** was measured as a latent construct represented by six five-point Likert questions. The questions were adapted from Westrum's organizational culture typology [19], which has previously been used as an instrument to measure organizational culture in software delivery teams [53, 15].
**Belonging** was measured using a five-point Likert question to assess aspects of membership, as being part of the team.
**Climate for Learning** was measured as a latent construct represented by two five-point Likert questions. The questions were inspired by DORA Research Program [17] and designed to evaluate if members of the team perceive that the team considers learning as an investment rather than a cost, and essential for continued progress.
**Inclusiveness** was measured through one question to assess if the team has a safe space for diversity in which everyone is welcomed and treated equally and fairly.
Fig. 2: Theoretical model
**Work Satisfaction** was measured as a latent construct composed of two 10-point Employee Net Promoter Score (eNPS) questions [54, 55] and one five-point Likert question about enjoyment. The eNPS questions were directed towards the team and towards the company, asking respondents about their willingness to recommend the team and the company to friends and colleagues.
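For reference, the standard eNPS aggregation classifies ratings of 9-10 as promoters and 0-6 as detractors; a minimal sketch of that convention follows. Note that in our model the raw item scores, not this aggregate, feed the latent construct.

```python
def enps(ratings):
    # Promoters rate 9-10, detractors 0-6; eNPS = %promoters - %detractors.
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

print(enps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0
```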
**Burnout** was measured as a latent construct composed of two five-point Likert items. While instruments exist to measure burnout, these are very long, which would result in an overly long survey instrument. We took a pragmatic approach and focused on two statements: (1) the extent to which a team has a manageable workload with sustainable levels of stress, and burnout is not perceived as a significant problem or risk; and (2) the extent to which tasks are assigned in a way that allows enough time to achieve commitments, and team members are able to focus on one process at a time. Both were measured as 'reversed' items, i.e., a strong _disagreement_ indicated a higher level of burnout.
**National Culture:** Based on the respondents' country of residence, we used Hofstede's classification of National Culture as moderators (Sec. II-D). This classification's six-dimensional approach to cultural variation includes power distance, individualism/collectivism, masculinity/femininity, uncertainty avoidance, long-term/short-term orientation, and indulgence [39].
### _Data Collection and Analysis_
We administered an online questionnaire using Globular Glow,1 which was answered by members of software delivery teams at Globular.
Footnote 1: [https://os.starmeup.com/en.html](https://os.starmeup.com/en.html)
The survey was sent to respondents by email using a corporate address. The leader of each team encouraged team members to fill out the questionnaire during regular meetings. We received 10,566 responses; however, our analysis techniques require complete responses, and we removed 7,285 responses that contained blanks. Our final sample comprises 3,281 responses. Table I presents a summary of the respondents' characteristics.
We used SmartPLS version 4 for the analyses; SmartPLS is a proprietary package for analyzing PLS models. The analysis comprised three steps, with tests and procedures in each step. The first step was to evaluate the measurement model (Sec. IV), which empirically assesses the relationships between the constructs and indicators. The second step was to evaluate the theoretical model that represents the set of hypotheses (Sec. V-A). The third step was to evaluate observed heterogeneity by multi-group analysis of gender and leadership position (Sec. V-B).
PLS does not make assumptions about the distribution (such as a Normal distribution) of the data; without knowing the distribution of the data, parametric tests (which are based on a distribution with certain parameters) cannot be used to establish the standard error and thus significance. Instead, PLS packages employ a 'bootstrapping' procedure: it draws a large number (e.g., 5,000) of random 'subsamples' of the same size as the original sample (using replacement). The model is estimated for each subsample, generating a sampling distribution, which is used to determine a standard error [56], which can subsequently be used to make statistical inferences. The mean path coefficient determined by bootstrapping can differ slightly from the path coefficient calculated directly from the sample.
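To make the bootstrapping procedure concrete, the sketch below shows a percentile bootstrap for a single standardized slope using NumPy; it illustrates the general idea rather than SmartPLS's internal algorithm, and the function name is ours.

```python
import numpy as np

def bootstrap_slope(x, y, n_boot=5000, seed=0):
    # Percentile bootstrap of a standardized regression slope:
    # resample (x, y) pairs with replacement and refit each time.
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        xb = (x[idx] - x[idx].mean()) / x[idx].std()
        yb = (y[idx] - y[idx].mean()) / y[idx].std()
        coefs[b] = np.polyfit(xb, yb, 1)[0]       # standardized slope
    lo, hi = np.percentile(coefs, [2.5, 97.5])    # 95% confidence interval
    return coefs.mean(), coefs.std(ddof=1), (lo, hi)
```

The standard deviation of the bootstrap distribution serves as the standard error, and the percentile interval provides the confidence bounds reported alongside each path coefficient.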
The online appendix provides results of a variety of additional analyses and validity checks [52].
## IV Measurement Validity and Model Fit
### _Measurement Validity_
As a first step, we conducted two recommended tests to ensure that a dataset is suitable for factor analysis, i.e., that the variables in a dataset can be reduced to a smaller number of factors [57, 58]. The first test is Bartlett's test of sphericity [57] on all constructs. We found a p-value \(<\) .01 (p-values less than .05 indicate that factor analysis may be suitable). Second, we calculated the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. Our result (.92) is well above the recommended threshold of .60 [58].
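Both checks can be reproduced with the `factor_analyzer` Python package; a minimal sketch follows, assuming the indicator responses live in a hypothetical `likert_items.csv` file (one column per indicator).

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

items = pd.read_csv("likert_items.csv")               # hypothetical indicator data
chi2, p_value = calculate_bartlett_sphericity(items)  # want p < .05
kmo_per_item, kmo_total = calculate_kmo(items)        # want overall KMO > .60
print(f"Bartlett: chi2={chi2:.1f}, p={p_value:.4f}; KMO={kmo_total:.2f}")
```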
Afterward, we conducted several tests to validate the measurement of our theoretical concepts, including convergent validity, internal consistency reliability, discriminant validity, and collinearity, as discussed next.
#### Iv-A1 Convergent Validity
First, we assess the convergent validity of the measurement instrument, i.e., we assess whether the questions (indicators) that represent each latent variable are understood by the respondents in the same way as they were intended by the designers of the questions [59]. This assessment relates to the degree to which a measure correlates positively with alternative measures of the same construct. Our model contains four latent variables (Climate for Learning, Organizational Culture, Work Satisfaction, and Burnout). Changes in the theoretical, latent construct should be 'reflected' in changes in the indicator variables [56]; for example, if Work Satisfaction increases, a concept we cannot measure or observe _directly_, we expect to see this change reflected in the values of its indicators that we _can_ measure or observe directly.
We used two metrics to assess convergent validity: the Average Variance Extracted (AVE) and the loading of an indicator onto its construct (the outer loading). The AVE is the proportion of variance that is shared across indicators. The AVE should be at least 50%, indicating that the construct explains most of the variation in its indicators [56]. All AVE values for the latent constructs in our model are above this threshold of 50% (see appendix).
A latent variable is measured by two or more indicators; each indicator is expected to have a loading of at least .70, because the square of the loading indicates the variance in the indicator that is explained by the construct, which should be at least 50% (and \(.70^{2}\approx.50\)) [56]. All loadings of the indicators of all four latent constructs exceeded this threshold, as shown in Figure 3, which we considered acceptable.
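Both quantities follow directly from the standardized loadings; a small sketch with illustrative loading values (not our actual estimates):

```python
import numpy as np

loadings = np.array([0.78, 0.81, 0.74])   # illustrative outer loadings

ave = np.mean(loadings ** 2)              # average variance extracted
print(f"AVE = {ave:.2f}")                 # should exceed .50
print("variance explained per indicator:", np.round(loadings ** 2, 2))
```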
#### Iv-A2 Internal Consistency Reliability
Second, we verified how well the different indicators are consistent with one another and able to reliably and consistently measure the constructs. A high degree of consistency means that the indicators refer to the same construct. There are several tests to measure internal consistency reliability. We performed both the Cronbach's \(\alpha\) and Composite Reliability tests; Cronbach's \(\alpha\) frequently shows lower values, whereas the Composite Reliability (CR) is a more liberal test, which sometimes overestimates the values [56]. A desirable range of values for both Cronbach's \(\alpha\) and CR is between .7 and .9 [56]. Values below .6 suggest a lack of internal consistency reliability, whereas values over .95 suggest that indicators are too similar and thus are not desirable. All Cronbach's \(\alpha\) and CR values fell between .7 and .9 (see appendix).
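For reference, both reliability measures are simple to compute from the raw item scores and the standardized loadings; a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_indicators) matrix of scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def composite_reliability(loadings):
    # loadings: standardized outer loadings of one construct.
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1.0 - loadings ** 2).sum())
```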
#### Iv-A3 Discriminant Validity
Third, we verified whether each construct represents characteristics that are not measured by other constructs, i.e., we assessed the discriminant validity of the constructs. A primary means to assess discriminant validity is to investigate the Heterotrait-Monotrait (HTMT) ratio of correlations [60]. Discriminant validity could be considered problematic if the HTMT ratio exceeds .9 [60]; some scholars recommend a more conservative cut-off of .85 [56]. The HTMT ratios between the four latent constructs ranged between .65 and .71 (see appendix).
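A sketch of the HTMT computation for a pair of constructs, following Henseler et al. [60], is given below; each indicator block needs at least two indicators, and the function name is ours.

```python
import numpy as np

def htmt(X, Y):
    # X: (n, p) indicators of construct 1; Y: (n, q) indicators of construct 2.
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    p, q = X.shape[1], Y.shape[1]
    R = np.abs(np.corrcoef(np.hstack([X, Y]), rowvar=False))
    hetero = R[:p, p:].mean()                           # between-construct correlations
    mono_x = R[:p, :p][np.triu_indices(p, k=1)].mean()  # within construct 1
    mono_y = R[p:, p:][np.triu_indices(q, k=1)].mean()  # within construct 2
    return hetero / np.sqrt(mono_x * mono_y)
```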
#### Iv-A4 Assessing Collinearity
To ensure that the variables are independent, we calculated their collinearity by means of the Variance Inflation Factor (VIF). In our model, all VIF values are below 1.7, well below the cut-off of 5 [56] (see appendix).
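This check can be reproduced with statsmodels; a sketch, assuming a hypothetical `construct_scores.csv` file holding one column of scores per construct:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

scores = pd.read_csv("construct_scores.csv")   # hypothetical construct scores
X = add_constant(scores)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)   # values below 5 indicate acceptable collinearity
```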
### _Model Fit_
The overall model fit was assessed using the standardized root mean square residual (SRMR) of the composite factor model, which should be lower than .08 [61]. The values obtained for the complete model (.060), the men's model (.061), the women's model (.061), the leaders' model (.070), and the non-leaders' model (.060) thus indicate a good fit.
## V Results
To answer RQ1, we evaluated the hypotheses in the structural model (Sec. V-A) and, to answer RQ2, we performed a Multi-Group Analysis (Sec. V-B).
### _RQ1. How are organizational culture and burnout in software delivery teams related?_
Table II shows the results for our hypotheses, including the mean of the bootstrap distribution (_B_), the standard deviation (_SD_), the 95% confidence interval, and the p-values.
Fig. 3: Path coefficients (p \(<\) 0.05 indicated by a full line). Latent constructs are represented as circles.
Path coefficients (_B_) are interpreted as follows, using H1 as an example: _B_ = .48 means that a change of one standard deviation in Organizational Culture triggers a direct change in Belonging of .48 \(\times\) Belonging's standard deviation.
Results revealed that a Generative Organizational Culture has a positive, significant association with Sense of Belonging (H1, _B_ = .48), Climate for Learning (H2, _B_ = .54), and Inclusiveness (H3, _B_ = .45). We also found that a Sense of Belonging to the team (H4, _B_ = .24), Climate for Learning (H5, _B_ = .22), and Inclusiveness (H6, _B_ = .12) have a positive and significant association with Work Satisfaction, which includes feelings of joy and enthusiasm to recommend the team and company as a place to work to friends and former colleagues. Finally, Work Satisfaction has an inverse (negative) and significant association with Burnout (H7, _B_ = -0.29). Hence, hypotheses H1 to H7 were supported with p-values \(<\) 0.001.
We also investigated whether the association between Work Satisfaction and Burnout changes when considering respondents' national culture, as defined by Hofstede's six dimensions (see Sec. II-D) (H8). This association is not significantly affected (neither increased nor reduced) by any of these six cultural dimensions. Power Distance (H8.a) and Individualism (H8.b) have a positive but non-significant moderation effect on the association between Work Satisfaction and Burnout. Masculinity (H8.c), Uncertainty Avoidance (H8.d), Long Term Orientation (H8.e), and Indulgence (H8.f) have a negative but non-significant moderation effect on the association between Work Satisfaction and Burnout. Hence, H8.(a-f) are not supported for the complete dataset.
We assessed the relationship between constructs and the predictive capabilities of the model. The _R\({}^{2}\)_ ranges from 0 to 1, with higher values indicating a greater explanatory power. _R\({}^{2}\)_ values of .75, .50, and .25 are considered substantial, moderate, and weak, respectively [56]. However, such thresholds are rather arbitrary and generic, and do not consider the specific context or research area; in some fields, an _R\({}^{2}\)_ value as low as .10 is considered satisfactory [62]. The _R\({}^{2}\)_ values of the five endogenous variables in our model (Belonging, Climate for Learning, Inclusiveness, Work Satisfaction, and Burnout), shown in Table III, are acceptable, ranging between .20 and .39 (column 'All'; further variation in range emerged after the multi-group analysis, discussed later).
We also assessed the predictive relevance of the model, using the Stone-Geisser _Q\({}^{2}\)_ measure. For this, we used the PLSPredict algorithm that is available in the SmartPLS v. 4 package [63, 64]. Values larger than 0 indicate the construct has predictive relevance, while negative values (smaller than zero) indicate that the model does not perform better than the simple average of the endogenous variable would do. The values were all positive, indicating the construct has predictive relevance [56].
### _RQ2. Does the relationship between organizational culture and burnout vary by gender and leadership position?_
RQ2 seeks to establish whether the theorized relationship between organizational culture and burnout (as investigated for RQ1), varies when we consider gender and leadership position. Are some of the hypothesized links stronger for men than for women, or vice versa? Or are these associations different for people in leadership positions vs. people not in leadership positions? To answer this, we used multi-group analyses splitting by gender and leadership position and exploring differences that can be traced back to observable characteristics and may not be evident when examined as a whole. The multi-group analysis involves running the PLS path model multiple times for different groups, once for each group; groups are captured through categorical variables (in this case, binary variables). Hair et al. [61] proposed three steps to conduct such an analysis: (1) group creation; (2) invariance test; and (3) result analysis.
#### Iv-B1 Step 1. Groups Creation
We grouped our participants to observe heterogeneity according to two variables: gender (male = 0 and female = 1) and leadership (leadership role = 1 and non-leadership role = 0). We used pre-existing demographic data the company maintains for its reporting requirements under government laws to split the participants into different groups.
#### Iv-B2 Step 2. Evaluation of measurement invariance of composite models (MICOM)
Measurement invariance is a mechanism to assess whether or not the loadings of the items that represent the latent variables differ significantly across different groups. In other words, we want to assess whether the differences can be attributed to the theoretical constructs and not to how we measured those constructs [61]. Comparing group-specific model relationships for significant differences using a multi-group analysis requires establishing configural and compositional invariance [65, 61]. Configural invariance does not include a test and is a qualitative assessment of making sure that all of the composites are equally defined for all of the
groups such as equivalent indicators per measurement model, equivalent treatment of the data, and equivalent algorithm settings or optimization criteria. The configural invariance is established in our model. Following that, compositional invariance exists when the composite scores are the same across both groups, and is statistically tested to assess whether the composite scores differ significantly across the groups. For this purpose, the MICOM procedure examines the correlation between the composite scores of both groups and requires that the correlation equals 1. We ran the permutation test in SmartPLS and verified that compositional invariance is established for all latent variables in the PLS path model. We established partial measurement invariance and thus multi-group analysis is suitable [66].
#### V-B3 Step 3. Groups Comparison and Analysis
Path coefficients generated from different samples are usually numerically different, but the question is whether the differences are statistically significant. We analyzed the differences between the coefficients' paths for the groups. If they are significant, they can be interpreted as having moderating effects.
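To illustrate the logic of testing whether a between-group difference in an estimate is significant, here is a minimal permutation-test sketch for a difference in correlations; SmartPLS's MGA applies an analogous procedure to path coefficients, so this is an illustration rather than its actual algorithm.

```python
import numpy as np

def permutation_group_diff(x1, y1, x2, y2, n_perm=5000, seed=0):
    # Two-sided permutation test: shuffle group membership of (x, y) pairs
    # and compare the observed difference against the permutation distribution.
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(x1, y1)[0, 1] - np.corrcoef(x2, y2)[0, 1]
    x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
    n1, count = len(x1), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(x))
        d = (np.corrcoef(x[idx[:n1]], y[idx[:n1]])[0, 1]
             - np.corrcoef(x[idx[n1:]], y[idx[n1:]])[0, 1])
        count += abs(d) >= abs(obs)
    return obs, count / n_perm   # observed difference and p-value
```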
**Gender:** As Table III shows, Generative Organizational Culture has a strong and significant relationship with Sense of Belonging and Climate for Learning for both men and women. However, although Organizational Culture is also associated with Inclusiveness for both men and women, the association is stronger for women (\(\beta\) = .53) than for men (\(\beta\) = .41). Sense of Belonging and Climate for Learning have a significant and similar relationship with Work Satisfaction for both genders. Although both genders are satisfied by Inclusiveness, women (\(\beta\) = .20) are twice as satisfied when the team is Inclusive compared to men (\(\beta\) = .10). Lastly, the link between Work Satisfaction and Burnout is the same for men and women. However, men (but not women) who live in a competitive national culture, where people want to be the best (i.e., a high degree of masculinity), experience even less burnout.
**Leadership Position:** As Table III shows, Climate for Learning has a strong and significant relationship with Work Satisfaction for those who are not in leadership positions. However, Climate for Learning is not associated with Work Satisfaction for leaders. Lastly, leaders (\(\beta\) = .41) are close to two times more satisfied by a Generative Organizational Culture compared to those who are not in leadership positions (\(\beta\) = .24).
## VI Discussion
Human factors are receiving increasingly more attention in software engineering research and industry. Themes such as work satisfaction have been studied extensively and have been linked to employees' intention to stay with (or leave, when it is lacking) an organization. This has direct effects on organizations' capacity to deliver services and software products.
The past several years have seen dramatic changes in the way people work, driven in large part by the Covid-19 pandemic and the resulting lockdowns, forcing people to work from home. There are numerous studies on how this has affected people in negative ways (e.g., [67]). One important theme in this context is Burnout, which is the result of continuous exposure to unhealthy levels of stress. However, there is a paucity of research in software engineering on this topic. Globular, a major provider of software services with operations across five continents, is reliant on a healthy workforce to conduct its business. Globular is interested in developing better insights into the various factors that might play a role in employee burnout. Thus, in this paper, we report on a large-scale survey at Globular with a primary focus on Burnout and its antecedents.
In particular, we sought to understand the role of organizational culture in relation to Burnout. Organizational culture has been shown to be an important factor in the performance of employees and teams, including in software delivery teams [53]. Our theoretical argument in this paper is that all employees within an organization are exposed to the same organizational culture; while they will experience this differently, we believe other factors play a role in why people might experience burnout. In particular, we looked at three factors: people's Sense of Belonging, whether or not an organization advocates a Climate for Learning, and people experiencing Inclusiveness. Further, rather than having a direct link to Burnout, we believe that these factors affect employees' Work Satisfaction, and this is a major predictor for people's experienced Burnout.
We used questions based on an Organizational Culture typology that is focused on how organizations process information and behave when things are not going well, bringing together not only culture, but also management style. We analyzed Organizational Culture as a latent variable that included attributes about sharing bad news with no fear, considering failures as learning opportunities, encouraging cross-functional collaboration, welcoming new ideas, sharing responsibilities, and actively seeking information when needing it.
We also considered the moderating role of national culture, considering the six dimensions identified by Hofstede. Finally, we conducted our analysis for the whole sample and conducted two different multi-group analyses, distinguishing respondents by gender, and whether or not they are in a leadership role.
Before discussing the implications of our results, we first consider a number of limitations of this study that should be kept in mind while interpreting the findings.
### _Threats to Validity_
#### Vi-A1 Construct Validity
We adopted and tailored existing measurement instruments when possible, and developed measurement instruments for some constructs based on prior literature. Our analysis of the measurement model confirmed that our constructs were internally consistent, and scored well on convergent and discriminant validity tests. We defined a new construct called Work Satisfaction that included hedonism and satisfaction towards the team and the company. We acknowledge the fact that Burnout is a construct that can be measured by more complex instruments [68]. However, many existing instruments contain a large number of items (questions), which would be impractical in organizational
settings because this would negatively affect the response rate. In this study we have used respondents' country of residence as a proxy for Power Distance as a dimension of culture as defined by Hofstede [43]. While also used in other studies [69], we acknowledge it is an approximation and not a perfect measure. One potential issue is that we do not know how _long_ respondents have lived in their current country of residence. Another potential issue is that contributors' original culture that they grew up with may differ from the culture they now live in. This is why we report the metric as being surrounded by a specific culture instead of having a specific culture. Measuring culture in a more precise way is an important avenue for future work in general.
#### Iv-A2 Internal Validity
We propose a series of hypotheses as associations between different constructs rather than causal relationships, as the present study is a sample study, rather than an experimental study [70]. Our overall argument is that employees who perceive their organization to have what Westrum [19] labeled a Generative Organizational Culture are satisfied and tend to experience lower levels of burnout; this line of reasoning is easier to theoretically justify than the suggestion that Burnout leads to a negative organizational culture (what Westrum referred to as a Pathological organization) [19]. Further, it is likely that other factors are at play. The coefficient of determination (\(R^{2}\)) of the endogenous variables ranged between .2 and .4, which in the software engineering context can be considered reasonable. Thus, these results represent a useful starting point for future studies.
Respondents are current employees and we did not collect data from past employees. As the company does not offer the same questionnaire to people who are leaving, we would not have the same data to compare the perspectives of current and past employees. The relationship between burnout and intentions to leave, and also the actual act of leaving are of interest for future work.
#### Iv-A3 External Validity
This survey was conducted within Globular. The respondent distribution is aligned with the overall distribution of the company, and the results can therefore be generalized across the company. Globular is a multi-national company with more than 25,000 employees working in different national cultures. The responses were sufficiently consistent to find full or partial empirical support for the hypotheses. Additional studies that replicate our findings in other companies can further bolster our results.
### _Implications of results_
Our analysis highlights several key findings and implications. In the following, we discuss the supported hypotheses.
**H1. Generative Organizational Culture \(\rightarrow\) Belonging:** Our results align with previous research that showed a sense of belonging emerges from a people-centered culture [71] and that an openness to innovation and shared responsibility helps to develop organizational belonging [72]. When working in a team that welcomes new ideas, fosters collaboration, and shares responsibilities [19], people understand how their work contributes toward a common goal leading to affective commitment to the team. Our results showed there was no significant difference between the groups for H1, which proposed
a positive association between a Generative Organizational Culture and a Sense of Belonging, as shown in Table III.
**H2. Generative Organizational Culture \(\rightarrow\) Climate for Learning:** In a generative organizational culture, people do not fear failure because they are trained to learn from mistakes [19]; learning is a key point, considered essential and an investment. An inspiring culture that encourages and enables employees to bring their best efforts and ideas to the team promotes a Climate for Learning [71]. We found that a generative organizational culture is positively associated with Climate for Learning and that there is no significant difference between different groups (either in terms of gender or whether or not respondents fulfilled a leadership role) for this association.
**H3. Generative Organizational Culture \(\rightarrow\) Inclusiveness:** Although the association between Organizational Culture and Inclusiveness is not significantly different according to leadership position, it is stronger for women (\(B\) = .53) than for men (\(B\) = .41). This difference opens a path to discuss what brings Inclusiveness for minority and majority groups. Much research has addressed providing a safe place for diversity so that everyone feels equally welcomed. However, our results shed light on the possibility that different factors create the feeling of being included for minority and majority groups.
Hypotheses 1-3 show the benefits of a generative organizational culture where employees have the psychological safety to talk about failures and present new ideas. Therefore, companies should reflect on their team and leadership culture to promote the ideas of generative organizational culture.
Changing the way people behave and work changes culture. Teams can identify helpful practices to create a generative culture that fosters information flow and trust by examining the aspects of Westrum's model of organizational culture [19], focusing on those behaviors seen in the generative culture:
* Cooperation and bridging. Break down silos and create cross-functional teams that include representatives from each functional area of the software delivery process, so everyone shares the responsibility for the software delivery life-cycle. Encourage informal meetings between people who do not understand (or are frustrated by) each other's work. Ask them to understand why the other does what they do, and invite people to come up with new ideas together.
* Train the messengers and let failure lead to inquiry. People must be able to take risks and feel safe to fail, and also to bring bad news without fear, in order to make improvements. Hold blameless postmortems, so teams surface problems as early as possible and solve them more effectively. Instead of blaming, ask questions about the root cause of failures, in order to improve technical systems, processes, and the organizational culture.
* Share risks and responsibilities. Quality, availability, reliability, and security should be everyone's job. One practical example can be ensuring that developers share responsibility for maintaining their code in production.
* Encourage novelty. Encouraging employees to explore new ideas can lead to great outcomes. One example of this practice can be giving people time each week for experimentation, hosting internal hackathons and conferences to share ideas and collaborate. When releasing employees from habitual pathways and repetitive tasks, they can be creative, bringing new ideas for processes and products.
**H4. Sense of Belonging \(\rightarrow\) Work Satisfaction:** A Sense of Belonging is a human need [73, 74, 26]. Although it has several different antecedents, employees from different groups showed higher levels of Work Satisfaction when they feel they are part of a team. Our group-based analysis showed similar path coefficients for the association between Sense of Belonging and Work Satisfaction, as we present in Table III. This indicates that strategies that focus on making employees feel part of the team or the company pay off, because this positively influences satisfaction, regardless of the group to which the developers belong. Therefore, companies could invest in cohort building, creating opportunities where developers can socialize and develop an emotional connection. Additionally, belonging can be fostered via a team culture where individuals' contributions are appreciated and they can see how their work fits the team's overall goals.
**H5. Climate for Learning \(\rightarrow\) Work Satisfaction:** The association between Climate for Learning and Work Satisfaction is significantly different between leaders and non-leaders. While those not in leadership positions are satisfied by having the Climate for Learning, the association was not significant for leaders. There is also no difference when we group developers by their gender. Based on these findings, in order to keep employees satisfied in their work, we recommend that companies offer professional development opportunities, where employees can learn new technology and management skills needed to advance their careers.
**H6. Inclusiveness \(\rightarrow\) Work Satisfaction:** The association between Inclusiveness and Work Satisfaction holds positively across the whole sample. Additionally, the association showed different strengths when we compared genders. Women are twice as satisfied (\(B\) = .20) when having a welcoming and safe space for diversity than men (\(B\) = .10). Women represent one of the gender minorities and face prejudice and challenges in the tech workplace. A welcoming and safe space allows them to thrive. Companies should evaluate the gender diversity of their tech workforce, expending effort in diversity recruitment and hiring, as well as in training programs that instill diversity and inclusion principles in their teams.
**H7. Work Satisfaction \(\rightarrow\) Burnout:** Work Satisfaction has a negative association with Burnout, with no significant difference across groups. Burnout is the exhaustion caused by excessive and prolonged workplace stress, which can happen in software delivery teams due to the pressure of deadlines and high performance. However, we showed that Work Satisfaction is an alleviating factor that reduces Burnout, which is aligned with previous research on software delivery teams [34]. So, although Work Satisfaction has different antecedents across groups, once achieved it reduces Burnout regardless of gender and leadership position.
**H8.c. Masculinity \(\times\) Work Satisfaction \(\rightarrow\) Burnout:** In a competitive and materialist culture (high levels of Masculinity), work satisfaction had a strong effect in reducing burnout (this moderation had no effect for women). According to Hofstede's framework of national cultures [43], in Masculine cultures, men "should be" and women "may be" ambitious, work prevails over family, there is an admiration for the strong, fathers deal with facts and mothers with feelings, and girls cry and boys do not. In feminine cultures, fulfilling multiple social roles without social judgment is encouraged, so both men and women receive cultural support for prioritizing family time over time spent on the job [43]. Finding differences between groups is aligned with previous research showing that people's well-being is achieved according to their current specific needs [21]. The result can be interpreted as masculine cultures driving men to strive for achievement and success: when they perceive that they are successful (satisfied with their work), this reduces the perception of burnout. In feminine societies, men might feel uncomfortable and burn out more. Another possible interpretation is that men can take other measures to avoid burnout in masculine cultures, because the visible expressions of stress behavior may threaten the masculine value of heroism [75]. Therefore, organizations should consider the national culture where they operate and structure their incentives and career advancement opportunities accordingly.
## VII Conclusion
Attention to human factors is critical to software development employees' ability to perform. Globular is a large software services organization whose management sought to understand the concept of Burnout among their workforce. In this paper, we report on a theoretical model that seeks to explain how organizational culture and burnout in software delivery teams are associated. A large-scale survey with over 3,000 respondents provided sufficient data to test this model, and to distinguish between different subgroups (i.e., men/women and people in leadership/non-leadership roles). We argue that, given the international nature of this study that also considers the role of national culture (according to the Hofstede 6-D framework), albeit at one company, these findings are of interest to other large multinational organizations. Additionally, there are clear extension points of our study, as well as opportunities to replicate this study, which we think can contribute to a body of knowledge that considers critical human factors such as Burnout.
## Acknowledgments
We thank all the survey participants for their time and insights. This work is partially supported by the National Science Foundation under Grant Numbers 1815486, 1815503, 1900903, and 1901031, and Science Foundation Ireland grants no. 13/RC/2094-P2 to Lero, the SFI Research Centre for Software, and no. 15/SIRG/3293.
|
2310.10313 | On relative constructible sheaves and integral transforms | The aim of this note is threefold. The first is to obtain a simple
characterization of relative constructible sheaves when the parameter space is
projective. The second is to study the relative Fourier-Mukai for relative
constructible sheaves and for relative regular holonomic $\mathcal D$-modules
and prove they induce relative equivalences of categories. The third is to
introduce and study the notions of relative constructible functions and
relative Euler-Poincar\'e index. We prove that the relative Euler-Poincar\'e
index provides an isomorphism between the Grothendieck group of the derived
category of complexes with bounded relative $\mathbb R$-constructible
cohomology and the ring of relative constructible functions. | Luisa Fiorot, Teresa Monteiro Fernandes | 2023-10-16T11:47:15Z | http://arxiv.org/abs/2310.10313v4 | # A note on the ring of relative constructible functions
###### Abstract.
The aim of this note is twofold. The first is to obtain a characterization of relative constructible sheaves under a natural condition on the parameter space. The second is to introduce and study the notions of relative constructible functions and relative Euler-Poincare index. We prove that the relative Euler-Poincare index provides an isomorphism between the Grothendieck group of the derived category of complexes with bounded relative \(\mathbb{R}\)-constructible cohomology and the ring of relative constructible functions.
Key words and phrases:Relative constructible sheaf, constructible functions, Grothendieck group 2020 Mathematics Subject Classification: 32S60, 18F30, 14C35 The research of L. Fiorot was supported by project PRIN 2022. The research of T. Monteiro Fernandes was supported by Fundação para a Ciência e a Tecnologia, under the project: UIDB/04561/2020.
## Introduction
Let \(X\) be a real analytic manifold, let \(S\) be a complex manifold, and let \(F\) be an object of \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\), where \(p_{X}\) denotes the projection \(X\times S\to S\).
Then the natural generalization of the local Euler-Poincare index \(\chi\) is the relative Euler-Poincare index \(\chi^{S}\) (Definition 2.6) obtained by assigning to \(F\) the class of \(F|_{\{x\}\times S}\) in \(\mathsf{K}_{0}(S)\).
Note that, in the case of \(X=\{pt\}\), \(\chi^{S}\) reduces to the identity. The proof of Theorem 2.10 is stepwise similar to that of [7, Th. 9.7.1].
_Organization of the paper._ In Section 1 we present a list of results on relative \(\mathbb{R}\)-constructible sheaves which will be used in the sequel. Lemma 1.4 gives a precision to the characterization of \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\) in [3, Cor. A.4], closing with the statements and proofs of Theorem 1.5 and Proposition 1.6. In Section 2 we introduce the notions of relative constructible functions and of relative Euler-Poincare index culminating with the statement and proof of Theorem 2.10. Section 3 is devoted to study of the operations on \(S\)-constructible functions, essentially checking that the well-known properties in the absolute case, which have already found a large range of applications (cf. for instance [12], [8]), still hold true in the relative case with a possibly wider range.
We warmly thank Pierre Schapira for corrections and suggestions on a previous version.
## 1. Relative \(\mathbb{R}\)-constructible sheaves
Let \(X\) be a real analytic manifold and let \(S\) be a complex manifold. Following [9, §2], a sheaf \(F\) of \(p_{X}^{-1}\mathcal{O}_{S}\)-modules is called \(S\)-locally constant coherent if, for each point \((x_{0},s_{0})\in X\times S\), there exists a neighborhood \(U=V_{x_{0}}\times T_{s_{0}}\) and a coherent sheaf \(G^{(x_{0},s_{0})}\) of \(\mathcal{O}_{T_{s_{0}}}\)-modules such that \(F|_{U}\cong p_{V_{x_{0}}}^{-1}(G^{(x_{0},s_{0})})\). As explained in [9, §2.2], if we choose a sufficiently small contractible neighborhood \(V_{x_{0}}\) of \(x_{0}\) then \(U=V_{x_{0}}\times S\) and \(G^{(x_{0},s_{0})}=F|_{\{x_{0}\}\times S}\) is a coherent \(\mathcal{O}_{S}\)-module.
We define \(\mathsf{D}^{\mathrm{b}}_{\mathrm{lc\ coh}}(p_{X}^{-1}\mathcal{O}_{S})\) to be the full subcategory of \(\mathsf{D}^{\mathrm{b}}(p_{X}^{-1}\mathcal{O}_{S})\) whose complexes have \(S\)-locally constant coherent cohomologies (hence \(F|_{\{x_{0}\}\times S}\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\)).
We denote by \(\mathrm{Mod}_{\text{\tiny w-$\mathbb{R}$-$c}}(p_{X}^{-1}\mathcal{O}_{S})\) (resp. \(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\)) the full subcategory of \(\mathrm{Mod}(p_{X}^{-1}\mathcal{O}_{S})\) whose objects \(F\) are such that, for a suitable \(\mu\)-stratification \((X_{\alpha})\) of \(X\), \(F|_{X_{\alpha}\times S}\) is \(S\)-locally constant (resp. \(S\)-locally constant coherent). We call these objects weakly \(S\)-\(\mathbb{R}\)-constructible sheaves (resp. \(S\)-\(\mathbb{R}\)-constructible sheaves). Let us denote by \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\) (resp. \(\mathsf{D}^{\mathrm{b}}_{\text{\tiny w-$\mathbb{R}$-$c}}(p_{X}^{-1}\mathcal{O}_ {S})\)) the full subcategory of \(\mathsf{D}^{\mathrm{b}}(p_{X}^{-1}\mathcal{O}_{S})\) whose objects \(F\) have \(S\)-\(\mathbb{R}\)-constructible cohomologies (resp. weakly \(S\)-\(\mathbb{R}\)-constructible cohomologies). Let us recall some results on (weakly) \(S\)-\(\mathbb{R}\)-constructible sheaves:
* According to [3, Cor. A.4] we have the following equivalence of triangulated categories \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\text{\tiny w-$\mathbb{R}$-$c}}(p_{X}^{- 1}\mathcal{O}_{S}))\simeq\mathsf{D}^{\mathrm{b}}_{\text{\tiny w-$\mathbb{R}$-$c }}(p_{X}^{-1}\mathcal{O}_{S})\) and \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(\mathrm{Mod}_{\text{ \tiny w-$\mathbb{R}$-$c}}(p_{X}^{-1}\mathcal{O}_{S}))\simeq\mathsf{D}^{\mathrm{ b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\). In order to keep the notation clean in the sequel, we will write \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\) instead of \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(\mathrm{Mod}_{\text{ \tiny w-$\mathbb{R}$-$c}}(p_{X}^{-1}\mathcal{O}_{S}))\) i.e. the complexes in \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\) are always assumed to have weakly \(S\)-\(\mathbb{R}\)-constructible entries.
* Following [9, Prop. 2.17], the contravariant functor \(\boldsymbol{D}_{X}(\bullet)=R\mathcal{H}om_{p_{X}^{-1}\mathcal{O}_{S}}(\bullet,p_{X}^{-1}\mathcal{O}_{S})[d_{X}]\) is a duality in \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p_{X}^{-1}\mathcal{O}_{S})\).
* Following [9, Prop. 2.23], let \(F\) in \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p^{-1}_{Y}\mathcal{O}_{S})\) and let \(f:Y\to X\) be a morphism of real analytic manifolds which is proper on \(\operatorname{Supp}_{Y}(F)\). Then \(Rf_{*}F\) belongs to \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p^{-1}_{X}\mathcal{O}_{S})\), where we keep the same notation \(f\) for the induced morphism \(f\times\operatorname{Id}:Y\times S\to X\times S\).
For the sake of completeness we include two functorial properties which were tacitly assumed but not written in [9]. Let \(\pi:T\to S\) be a morphism of complex manifolds and let us keep the same notation \(\pi\) for the induced morphism \(\operatorname{Id}\times\pi:X\times T\to X\times S\).
**Proposition 1.1**.: _For each \(F\in\mathsf{D}^{\text{\rm{\footnotesize$\mathcal{b}$}}}_{\mathbb{R}\text{-c}}(p ^{-1}_{X}\mathcal{O}_{S})\), \(L\pi^{*}_{X}F:=p^{-1}_{X}\mathcal{O}_{T}\otimes^{L}_{p^{-1}_{X}\pi^{-1} \mathcal{O}_{S}}\pi^{-1}F\) is a complex in \(\mathsf{D}^{\text{\rm{\footnotesize$\mathcal{b}$}}}_{\mathbb{R}\text{-c}}(p^{ -1}_{X}\mathcal{O}_{T})\)._
Proof.: According to the triangulation Theorem ([7, Th. 8.2.5]) and [7, Th. 8.3.20], we can choose a \(\mu\)-stratification \(X=\bigsqcup_{\alpha}X_{\alpha}\) such that \(F|_{X_{\alpha}\times S}\) is isomorphic to \(p^{-1}_{X_{\alpha}}G_{\alpha}\) for some \(G_{\alpha}\in\mathsf{D}^{\text{\rm{\footnotesize$\mathcal{b}$}}}_{\mathrm{ coh}}(\mathcal{O}_{S})\). Thus, \(L\pi^{*}F|_{X_{\alpha}\times T}\simeq p^{-1}_{X_{\alpha}}L\pi^{*}G_{\alpha}\), where \(L\pi^{*}G_{\alpha}\in\mathsf{D}^{\text{\rm{\footnotesize$\mathcal{b}$}}}_{ \mathrm{coh}}(\mathcal{O}_{T})\). q.e.d.
**Proposition 1.2**.: _Let us be given \(G\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p^{-1}_{X}\mathcal{O}_{T})\) and let us assume that \(\pi\) is proper on \(\operatorname{Supp}_{T}G:=\overline{p(\operatorname{Supp}G)}\), where \(p\) denotes the projection \(X\times T\to T\). Then \(R\pi_{*}G\) belongs to \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p^{-1}_{X}\mathcal{O}_{S})\)._

From now on we shall work under the following assumption on the parameter manifold \(S\), which holds in particular when \(S\) is projective:

**Assumption 1.3**.: _Any \(G\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) is quasi-isomorphic to a bounded complex of locally free \(\mathcal{O}_{S}\)-modules of finite rank, functorially in \(G\)._

**Lemma 1.4**.: _Let \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p_{X}^{-1}\mathcal{O}_{S})\). Then there exists a quasi-isomorphism \(H^{\bullet}\to F\), where \(H^{\bullet}\) is a bounded complex of \(p_{X}^{-1}\mathcal{O}_{S}\)-modules with entries of the form_

\[H^{k}=\bigoplus_{i_{\alpha}\in I_{\alpha_{k}}}\mathbb{C}_{U_{i_{\alpha}}}\boxtimes G_{i_{\alpha}},\]
_where the families \((U_{i_{\alpha}})_{i_{\alpha}\in I_{\alpha_{k}}}\) are locally finite families of relatively compact contractible subanalytic open sets in \(X\) and each \(G_{i_{\alpha}}\), for \(i_{\alpha}\in I_{\alpha_{k}}\), is a locally free \(\mathcal{O}_{S}\)-module of finite rank._
Proof.: (1) We will use the notations of [7, §8.1]. Let us consider a subanalytic stratification \((X_{\alpha})_{\alpha\in A}\) of \(X\) such that \(F|_{X_{\alpha}\times S}\) is locally isomorphic to \(p_{X_{\alpha}}^{-1}G\), where \(G\in\operatorname{D}_{\operatorname{coh}}^{\operatorname{b}}(\mathcal{O}_{S})\). We then choose a subanalytic triangulation of \(X\) adapted to \((X_{\alpha})_{\alpha\in A}\), provided by the triangulation theorem ([7, Th. 8.2.5]). Let \(\mathbf{T}=(T,\Delta)\) be the associated simplicial complex on \(X\). We identify each simplex with its homeomorphic image in \(X\), so that each \(X_{\alpha}\) is a union of simplices and each simplex is contained in some \(X_{\alpha}\).
For any simplex \(\sigma\), its open star \(U(\sigma)\) is contractible. The argument already used in [10, Proof of Prop. 3.3 (i)] shows that the entries of \(F\) are \(p_{U(\sigma)*}\)-acyclic and that, for any \(x\in|\sigma|\) and any open subset \(V\) of \(S\), the natural morphism
\[\Gamma(U(\sigma)\times V;F)\stackrel{{\sim}}{{\longrightarrow}} \Gamma(V;F|_{\{x\}\times S}). \tag{1.1}\]
is an isomorphism.
We conclude that the natural morphism of complexes
\[G_{\sigma}(F):=p_{U(\sigma)*}F|_{U(\sigma)\times S}\longrightarrow F|_{\{x\} \times S}, \tag{1.2}\]
is a quasi-isomorphism for any \(x\in|\sigma|\). By adjunction there exists a canonical morphism
\[p_{U(\sigma)}^{-1}G_{\sigma}(F)\longrightarrow F|_{U(\sigma)\times S}, \tag{1.3}\]
and by extension a canonical morphism of complexes
\[u_{G_{\sigma}(F)}:\mathbb{C}_{U(\sigma)\times S}\otimes p_{X}^{-1}G_{\sigma}(F )\longrightarrow F. \tag{1.4}\]
Thus we conclude that, for any \(\sigma\), for any \(x\in|\sigma|\), and any \(s\in S\), the germ
\[u_{G_{\sigma}(F),\,(x,s)}:(\mathbb{C}_{U(\sigma)}\boxtimes G_{\sigma}(F))_{(x, s)}\longrightarrow F_{(x,s)} \tag{1.5}\]
is a quasi-isomorphism.
Because \(F|_{\{x\}\times S}\in\operatorname{D}_{\operatorname{coh}}^{\operatorname{b}} (\mathcal{O}_{S})\), Assumption 1.3 entails the existence of a bounded complex \(L_{\sigma}(F)\) (depending functorially on \(G_{\sigma}(F)\)) of locally free \(\mathcal{O}_{S}\)-modules of finite type and of a morphism of complexes \(L_{\sigma}(F)\to G_{\sigma}(F)\) which is a quasi-isomorphism. By (1.3) we obtain a morphism
\[p_{U(\sigma)}^{-1}L_{\sigma}(F)\longrightarrow F|_{U(\sigma)\times S} \tag{1.6}\]
and thus a natural morphism
\[u_{L_{\sigma}(F)}:\mathbb{C}_{U(\sigma)\times S}\otimes p_{X}^{-1}L_{\sigma}(F )\longrightarrow F \tag{1.7}\]
such that
\[u_{L_{\sigma}(F),\,(x,s)}:(\mathbb{C}_{U(\sigma)}\boxtimes L_{\sigma}(F))_{(x, s)}\longrightarrow F_{(x,s)}\]
is a quasi-isomorphism, for any \(\sigma\), for any \(x\in|\sigma|\), and any \(s\in S\).
We shall now adapt the proof of Lemma A.2 (i) of [2]. Namely we shall prove by induction on \(i\) that there exist morphisms of complexes \(u_{i}:H_{i}\to F\) such that:
* \((*)\) Each \(H_{i}^{k}\) is a locally finite sum \(\bigoplus_{l\in I_{k,i}}\mathbb{C}_{U_{l}}\boxtimes H_{l}\) where each \(H_{l}\) is \(\mathcal{O}_{S}\)-locally free of finite rank and \(U_{l}=U(\sigma_{l})\), for some \(i\)-simplex \(\sigma_{l}\),
* \(u_{i}|_{|\mathbf{T}_{i}|\times S}:H_{i}|_{|\mathbf{T}_{i}|\times S}\to F|_{|\mathbf{T}_{i}|\times S}\) is a quasi-isomorphism, where \(\mathbf{T}_{i}\) denotes the \(i\)-skeleton of \(\mathbf{T}\).
The desired result follows with \(i=d_{X}\). For \(i=0\) we set \(u_{0}=\oplus_{\sigma\in\mathbf{T}_{0}}u_{L_{\sigma}(F)}\), which satisfies the desired condition in view of (1.5). The argument then follows as in Lemma A.2 (i) of [2], since condition \((*)\) is stable by taking mapping cones. q.e.d.
**Theorem 1.5**.: _The natural functor_
\[\mathcal{I}:\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c }}(p_{X}^{-1}\mathcal{O}_{S}))\longrightarrow\mathsf{D}^{\mathrm{b}}_{\mathbb{ R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}) \tag{1.8}\]
_is an equivalence of categories._
Proof.: Lemma 1.4 shows that \(\mathcal{I}\) is essentially surjective. For the sake of completeness we provide the proof of the full faithfulness of the functor \(\mathcal{I}\). We have to prove that, given \(F,G\in\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))\), the morphism
\[\mathcal{I}_{F,G}:\mathrm{Hom}_{\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{ R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))}(F,G)\longrightarrow\mathrm{Hom}_{ \mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})}(F,G)\]
is bijective.
Let us prove that \(\mathcal{I}_{F,G}\) is injective. Let \(\phi\in\mathrm{Hom}_{\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-} \mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))}(F,G)\) be represented by the diagram below in \(\mathsf{K}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{- 1}\mathcal{O}_{S}))\)
If \(\mathcal{I}_{F,G}(\phi)=0\) as a morphism in \(\mathrm{Hom}_{\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})}(F,G)\), we can insert the preceding diagram in the following commutative one, where \(E\in\mathsf{K}^{\mathrm{b}}(\mathrm{Mod}_{\text{w-}\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))\), \(H_{E}^{\star}\in\mathsf{K}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))\) and \(H_{E}^{\star}\to E\) is a quasi-isomorphism given by Lemma 1.4:
Hence \(\phi=0\) as a morphism in \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{- 1}\mathcal{O}_{S}))\).
Let us now prove the surjectivity of \(\mathcal{I}_{F,G}\). Let \(\phi\in\mathrm{Hom}_{\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})}(F,G)\) be given. Then we have a commutative diagram in \(\mathsf{K}^{\mathrm{b}}(\mathrm{Mod}_{\text{w-}\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))\) where \(H_{E}^{\star}\to E\) is a quasi-isomorphism provided by Lemma 1.4:
which proves that \(\mathcal{I}_{F,G}(\overline{\phi})=\phi\). q.e.d.
Let us prove that Assumption 1.3 is a necessary condition for Theorem 1.5.
**Proposition 1.6**.: _Let \(X\) be a real analytic manifold and let \(S\) be a complex manifold such that the natural functor_
\[\mathcal{I}:\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{ c}}(p_{X}^{-1}\mathcal{O}_{S}))\longrightarrow\mathsf{D}^{\mathrm{b}}_{\mathbb{R} \text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\]
_is an equivalence of categories. Then Assumption 1.3 holds true._
Proof.: If \(X\) is empty the result is trivial. Let us assume that \(X\) is non empty and let us choose a point \(x_{0}\in X\). Let \(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c},x_{0}}(p_{X}^{-1}\mathcal{O}_{S})\) (resp. \(\mathrm{Mod}_{x_{0}}(p_{X}^{-1}\mathcal{O}_{S})\)) be the full abelian subcategory of \(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\) (resp. \(\mathrm{Mod}(p_{X}^{-1}\mathcal{O}_{S})\)) whose objects have support contained in \(\{x_{0}\}\times S\). In view of [7, Prop. 1.7.11], \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c},x_{0}}(p_ {X}^{-1}\mathcal{O}_{S}))\) is equivalent to the full subcategory of \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{- 1}\mathcal{O}_{S}))\) (resp. \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{x_{0}}(p_{X}^{-1}\mathcal{O}_{S}))\) is equivalent to the full subcategory of \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}(p_{X}^{-1}\mathcal{O}_{S}))\)) whose objects have cohomology supported on \(\{x_{0}\}\times S\). Let \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c},x_{0}}(p_{X}^{-1} \mathcal{O}_{S})\) denote the full subcategory of \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\) whose objects have support contained in \(\{x_{0}\}\times S\), by the previous remark this is equivalent to \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(\mathrm{Mod}_{x_{0}}(p _{X}^{-1}\mathcal{O}_{S}))\simeq\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}( \mathcal{O}_{S})\). Then again \(\mathcal{I}\) induces an equivalence \(\mathcal{I}_{x_{0}}:\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-} \mathrm{c},x_{0}}(p_{X}^{-1}\mathcal{O}_{S}))\to\mathsf{D}^{\mathrm{b}}_{ \mathbb{R}\text{-}\mathrm{c},x_{0}}(p_{X}^{-1}\mathcal{O}_{S})\). We have the following commutative diagram where the vertical arrows are given by the fully faithful functor \(\mathbb{C}_{\{x_{0}\}}\boxtimes\).
The vertical and the bottom arrows are equivalences, thus the top arrow is also an equivalence of categories. q.e.d.
**Remark 1.7**.: The equivalence (1.8) encodes the following equivalences:
* if \(S=\{pt\}\) we get \(\mathsf{D}^{\mathrm{b}}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(X))\to \mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(X)\) see [7, Th. 8.4.5],
* if \(X=\{pt\}\) we get \(\mathsf{D}^{\mathrm{b}}(\mathrm{coh}(\mathcal{O}_{S}))\to\mathsf{D}^{ \mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) which is nothing but Assumption 1.3.
## 2. Relative constructible functions
### \(\mathcal{A}\)-constructible functions
Let \(\mathcal{A}\) be a commutative unital ring.
**Definition 2.1**.: A function \(\varphi:X\to\mathcal{A}\) is called \(\mathcal{A}\)-constructible if:
1. For each \(y\in\mathcal{A}\), \(\varphi^{-1}(y)\) is subanalytic.
2. The family \((\varphi^{-1}(y))_{y\in\mathcal{A}}\) is locally finite.
In particular, given \(\varphi\in\mathrm{CF}(X)\) and \(y\in\mathcal{A}\),
\[(y\cdot\varphi)(x):=y\cdot\varphi(x)\]
is a well defined \(\mathcal{A}\)-constructible function.
**Remark 2.2**.: In the case of \(\mathcal{A}=\mathbb{Z}\), the previous definition reduces to that of constructible function in [7, (9.7.1)]. We recall that the ring of constructible functions on \(X\) is denoted by \(\mathrm{CF}(X)\) in [7].
As a consequence, the set of \(\mathcal{A}\)-constructible functions is naturally endowed with a structure of an \(\mathcal{A}\)-algebra which we denote by \(\mathrm{CF}^{\mathcal{A}}(X)\). The presheaf
\[U\mapsto\mathrm{CF}^{\mathcal{A}}(U)\]
defines a sheaf which we denote by \(\mathcal{C}\mathcal{F}^{\mathcal{A}}_{X}\) (or \(\mathcal{C}\mathcal{F}^{\mathcal{A}}\) if there is no ambiguity) which is soft. In this way we can identify \(\mathrm{CF}(X)\) with a subring of \(\mathrm{CF}^{\mathcal{A}}(X)\) by associating to \(\varphi\in\mathrm{CF}(X)\) the \(\mathcal{A}\)-constructible function \(1\cdot\varphi\). We can adapt the contents of [7, Pag. 399] as follows:
**Proposition 2.3**.: _A function \(\varphi\) is \(\mathcal{A}\)-constructible if and only if there exists a \(\mu\)-stratification \(X=\bigsqcup_{\alpha}X_{\alpha}\) such that \(\varphi|_{X_{\alpha}}\) is constant for each \(\alpha\)._
Proof.: If \(\varphi\) is \(\mathcal{A}\)-constructible then, according to [7, Th. 8.3.20], we can choose a \(\mu\)-stratification \(X=\bigsqcup_{\alpha}X_{\alpha}\) refining the locally finite family of subanalytic subsets \((\varphi^{-1}(y))_{y\in\mathcal{A}}\). Thus for each \(\alpha\), \(\varphi|_{X_{\alpha}}\) is constant. The converse is clear. q.e.d.
Another characterization is the following. We denote by \(\mathbf{I}_{Y}\) the characteristic function of a subset \(Y\).
**Proposition 2.4**.: _The function \(\varphi\) is \(\mathcal{A}\)-constructible if and only if there exists a locally finite covering \(X=\bigcup_{\beta}Y_{\beta}\) where each \(Y_{\beta}\) is compact subanalytic and contractible, and a family \((y_{\beta})\in\mathcal{A}\) such that_
\[\varphi=\sum_{\beta}y_{\beta}\cdot\mathbf{I}_{Y_{\beta}} \tag{2.1}\]
Proof.: If \(\varphi\) is \(\mathcal{A}\)-constructible, considering a \(\mu\)-stratification as in Proposition 2.3, we apply the Triangulation Theorem (cf. [7, Th. 8.3.25]) to obtain a subanalytic triangulation of \(X\) such that each \(X_{\alpha}\) is a union of simplices \(Y_{\alpha_{j}}\) (hence compact subanalytic and contractible). We define \(y_{\alpha_{j}}=\varphi(x)\) with \(x\in Y_{\alpha_{j}}\). Thus \(Y_{\alpha_{j}}\) with \(\beta=\alpha_{j}\) and \(y_{\beta}=y_{\alpha_{j}}\) satisfy the desired conditions. Conversely, given the family \((Y_{\beta})\) satisfying (2.1), we can apply again [7, Th. 8.3.20] to obtain, by refining \((Y_{\beta})\), a \(\mu\)-stratification in the conditions of Proposition 2.3. q.e.d.
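For instance, on \(X=\mathbb{R}\), the \(\mathcal{A}\)-constructible function \(y\cdot\mathbf{I}_{(0,1)}\) can be brought to the form (2.1), which uses compact contractible pieces only, by writing
\[y\cdot\mathbf{I}_{(0,1)}=y\cdot\mathbf{I}_{[0,1]}+(-y)\cdot\mathbf{I}_{\{0\}}+(-y)\cdot\mathbf{I}_{\{1\}}.\]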
Here and in the sequel, unless ambiguity arises, \([\bullet]\) will denote the class of an object \(\bullet\) in a given Grothendieck group.
**Definition 2.5** (Index over \(\mathcal{A}\)).: An index over \(\mathcal{A}\) is a ring morphism
\[\psi^{\mathcal{A}}(\bullet):\mathsf{K}_{0}(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}))\longrightarrow\mathrm{CF}^{\mathcal{A}}(X).\]
### \(S\)-constructible functions
For the sake of simplicity, we shall denote by \(\mathsf{K}_{0}(S)\) the Grothendieck group of the triangulated category \(\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) and also denote by \(\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\) the Grothendieck group \(\mathsf{K}_{0}(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^ {-1}\mathcal{O}_{S}))\). We notice that, by [1, Prop. A.9.5], the natural morphism \(\mathsf{K}_{0}(\mathrm{coh}(\mathcal{O}_{S}))\to\mathsf{K}_{0}(S)\) (resp. \(\mathsf{K}_{0}(\mathrm{Mod}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1} \mathcal{O}_{S}))\to\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1} \mathcal{O}_{S})\)) is an isomorphism of groups. In both cases the inverse morphism is given by
\[[F]\longmapsto\sum_{j}(-1)^{j}[\mathcal{H}^{j}F]. \tag{2.2}\]
We recall that \(\mathsf{K}_{0}(S)\) (resp. \(\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\)) is a commutative ring whose product is defined by:
\[[F]\cdot[G]:=[F\otimes_{\mathcal{O}_{S}}^{L}G] \tag{2.3}\]
(resp.
\[[F]\cdot[G]:=[F\otimes_{p_{X}^{-1}\mathcal{O}_{S}}^{L}G]) \tag{2.4}\]
for \(F,\,G\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) (resp. \(F,G\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1} \mathcal{O}_{S})\)). Clearly, \([\mathcal{O}_{S}]\) (resp \([p_{X}^{-1}\mathcal{O}_{S}]\)) is the unit.
In the sequel we replace the commutative algebra \(\mathcal{A}\) by \(\mathsf{K}_{0}(S)\) and, for simplicity, we also replace the notations \(\mathrm{CF}^{\mathcal{A}}\) by \(\mathrm{CF}^{S}\) and \(\mathcal{A}\)-constructible by \(S\)-constructible. Thus from Definition 2.1 we obtain the notion of \(S\)-constructible function (or relative constructible function).
We define the morphism \(\chi^{S}:\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S}) \to\mathrm{CF}^{S}(X)\)
\[\chi^{S}([F])(x)=[F|_{\{x\}\times S}]_{\mathsf{K}_{0}(S)}=\sum_{i}(-1)^{i}[ \mathcal{H}^{i}(F|_{\{x\}\times S})]_{\mathsf{K}_{0}(S)} \tag{2.5}\]
for any \([F]\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\). This definition is clearly independent of the choice of the representative of \([F]\) in \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_ {S})\).
**Definition 2.6**.: For \([F]\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\), \(\chi^{S}([F])\) is the Euler-Poincare index of \([F]\).
**Remark 2.7**.: If \(\dim S=0\) the previous definition reduces to the definition of the Euler-Poincare index \(\chi:\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(X)\to\mathrm{CF}(X)\).
**Remark 2.8**.: In particular given \(\varphi=\chi([F])\) with \([F]\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(X)\) and \(G\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) then \(\varphi[G]=\chi^{S}[q_{X}^{-1}F\otimes p_{X}^{-1}G]\). Here \(q_{X}\) denotes the projection of \(X\times S\) on \(X\).
**Remark 2.9**.: Let us recall that any triangulated functor between two triangulated categories induces a functor between their \(\mathsf{K}_{0}\). Let \(Z\) be a subanalytic subset of \(X\) and let \(G\), \(H\) be complexes of coherent \(\mathcal{O}_{S}\)-modules. Then, \([G]_{\mathsf{K}_{0}(S)}=[H]_{\mathsf{K}_{0}(S)}\) implies \([\mathbb{C}_{Z}\boxtimes G]_{\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{ -1}\mathcal{O}_{S})}=[\mathbb{C}_{Z}\boxtimes H]_{\mathsf{K}_{\mathbb{R}\text{-} \mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})}\). Similarly, if \([\mathbb{C}_{Z}]_{\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(X)}=[\mathbb{C}_{Z ^{\prime}}]_{\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(X)}\) for some subanalytic locally closed subsets \(Z\) and \(Z^{\prime}\) in \(X\), then \([\mathbb{C}_{Z}\boxtimes G]_{\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{ -1}\mathcal{O}_{S})}=[\mathbb{C}_{Z^{\prime}}\boxtimes G]_{\mathsf{K}_{ \mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})}\). We may of course replace \(\mathbb{C}\) by any complex of finite dimensional vector spaces.
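As a basic example of (2.5), let \(Z\subset X\) be a locally closed subanalytic subset and \(G\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\). Since \((\mathbb{C}_{Z}\boxtimes G)|_{\{x\}\times S}\) is isomorphic to \(G\) for \(x\in Z\) and vanishes otherwise, we get
\[\chi^{S}([\mathbb{C}_{Z}\boxtimes G])=\mathbf{I}_{Z}\,[G]_{\mathsf{K}_{0}(S)},\]
a computation which is used in the surjectivity part of the proof of Theorem 2.10 below.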
We have the variant of [7, Th. 9.7.1]:
**Theorem 2.10**.: _Let \(X\) be a real analytic manifold and let \(S\) be a complex manifold. Then the morphism \(\chi^{S}\) in (2.5) is an isomorphism._
Proof.: Isomorphism (2.2) shows that, for \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O} _{S})\), \([F]=0\) if and only if
\[\sum_{j\,\text{odd}}[\mathcal{H}^{j}F]=\sum_{j\,\text{even}}[\mathcal{H}^{j}F].\]
Similarly to the proof in [7, Th. 9.7.1], any \(u\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\) may be represented by a single complex \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{ O}_{S})\) (i.e. \(u=[F]\)). We shall follow the line of the proof of [7, Th. 9.7.1] in the absolute case from which we take the notations.
(a) Let us prove that \(\chi^{S}\) is surjective. Let \(\varphi\in\mathrm{CF}^{S}(X)\). According to Proposition 2.4 we may choose a subanalytic stratification \(X=\bigsqcup_{\alpha\in A}X_{\alpha}\) and coherent complexes \(G_{\alpha}\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) such that \(\varphi=\sum_{\alpha\in A}\mathbf{I}_{X_{\alpha}}[G_{\alpha}]\). Then, \(F=\bigoplus_{\alpha\in A}\mathbb{C}_{X_{\alpha}}\boxtimes G_{\alpha}\) satisfies \(\chi^{S}([F])=\varphi\).
(b) Let us now prove the injectivity of \(\chi^{S}\). Let \(u\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\). As in the proof of [7, Th. 9.7.1], \(u\) may be represented by a single complex \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{ O}_{S})\). We may choose a \(\mu\)-stratification \(X=\bigsqcup_{\alpha}Z_{\alpha}\) such that \(\mathcal{H}^{j}F|_{Z_{\alpha}\times S}\) is isomorphic to \(p_{Z_{\alpha}}^{-1}G_{\alpha,j}\), for some coherent \(\mathcal{O}_{S}\)-module \(G_{\alpha,j}\).
Let \(X_{k}\) denote the (disjoint) union of the \(k\)-codimensional strata. By the same argument of [7, Th. 9.7.1] we have \(u=[F]=\sum_{k}[F_{X_{k}\times S}]\) and so
\[[F]=\sum_{j,k}(-1)^{j}[(\mathcal{H}^{j}F)_{X_{k}\times S}]\]
By assumption \(\chi^{S}(F)=0\) hence
\[[\oplus_{j\,\text{even}}(\mathcal{H}^{j}F)_{\{x\}\times S}]=[\oplus_{j\,\text {odd}}(\mathcal{H}^{j}F)_{\{x\}\times S}]\]
in \(\mathsf{K}_{0}(S)\), which implies by Remark 2.9
\[[\oplus_{j\,\text{even}}(\mathcal{H}^{j}F)_{Z_{\alpha}\times S}]=[\oplus_{j\, \text{odd}}(\mathcal{H}^{j}F)_{Z_{\alpha}\times S}]\]
for each \(\alpha\). Because the \(X_{k}\) are disjoint unions of \(Z_{\alpha}\)'s, we conclude that, for each \(k\),
\[[\oplus_{j\,\text{even}}(\mathcal{H}^{j}F)_{X_{k}\times S}]=[\oplus_{j\,\text {odd}}(\mathcal{H}^{j}F)_{X_{k}\times S}]\]
which implies that \([F]=0\). q.e.d.
**Remark 2.11**.: The proof of Theorem 2.10 shows that, given \([F]\in\mathsf{K}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\) with \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\), the support \(\operatorname{Supp}\chi^{S}([F])\), where we regard \(\chi^{S}([F])\) as a global section of the sheaf \(\mathcal{C}\mathcal{F}^{S}_{X}\), coincides with \(\operatorname{Supp}_{X}F\) (i.e. the projection on \(X\) of \(\operatorname{Supp}F\)).
**Example 2.12**.: Let us recall that given \(S\) a smooth projective algebraic curve we have \(\mathsf{K}_{0}(S)\simeq\mathbb{Z}\times\operatorname{Pic}(S)\simeq\mathbb{Z}^{2}\times\operatorname{Pic}_{0}(S)\) where \(\operatorname{Pic}_{0}(S)\) is the kernel of the degree map. In particular \(\mathsf{K}_{0}(\mathbb{P}^{1}(\mathbb{C}))\) is thus isomorphic, via \([E]\mapsto(\operatorname{rk}E,\deg E)\), to the unital ring \(\mathbb{Z}^{2}\) with the usual addition and the multiplication \((a,b)\cdot(c,d):=(ac,ad+bc)\), where, as expected, the unit is \((1,0)\), which corresponds to \([\mathcal{O}_{S}]\). In particular, if \((a,b)\in\mathbb{Z}^{2}\) corresponds to \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\) via the previous isomorphism and \(\chi^{S}\), then \((a,-b)\) corresponds to \([\boldsymbol{D}F]\).
The datum of a \(\mathbb{P}^{1}(\mathbb{C})\)-constructible function \(f:X\to\mathsf{K}_{0}(\mathbb{P}^{1}(\mathbb{C}))\) is equivalent to the datum of a locally finite sum
\[\sum_{i,k}(z_{i},z_{k})\mathbf{I}_{Z_{i,k}}\]
where each \((z_{i},z_{k})\in\mathbb{Z}^{2}\) and each \(Z_{i,k}\) is compact subanalytic contractible in \(X\).
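As a sanity check of this ring law, consider \(E=\mathcal{O}_{S}\oplus\mathcal{O}_{S}(1)\) on \(S=\mathbb{P}^{1}(\mathbb{C})\), whose class is \((2,1)\). Then
\[(2,1)\cdot(2,1)=(2\cdot 2,\,2\cdot 1+1\cdot 2)=(4,4),\]
in agreement with \(E\otimes E\simeq\mathcal{O}_{S}\oplus\mathcal{O}_{S}(1)^{\oplus 2}\oplus\mathcal{O}_{S}(2)\), which has rank \(4\) and determinant of degree \(4\).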
## 3. Operations on \(S\)-constructible functions
### Restriction to the fibers of \(p\)
The functor \(Li_{s}^{*}:\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\to\mathsf{ D}^{\mathrm{b}}_{\mathrm{fd}}(\mathbb{C})\) induces the functor
\[Li_{s}^{*}:\mathsf{K}^{S}_{\mathbb{R}\text{-c}}(X)\longrightarrow\mathsf{K}_{\mathbb{R}\text{-c}}(X),\qquad[F]\longmapsto[Li_{s}^{*}F],\]
which is well defined by Remark 2.9. In the case \(X=\{pt\}\) it reduces to \(Li_{s}^{*}:\mathsf{K}_{0}(S)\to\mathsf{K}_{0}(\mathbb{C})\simeq\mathbb{Z}\). We define the functor
\[Li_{s}^{*}:\mathrm{CF}^{S}(X)\longrightarrow\mathrm{CF}(X),\qquad\varphi\longmapsto Li_{s}^{*}\circ\varphi,\]
thus \(Li_{s}^{*}\circ\chi^{S}\simeq\chi\circ Li_{s}^{*}\).
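For instance, if \(\varphi=\mathbf{I}_{Z}[G]\) with \(G\) a locally free \(\mathcal{O}_{S}\)-module of rank \(r\), then \(Li_{s}^{*}\varphi=r\,\mathbf{I}_{Z}\) for every \(s\in S\), since in this case \(Li_{s}^{*}G\) is concentrated in degree \(0\) and equals the fiber \(G\otimes_{\mathcal{O}_{S}}\mathbb{C}_{s}\simeq\mathbb{C}^{r}\).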
### Integration on \(X\)
Let \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p_{X}^{-1}\mathcal{O}_{S})\) and let us assume that \(\mathrm{Supp}_{X}\,F\) is compact. Then, according to [9, Prop. 2.17], \(Rp_{X*}F\) is an object of \(\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\) (by identifying \(p_{X}\) with \(C_{pt}\times Id\), where \(C_{pt}:X\to\{pt\}\) is the constant map, which is proper on \(\mathrm{Supp}_{X}\,F\)). Moreover, the class of \(Rp_{X*}F\) in \(\mathsf{K}_{0}(S)\) only depends on the class \([F]\) in \(\mathsf{K}_{\mathbb{R}\text{-c}}(p_{X}^{-1}\mathcal{O}_{S})\). Following the analogous terminology in the absolute case (see [7, §9.1]) we introduce the functor \(\chi^{S}(X;\bullet):=[Rp_{X*}\bullet]\) from \(\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(p_{X}^{-1}\mathcal{O}_{S})\) to \(\mathsf{K}_{0}(S)\).
In view of Theorem 2.10, the following definition is well posed:
**Definition 3.1**.: Let us assume that \(\mathrm{Supp}\,\varphi\) is compact. We set
\[\int_{X}^{S}\varphi:=\chi^{S}(X;F)\]
for any \(F\) such that \(\chi^{S}([F])=\varphi\).
**Lemma 3.2**.: _Let \(\varphi\) be a constructible function on \(X\) with compact support and let \(G\) belong to \(\mathsf{D}^{\mathrm{b}}(\mathrm{coh}(\mathcal{O}_{S}))\). Then_
\[\int_{X}^{S}\varphi[G]=(\int_{X}\varphi)[G].\]
_In particular, if \(Z\) is a locally closed relatively compact subanalytic subset of \(X\), then_
\[\int_{X}^{S}\mathbf{I}_{Z}[G]=\int_{X}^{S}\boldsymbol{D}(\mathbf{I}_{Z})[G]\]
Proof.: Let \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-c}}(X)\) be such that \(\varphi=\chi(F)\). By definition and Remark 2.8, \(\int_{X}^{S}\varphi[G]=[Rp_{X*}(q_{X}^{-1}F\otimes p_{X}^{-1}G)]\), and by the projection formula this equals \([Rp_{X*}(q_{X}^{-1}F)\otimes G]\). Then it suffices to remark that \(Rp_{X*}(q_{X}^{-1}F)\) is the constant complex \(R\Gamma(X;F)_{S}\), so that

\[[Rp_{X*}(q_{X}^{-1}F)\otimes G]=[R\Gamma(X;F)_{S}]\cdot[G]=\chi(X;F)\cdot[G]=\left(\int_{X}\varphi\right)[G],\]

where the last equality holds by Remark 2.9.
For the second statement we recall that, by Poincare duality
\[\int_{X}\mathbf{I}_{Z}=\int_{X}\boldsymbol{D}(\mathbf{I}_{Z})\]
q.e.d.
Let \(\varphi\) be an \(S\)-constructible function on \(X\) with compact support. This condition allows us to assume that \(\varphi=\sum_{j\in J}\mathbf{I}_{Z_{j}}[G_{j}]\), with \(J\) finite, where the \(Z_{j}\)'s are compact, subanalytic and contractible, and the \(G_{j}\)'s are in \(\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\).
For each \(j\), we have by Lemma 3.2
\[\int_{X}^{S}\mathbf{I}_{Z_{j}}[G_{j}]=(\int_{X}\mathbf{I}_{Z_{j}})[G_{j}]=[G_{j}]\]
where the last equality holds because the \(Z_{j}\)'s are compact and contractible. Hence
\[\int_{X}^{S}\varphi=\sum_{j}[G_{j}].\]
**Example 3.3**.: Set \(X=\mathbb{R}^{2}\), \(S=\mathbb{P}^{1}(\mathbb{C})\), let \(Z_{1},Z_{2}\subset X\) be any subanalytic, contractible compact subsets of \(X\), and let \(\varphi=\mathbf{I}_{Z_{1}}[\mathcal{O}^{k_{1}}_{S}(i)]+\mathbf{I}_{Z_{2}}[\mathcal{O}^{k_{2}}_{S}(j)]=\mathbf{I}_{Z_{1}}(k_{1},k_{1}i)+\mathbf{I}_{Z_{2}}(k_{2},k_{2}j)\) for some positive integers \(k_{1},k_{2}\) and some integers \(i,j\). Then, following Example 2.12, \(\mathsf{K}_{0}(\mathbb{P}^{1}(\mathbb{C}))=\mathbb{Z}^{2}\) and

\[\int_{X}^{S}\varphi=(k_{1}+k_{2},k_{1}i+k_{2}j).\]
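The computation in Example 3.3 is purely mechanical once an \(S\)-constructible function is presented in the normal form of Proposition 2.4. The following minimal Python sketch (our own illustration, not part of the theory above) encodes classes in \(\mathsf{K}_{0}(\mathbb{P}^{1}(\mathbb{C}))\simeq\mathbb{Z}^{2}\) as (rank, degree) pairs and integrates a function given as a finite list of terms \(\mathbf{I}_{Z_{j}}[G_{j}]\), each contributing \(\chi(Z_{j})\cdot[G_{j}]\) by Lemma 3.2 (with \(\chi(Z_{j})=1\) for the compact contractible sets used above):

```python
# Toy model of K_0(P^1(C)) ~ Z^2: a class is a (rank, degree) pair.
def k0_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def k0_scale(n, a):
    # n-fold sum of a class, n an integer (e.g. an Euler characteristic)
    return (n * a[0], n * a[1])

def integrate(terms):
    """Relative integral over X of sum_j I_{Z_j}[G_j].

    `terms` is a list of pairs (chi_Zj, class_Gj); by Lemma 3.2 each
    term I_{Z_j}[G_j] contributes chi(Z_j) * [G_j] to the integral.
    """
    total = (0, 0)
    for chi_Z, g in terms:
        total = k0_add(total, k0_scale(chi_Z, g))
    return total

# Example 3.3 with contractible Z_1, Z_2 (so chi = 1):
# phi = I_{Z_1}[O(i)^{k1}] + I_{Z_2}[O(j)^{k2}]
k1, i, k2, j = 2, 1, 3, -2
phi = [(1, (k1, k1 * i)), (1, (k2, k2 * j))]
print(integrate(phi))  # -> (5, -4), i.e. (k1 + k2, k1*i + k2*j)
```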
### Operations on \(\mathrm{CF}^{S}(X)\)
Let \(X\) and \(Y\) be real analytic manifolds, while \(S\) and \(T\) are complex manifolds. Similarly to the absolute case treated in [7] we can define:
* The external tensor product \(\bullet\boxtimes_{S}\bullet\) from \(\mathrm{CF}^{S}(X)\boxtimes\mathrm{CF}^{S}(Y)\) to \(\mathrm{CF}^{S}(X\times Y)\) is given by \((\varphi\boxtimes_{S}\psi)(x,y):=\varphi(x)\cdot\psi(y)\), which extends as a morphism of sheaves \[\mathcal{C}\mathcal{F}^{S}_{X}\boxtimes_{S}\mathcal{C}\mathcal{F}^{S}_{Y}\longrightarrow\mathcal{C}\mathcal{F}^{S}_{X\times Y}.\]
* The external tensor product \(\bullet\boxtimes_{X}\bullet\) from \(\mathrm{CF}^{S}(X)\boxtimes_{X}\mathrm{CF}^{T}(X)\) to \(\mathrm{CF}^{S\times T}(X)\) is given by \((\varphi\boxtimes_{X}\psi)(x):=\varphi(x)\boxtimes\psi(x)\), which extends as a morphism of sheaves \[\mathcal{C}\mathcal{F}^{S}_{X}\boxtimes_{X}\mathcal{C}\mathcal{F}^{T}_{X}\longrightarrow\mathcal{C}\mathcal{F}^{S\times T}_{X}.\]
* For \(f:Y\to X\) and \(\varphi\in\mathrm{CF}^{S}(X)\), the pullback \(f^{*}\varphi\in\mathrm{CF}^{S}(Y)\) is given by \(f^{*}\varphi(y):=\varphi(f(y))\) and provides a morphism of sheaves \(f^{*}:f^{-1}\mathcal{C}\mathcal{F}^{S}_{X}\rightarrow\mathcal{C}\mathcal{F}^{S} _{Y}\).
* _Pullback in the parameter variable._ Let \(\pi:T\to S\) be a morphism and let \(\varphi\in\operatorname{CF}^{S}(X)\); we have \(\varphi=\chi^{S}([F])\) for some \(F\in\mathsf{D}^{\mathrm{b}}_{\mathbb{R}\text{-}\mathrm{c}}(p_{X}^{-1}\mathcal{O}_{S})\), and we set \(\pi^{*}\varphi:=\chi^{T}([L\pi^{*}F])\in\operatorname{CF}^{T}(X)\), which is well defined in view of Proposition 1.1 and Remark 2.9.
* _The relative Euler formula._ Let \(Z\) be a relatively compact subanalytic open subset of \(X\) such that \(\overline{Z}\) is a contractible \(d\)-dimensional topological manifold with boundary \(\delta Z:=\overline{Z}\setminus Z\), and let \(G\in\mathsf{D}^{\mathrm{b}}_{\mathrm{coh}}(\mathcal{O}_{S})\). We have
\[\chi^{S}(\delta Z;\mathbb{C}_{\delta Z}\boxtimes_{S}G)=\int_{X}^{S}\mathbf{I}_{\delta Z\times S}[G]=\int_{X}^{S}\mathbf{I}_{\overline{Z}\times S}[G]-\int_{X}^{S}\mathbf{I}_{Z\times S}[G].\]
According to Lemma 3.2,
\[\int_{X}^{S}\mathbf{I}_{Z\times S}[G]=\int_{X}^{S}\boldsymbol{D}(\mathbf{I}_{Z\times S})[G]=\int_{X}^{S}(-1)^{d}\mathbf{I}_{\overline{Z}\times S}[G],\]
therefore we conclude the relative version of the Euler formula (cf. [7, (9.7.13)]; a one-dimensional check is worked out after this list):
\[\chi^{S}(\delta Z;\mathbb{C}_{\delta Z}\boxtimes_{S}G)=(1-(-1)^{d})[G].\]
* Let \(f:Y\to X\) be a morphism of real analytic manifolds and let \(\psi\in\Gamma(Y;\mathcal{C}\mathcal{F}_{Y}^{S})\) be such that \(f\) is proper on \(\operatorname{Supp}\psi\). Then Theorem 2.10 entails \[\int_{f}^{S}\boldsymbol{D}_{Y}(\psi)=\boldsymbol{D}_{X}\left(\int_{f}^{S}\psi\right).\]
* We have the following relative version of [12, Th. 2.2 (iv)]. Let us consider a cartesian diagram of morphisms of real analytic manifolds:
\[\begin{array}{ccc}Y^{\prime}&\xrightarrow{\;g^{\prime}\;}&Y\\ \downarrow{f^{\prime}}&&\downarrow{f}\\ X^{\prime}&\xrightarrow{\;g\;}&X\end{array}\tag{3.1}\]
According to [7, Prop. 5.11], we have \(g^{-1}\circ Rf_{!}\stackrel{{\sim}}{{\to}}Rf_{!}^{\prime}\circ g ^{\prime-1}\). Via the isomorphism in Theorem 2.10, we conclude that, for an \(S\)-constructible function \(\psi\) on \(Y\) such that \(f\) is proper on \(\operatorname{Supp}\psi\), we have
\[g^{*}\int_{f}^{S}\psi=\int_{f^{\prime}}^{S}g^{\prime*}\psi.\]
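As the one-dimensional check announced above, take \(X=\mathbb{R}\) and \(Z=(0,1)\), so that \(d=1\), \(\overline{Z}=[0,1]\) is compact and contractible, and \(\delta Z=\{0\}\sqcup\{1\}\). Then
\[\chi^{S}(\delta Z;\mathbb{C}_{\delta Z}\boxtimes_{S}G)=\int_{X}^{S}(\mathbf{I}_{\{0\}}+\mathbf{I}_{\{1\}})[G]=2[G]=(1-(-1)^{1})[G],\]
as predicted by the relative Euler formula.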
### Complementary results (cf. [12])
Let \(X\) and \(Y\) be real analytic manifolds and let \(T\subset X\times Y\) be a locally closed subanalytic subset. Let \(q_{1}\), respectively \(q_{2}\), be the first, respectively the second, projection and let \(f\), respectively \(g\), be their restrictions to \(T\). We assume that \(q_{2}\) is proper on \(\overline{T}\).
For \(\varphi\in\operatorname{CF}^{S}(X)\), we can define the relative Radon transform \(R_{T}^{S}(\varphi)\) as the \(S\)-constructible function on \(Y\) given by
\[R_{T}^{S}(\varphi):=\int_{g}^{S}f^{*}\varphi=\int_{q_{2}}^{S}(q_{1}^{*}\varphi )\mathbf{I}_{T}\]
We can recover an inversion formula as in [12, Th. 3.11] in the relative setting, following the main ingredients in loc. cit., which we do not recall. For example, in loc. cit., when \(K\) is a compact subanalytic subset of a \(3\)-dimensional vector space, by identifying \(K\) with the constructible function \(\mathbf{I}_{K}\), the inversion formula allows one to reconstruct \(K\) from its number of connected components and holes. In the relative setting the same result holds true, identifying \(K\) with \(\mathbf{I}_{K}[\mathcal{O}_{S}]\).
Let \(Z\) be a real analytic manifold and let \(\tau:E\to Z\) be a vector bundle on \(Z\). As in [7, p. 402], we may define the subsheaf \(\mathcal{C}\mathcal{F}^{S}_{\mathbb{R}^{+}}\) of \(\mathcal{C}\mathcal{F}^{S}\) consisting of functions constant along the orbits of \(\mathbb{R}^{+}\) and, accordingly, denote by \(\mathrm{CF}^{S}_{\mathbb{R}^{+}}\) the module of global sections of \(\mathcal{C}\mathcal{F}^{S}_{\mathbb{R}^{+}}\). Then we can adapt the results of [7] in order to define the Fourier transform of a function \(\varphi\in\mathrm{CF}^{S}_{\mathbb{R}^{+}}\). As a natural follow-up, one derives the operations of specialization and microlocalization of \(S\)-constructible functions along a closed submanifold \(M\subset X\). However, we do not enter into the details of these results because the proofs are the same as those in [7].
|
2303.08464 | A $\mathbb{Z}_2$ invariant for chiral and particle-hole symmetric
topological chains | We define a $\mathbb{Z}_2$-valued topological and gauge invariant associated
to any 1-dimensional, translation-invariant topological insulator which
satisfies either particle-hole symmetry or chiral symmetry. The invariant can
be computed from the Berry phase associated to a suitable basis of Bloch
functions which is compatible with the symmetries. We compute the invariant in
the Su-Schrieffer-Heeger model for chiral symmetric insulators, and in the
Kitaev model for particle-hole symmetric insulators. We show that in both cases
the $\mathbb{Z}_2$ invariant predicts the existence of zero-energy boundary
states for the corresponding truncated models. | Domenico Monaco, Gabriele Peluso | 2023-03-15T09:09:22Z | http://arxiv.org/abs/2303.08464v1 | # \(\mathbb{Z}_{2}\) invariant for CS and PHS topological chains
###### Abstract
We define a \(\mathbb{Z}_{2}\)-valued topological and gauge invariant associated to any 1-dimensional, translation-invariant topological insulator which satisfies either particle-hole symmetry or chiral symmetry. The invariant can be computed from the Berry phase associated to a suitable basis of Bloch functions which is compatible with the symmetries. We compute the invariant in the Su-Schrieffer-Heeger model for chiral symmetric insulators, and in the Kitaev model for particle-hole symmetric insulators. We show that in both cases the \(\mathbb{Z}_{2}\) invariant predicts the existence of zero-energy boundary states for the corresponding truncated models.
## I Introduction
The field of topological insulators has attracted the attention of both the physics and mathematics community, due to their potential applications for solid state devices and quantum computation as well as their rich mathematical structure. A topological insulator is a solid described by a Hamilton operator, acting on an appropriate Hilbert space \(\mathcal{H}\), whose spectrum is gapped; this allows to define the occupied energy levels as those below the spectral gap. For translation invariant systems, the classification of topological phases of matter is then tantamount to the classification of the vector bundles over momentum space spanned by these energy states; usually, their description is mediated by numerical topological invariants, which label the different phases. In particular, different scenarios arise if the system is constrained by certain pseudo-symmetries, which can enrich or trivialize the topology of the manifold of occupied states. From the physical point of view, the most interesting feature of these materials is that, when they are truncated, a non-trivial topology for the infinite system drives the existence of metallic (i.e. gapless) states which are spatially localized near the cut: this statement is the celebrated principle of _bulk-boundary correspondence_ for topological insulators. For extended reviews of this rich research line, the reader is referred to Refs. [1; 3; 4].
In this contribution, we will be concerned with systems which are 1-dimensional, display a discrete translational symmetry modelling the crystalline nature of the solid at hand, and where one of two further special types of (pseudo-)symmetries, namely _particle-hole_ or _chiral_ symmetry, is present. We adopt throughout the established terminology of "symmetries" when referring to the particle-hole or chiral operators, even if they anti-commute rather than commute with the Hamiltonian (see below) [17]; other points of view have been recently advocated [24]. The Hamiltonian of the quantum system acts on the one-particle Hilbert space of a 1-dimensional crystal (see below) and is assumed to be spectrally gapped (without loss of generality, with a spectral gap around zero energy), in order for the system to be classified as an insulator. The crystalline structure of configuration space is reflected at the level of the Hamiltonian by requiring that there is a family of translation operators \(T_{\lambda}\), with \(\lambda\) in a lattice \(\Gamma\subset\mathbb{R}\), such that \([H,T_{\lambda}]=0\) for all \(\lambda\in\Gamma\); for simplicity we will assume \(\Gamma=\mathbb{Z}\) in what follows. By passing to the crystal-momentum representation via the Bloch-Floquet transform (see Section II), the Hamiltonian can then be fibered into a family of matrices \(H(k)\), with \(k\in\mathbb{R}/(2\pi\mathbb{Z})\simeq S^{1}\) denoting the Bloch momentum spanning the Brillouin torus. The associated spectral projection \(P_{-}(k)\) onto the negative energy levels (the "particles"), complemented by the orthogonal projection \(P_{+}(k)=\mathds{1}-P_{-}(k)\) onto positive energy levels (the "holes"), then defines a _vector bundle_ over \(S^{1}\), called the Bloch bundle [11; 13]. It is the geometry of this bundle (or rather pair of bundles) that will be investigated in this paper; the reader should notice that, at this stage, both \(P_{-}(k)\) and \(P_{+}(k)\) define _topologically trivial_ bundles, as Hermitian bundles over \(S^{1}\).
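Since the invariant introduced below is extracted from a Berry phase, it may help to record at the outset how such a phase is computed in practice. The following Python sketch (our own illustration, not taken from the paper's references) evaluates the gauge-invariant Wilson-loop discretization of the Berry phase of the occupied band of a two-band Bloch Hamiltonian; as a test case we use the standard SSH-type fiber \(H(k)\) with intra-cell hopping \(v\) and inter-cell hopping \(w\), anticipating the model of Section V:

```python
import numpy as np

def ssh_h(k, v, w):
    # Standard two-band SSH Bloch fiber with hoppings v (intra) and w (inter)
    q = v + w * np.exp(-1j * k)
    return np.array([[0.0, q], [np.conj(q), 0.0]])

def berry_phase(hk, nk=513):
    """Wilson-loop (Berry/Zak) phase of the lowest band of hk over S^1."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk)
    us = []
    for k in ks[:-1]:
        _, vecs = np.linalg.eigh(hk(k))   # eigenvalues in ascending order
        us.append(vecs[:, 0])             # occupied (negative-energy) state
    us.append(us[0])                      # close the loop: u(2*pi) = u(0)
    prod = 1.0 + 0.0j
    for a, b in zip(us[:-1], us[1:]):
        prod *= np.vdot(a, b)             # overlap <u_k | u_{k+dk}>
    return -np.angle(prod)                # gauge invariant modulo 2*pi

print(berry_phase(lambda k: ssh_h(k, 1.0, 0.3)))  # ~ 0     (trivial phase)
print(berry_phase(lambda k: ssh_h(k, 0.3, 1.0)))  # ~ +-pi  (topological phase)
```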
Further symmetries of the Hamiltonian are specified by the existence of a unitary or anti-unitary operator \(S\) (in the case of chiral or particle-hole symmetries, respectively) such that \(SH=-HS\). If, as is common in physical applications, one also requires that \(S\) acts only on internal degrees of freedom in the unit cell of the lattice, then these symmetries descend also to the Bloch-Floquet fibers \(H(k)\) and \(P(k)\) in a \(k\)-independent way. These further symmetries will allow us to detect a finer geometry in the "particle" and "hole" bundles, which will be quantified in the form of a _Berry phase_. We will show that this phase is quantized to an integer, and that its parity is a gauge invariant of the bundle. This will lead us to the definition of a \(\mathbb{Z}_{2}\)-valued gauge invariant of any chiral- or particle-hole-symmetric topological insulator, see Section IV (in particular Definition IV.9).
We will then compute this invariant in two prototypical models for these symmetry classes of topological insulators: the chiral-symmetric SSH model [21], presented in Section V, and the particle-hole-symmetric Kitaev chain [9], discussed in Section VI. In both models, we will also show how a non-trivial invariant predicts the presence of _zero-energy boundary states_ when the 1-dimensional system is truncated to a half-lattice, in agreement with bulk-boundary correspondence: in particular, in the Kitaev
model, these boundary states are predicted to behave as Majorana fermions, and would represent topologically robust states amenable to applications in quantum computation. We do not claim to have a general proof of the bulk-boundary correspondence for our formulation of the bulk topological invariant, which at any rate can at most predict the parity of the number of zero-energy boundary states, and is not expected to yield a complete classification for chiral classes of topological insulators [4; 15]. Indeed, the case of chiral-symmetric topological chains and their bulk-edge correspondence, formulated in terms of a \(\mathbb{Z}\)-valued (rather than \(\mathbb{Z}_{2}\)-valued) topological invariant, has already been mathematically investigated, even in models where disorder breaks translation invariance [19; 5; 15]. Nonetheless, the evidence provided by these two paradigmatic models prompts us to study the link between our invariant and boundary states, which we will investigate in future work.
## II Fiber decomposition and spectral projections
### Mathematical setting
We now pass to a more precise formulation of the models treated in this paper. We will discuss only so-called _tight-binding models_, and assume therefore that the configuration space of the quantum system is modelled after a 1-dimensional lattice, say \(\mathbb{Z}\). Allowing for \(N\) further internal degrees of freedom per unit cell, possibly including a sub-lattice index, the one-particle Hilbert space will be therefore taken to be
\[\mathcal{H}=\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}. \tag{1}\]
The restriction of our treatment to these models, which are nonetheless of common use in condensed matter physics, is justified as follows: as will be further detailed in Remark II.9, the presence of a chiral or particle-hole symmetry forces the spectrum of the Hamiltonian \(H\) to be symmetric around zero, namely \(\mu\in\sigma(H)\Rightarrow-\mu\in\sigma(H)\). Since it is customary in quantum mechanics to assume a stability condition on the energy operator, namely that \(H\) is bounded from below, the previous condition implies that the Hamiltonian is bounded, and therefore precludes the use of continuum Schrodinger-type operators on \(L^{2}(\mathbb{R})\); notice, however, that there have been recent proposals [20] to obtain discrete models for chiral chains starting from Schrodinger operators on the line, under a suitable limit of deep potential wells. We therefore also exclude Dirac-type Hamiltonians (which can be particle-hole symmetric and unbounded both from above and from below) from our treatment. One could at any rate view the discrete models introduced below as effective models, derived from one in the continuum via projection onto a gapped spectral subspace, under an adiabatic decoupling, consisting of some "conduction" and "valence" bands of the continuum model: provided these bands exhibit this chiral or particle-hole symmetry, the tight-binding approximation would yield a symmetric lattice model.
The first set of hypotheses on the model Hamiltonian deals with the crystalline structure of configuration space. Let us therefore denote by
\[T_{\lambda}:\mathcal{H}\to\mathcal{H},\quad(T_{\lambda}\psi)_{n}:=\psi_{n+ \lambda}\,,\quad\psi=(\psi_{n})\in\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\]
the translation operators.
**Assumption II.1** (Translation-invariant insulator).: The Hamiltonian \(H\) acts on the Hilbert space \(\mathcal{H}\) as in (1), and satisfies the following properties.
1. It is a bounded, self-adjoint operator on \(\mathcal{H}\).
2. It is _translation-invariant_, namely \([T_{\lambda},H]=0\) for all \(\lambda\in\mathbb{Z}\).
3. The spectrum of \(H\) has a gap around zero, namely there exists \(g>0\) such that \((-g,g)\cap\sigma(H)=\emptyset\).
It is easy to verify that the action of a translation-invariant self-adjoint operator \(H\) onto a vector \(\psi=(\psi_{n})\in\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\) is necessarily of the form
\[(H\psi)_{n}=A_{0}\,\psi_{n}+\sum_{j\in\mathbb{N}^{*}}A_{j}\,\psi_{n+j}+A_{j}^ {*}\,\psi_{n-j} \tag{2}\]
for an appropriate sequence of matrices \(A_{j}\in M_{N}(\mathbb{C})\), \(j\in\mathbb{N}\), with \(A_{0}=A_{0}^{*}\). Alternatively, if we denote by \(\{e_{n}\}_{n\in\mathbb{Z}}\) the canonical basis of \(\ell^{2}(\mathbb{Z})\) whose components are \((e_{n})_{m}=\delta_{n,m}\) for \(n,m\in\mathbb{Z}\), we can write
\[H\,(e_{n}\otimes v)=e_{n}\otimes A_{0}v+\sum_{j\in\mathbb{N}^{*}}e_{n-j}\otimes A _{j}\,v+e_{n+j}\otimes A_{j}^{*}\,v,\quad n\in\mathbb{Z},\,v\in\mathbb{C}^{N}\,, \tag{3}\]
and extend the definition of \(H\) from there by linearity. From a physical point of view \(A_{j}\) can be interpreted as a matrix-valued "hopping probability" for a particle to make a jump of \(j\) sites in the lattice. Boundedness of \(H\) implies some form of decay in the matrix norms \(\left\|A_{j}\right\|\): to simplify the setting even further, we will be content to treat _finite-range operators_, and assume that
**Assumption II.2** (Finite-range condition).: With \(A_{j}\in M_{N}(\mathbb{C})\) as above, there exists \(R\in\mathbb{N}\) such that
\[A_{j}=0\quad\text{for}\quad j>R.\]
Such operators are sometimes called _band Jacobi operators_ in the mathematics literature [22]. Assumptions II.1 and II.2 will be our standing hypotheses throughout this paper, and will not be recalled further.
Finally, the models of interest for this contribution satisfy either particle-hole or chiral symmetry, as specified in the following two Assumptions.
**Assumption II.3** (Particle-Hole Symmetry, PHS).: There exists an _anti-unitary_ operator \(S_{\text{PH}}\) on \(\mathcal{H}=\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\) such that
1. \(S_{\text{PH}}=\mathds{1}_{\ell^{2}(\mathbb{Z})}\otimes C\), where \(C\) is an anti-unitary operator on \(\mathbb{C}^{N}\) (namely, \(C=U\,K\), where \(U\in M_{N}(\mathbb{C})\) is a unitary matrix and \(K\) is the complex-conjugation operator on \(\mathbb{C}^{N}\));
2. \(S_{\text{PH}}\,H=-H\,S_{\text{PH}}\).
**Assumption II.4** (Chiral Symmetry, CS).: There exists a _unitary_ operator \(S_{\text{C}}\) on \(\mathcal{H}=\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\) such that
1. \(S_{\text{C}}=\mathds{1}_{\ell^{2}(\mathbb{Z})}\otimes S\), where \(S\) is a unitary matrix on \(\mathbb{C}^{N}\);
2. \(S_{\text{C}}\,H=-H\,S_{\text{C}}\).
A few comments on the definition of these symmetry operators are in order.
* Contrary to what is usually assumed in the literature on topological phases of matter [17], we do not need to assume anything concerning the squares of the symmetry operators \(S_{\text{PH}}\) and \(S_{\text{C}}\). For representation-theoretic reasons, it is always the case that \(S_{\text{C}}^{2}=\mathds{1}_{\mathcal{H}}\), while either \(S_{\text{PH}}^{2}=+\mathds{1}_{\mathcal{H}}\) (_even_ PHS) or \(S_{\text{PH}}^{2}=-\mathds{1}_{\mathcal{H}}\) (_odd_ PHS). These specifications will be immaterial for our discussion.
* The structure required on both symmetry operators, namely that they act trivially on the \(\ell^{2}(\mathbb{Z})\)-leg of the tensor product defining \(\mathcal{H}\) in (1), may seem restrictive, but it is often satisfied in physical models of interest such as the ones presented in Sections V and VI. Notice that it implies in particular that the symmetry operators are also translation-invariant, namely \([S_{*},T_{\lambda}]=0\) for all \(\lambda\in\mathbb{Z}\) and for \(S_{*}\in\{S_{\text{PH}},S_{\text{C}}\}\).
### Bloch-Floquet representation
It is natural and convenient to exploit the translational symmetry of the operators considered in order to simplify their analysis. This can be done by using the theory of decomposable operators, see Ref. [16], Sec. XIII.16, and a vector-valued form of the Fourier series, called _Bloch-Floquet transform_ in the context of condensed matter physics. We review this method in this Section for the finite-range, translation-invariant Hamiltonians described previously, commenting in particular on the interplay of the Bloch-Floquet transform with PHS and CS.
Denote as before by \(\{e_{n}\}_{n\in\mathbb{Z}}\) the canonical basis of \(\ell^{2}(\mathbb{Z})\). Set
\[\mathcal{F}\colon\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\to L^{2}(S^{1};\mathbb{C}^{N}),\quad\mathcal{F}(e_{n}\otimes v):=\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}}\,v,\qquad n\in\mathbb{Z},\,v\in\mathbb{C}^{N}.\]
This is clearly a bijective linear isometry because it sends an orthonormal basis to another orthonormal basis: \(\mathcal{F}\) is then called the Bloch-Floquet (BF) transform. We then define
\[\widehat{H}:=\mathcal{F}\,H\,\mathcal{F}^{-1}\,. \tag{4}\]
Analogously, if the PHS or CS Assumptions II.3 and II.4 hold, we can similarly define
\[\widehat{S_{\text{PH}}}=\mathcal{F}\,S_{\text{PH}}\mathcal{F}^{-1}\quad\text{ and}\quad\widehat{S_{\text{C}}}=\mathcal{F}\,S_{\text{C}}\,\mathcal{F}^{-1}\,.\]
Assumptions II.3 and II.4 then imply at once that
\[\widehat{S_{\text{PH}}}\,\widehat{H}=-\widehat{H}\,\widehat{S_{\text{PH}}}\quad\text{or}\quad\widehat{S_{\text{C}}}\,\widehat{H}=-\widehat{H}\,\widehat{S_{\text{C}}}\,. \tag{5}\]
**Lemma II.5**.: _The operator \(\widehat{H}\) is decomposable in the sense of Ref. [16], Sec. XIII.16, namely there exists a family of self-adjoint matrices \(H(k)\in M_{N}(\mathbb{C})\), called the fiber operators, such that_
\[\left(\widehat{H}u\right)(k)=H(k)\,u(k)\quad\text{for }\quad u\in L^{2}(S^{1}; \mathbb{C}^{N})\,. \tag{6}\]
_With \(\{A_{j}\}_{j\in\mathbb{N}}\) as in (2) or (3), the fiber operators are explicitly written in the following way:_
\[H(k)=A_{0}+\sum_{j=1}^{R}\mathrm{e}^{-\mathrm{i}jk}A_{j}+\mathrm{e}^{\mathrm{i }jk}A_{j}^{*}\,. \tag{7}\]
Proof.: In the following, we identify for notational convenience
\[L^{2}(S^{1};\mathbb{C}^{N})\simeq L^{2}(S^{1})\otimes\mathbb{C}^{N}\,.\]
For \(n\in\mathbb{Z}\) and \(v\in\mathbb{C}^{N}\), let us then compute, using (3),
\[\widehat{H}\left(\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}} \otimes v\right) =\mathcal{F}\left[e_{n}\otimes A_{0}v+\sum_{j=1}^{R}e_{n-j} \otimes A_{j}v+e_{n+j}\otimes A_{j}^{*}v\right]\] \[=\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}}\otimes A_{0}v+ \sum_{j=1}^{R}\frac{\mathrm{e}^{\mathrm{i}(n-j)k}}{\sqrt{2\pi}}\otimes A_{j} v+\frac{\mathrm{e}^{\mathrm{i}(n+j)k}}{\sqrt{2\pi}}\otimes A_{j}^{*}v\] \[=\left[A_{0}+\sum_{j=1}^{R}\mathrm{e}^{-\mathrm{i}jk}A_{j}+ \mathrm{e}^{\mathrm{i}jk}A_{j}^{*}\right]\,\left(\frac{\mathrm{e}^{\mathrm{ i}nk}}{\sqrt{2\pi}}\otimes v\right)\,.\]
Since the functions \(\left\{\mathrm{e}^{\mathrm{i}nk}/\sqrt{2\pi}\right\}_{n\in\mathbb{Z}}\) span \(L^{2}(S^{1})\) orthonormally, the conclusion follows by linearity.
**Remark II.6**.: The matrices \(H(k)\) are \(2\pi\)-periodic and _analytic_ (in fact, entire) in the variable \(k\), as each entry is a finite sum of complex exponential functions. Provided \(H\) is a bounded operator and the matrix norms of the \(A_{j}\)'s decay sufficiently rapidly, the same holds true even if we relax the finite-range condition, Assumption II.2, by the Vitali-Porter theorem [18] (possibly restricting the analyticity domain to a strip of finite width around the real axis).
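For readers who wish to experiment numerically, formula (7) translates directly into code. The following is a minimal Python sketch; the helper name `fiber_hamiltonian` and the sample matrices are ours, chosen for illustration only.

```python
import numpy as np

def fiber_hamiltonian(k, A):
    """Fiber operator (7): H(k) = A_0 + sum_{j=1}^R e^{-ijk} A_j + e^{ijk} A_j^*,
    where A = [A_0, A_1, ..., A_R] lists the finite-range hopping matrices."""
    H = A[0].astype(complex)
    for j in range(1, len(A)):
        H = H + np.exp(-1j * j * k) * A[j] + np.exp(1j * j * k) * A[j].conj().T
    return H

# Illustrative two-band, range-1 data with A_0 self-adjoint.
A0 = np.array([[0.0, 0.5], [0.5, 0.0]])
A1 = np.array([[0.0, 0.0], [1.0, 0.0]])
Hk = fiber_hamiltonian(0.7, [A0, A1])
assert np.allclose(Hk, Hk.conj().T)   # self-adjointness of H(k)
print(np.linalg.eigvalsh(Hk))         # Bloch bands at k = 0.7
```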
**Lemma II.7**.: _If the PHS Assumption II.3 or the CS Assumption II.4 holds, then_
\[\left(\widehat{S_{\mathrm{PH}}}\,u\right)(k)=Cu(-k)\quad\text{ or}\quad\left(\widehat{S_{\mathrm{C}}}\,u\right)(k)=Su(k),\qquad u\in L^{2}(S^{1}; \mathbb{C}^{N}),\]
_where the (anti-)unitary operators \(C,S\colon\mathbb{C}^{N}\to\mathbb{C}^{N}\) are as in the aforementioned Assumptions and act point-wise on \(u\in L^{2}(S^{1};\mathbb{C}^{N})\)._
Proof.: As in the previous proof, by (anti-)linearity it suffices to compute the action of \(\widehat{S_{\mathrm{PH}}}\) and \(\widehat{S_{\mathrm{C}}}\) on functions \(u(k)\) of the form \(u(k)=\mathrm{e}^{\mathrm{i}nk}/\sqrt{2\pi}\otimes v\), for \(n\in\mathbb{Z}\) and \(v\in\mathbb{C}^{N}\). We have then
\[\widehat{S_{\mathrm{PH}}}\,\left(\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi} }\otimes v\right)=\mathcal{F}\left[S_{\mathrm{PH}}(e_{n}\otimes v)\right]= \mathcal{F}(e_{n}\otimes C\,v)=\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}} \otimes C\,v=C\,\left(\frac{\mathrm{e}^{-\mathrm{i}nk}}{\sqrt{2\pi}}\otimes v\right)\]
as \(C\) is _anti_-linear.
Similarly we have
\[\widehat{S_{\mathrm{C}}}\,\left(\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi} }\otimes v\right)=\mathcal{F}\left[S_{\mathrm{C}}(e_{n}\otimes v)\right]= \mathcal{F}(e_{n}\otimes Sv)=\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}} \otimes Sv=S\left(\frac{\mathrm{e}^{\mathrm{i}nk}}{\sqrt{2\pi}}\otimes v\right)\]
as instead \(S\) is linear.
**Corollary II.8**.: _If the PHS Assumption II.3 holds, then the fiber operators \(H(-k)\) and \(-H(k)\) are anti-unitarily intertwined by the relation_
\[CH(-k)=-H(k)\,C.\]
_Similarly, if the CS Assumption II.4 holds, then the fiber operators \(H(k)\) and \(-H(k)\) are unitarily intertwined by the relation_
\[SH(k)=-H(k)\,S.\]
Proof.: Both statements immediately follow by combining (5) and (6).
**Remark II.9**.: The above condition gives us useful information regarding the spectrum of the fiber operators \(H(k)\) and therefore of the Hamiltonian \(H\). Indeed, assume that \(v\in\mathbb{C}^{N}\) is an eigenvector of \(H(k)\) of eigenvalue \(E\). Let the PHS Assumption II.3 hold. Then
\[H(-k)Cv=-CH(k)v=-C(Ev)=-\overline{E}Cv.\]
Due to the self-adjointness of \(H(k)\), we have \(E=\overline{E}\) and therefore \(Cv\) is an eigenvector of \(H(-k)\) with opposite eigenvalue \(-E\). Similarly, if instead the CS Assumption II.4 holds, then
\[H(k)Sv=-SH(k)v=-S(Ev)=-ESv\]
and once again we find that \(Sv\) is an eigenvector for \(H(k)\) with opposite eigenvalue \(-E\).
From the theory of decomposable operators presented in Ref. [16], Theorems XIII.85-86, we know that the spectrum of \(\widehat{H}\) (and therefore that of \(H=\mathcal{F}^{-1}\,\widehat{H}\,\mathcal{F}\)) is obtained as the union of the spectra of \(H(k)\) as \(k\) ranges over \(S^{1}\). The above considerations allow us then to deduce that \(\sigma(H)\) must be symmetric around \(E=0\). Moreover, in view of the insulator Assumption II.1, the spectral gap of \(H\) together with the analytic dependence of \(H(k)\) and of its spectrum on \(k\in S^{1}\) (see Remark II.6) imply that \(0\notin\sigma(H(k))\) for all \(k\in S^{1}\). Therefore we have a clear separation of positive and negative energies also for the fiber operators \(H(k)\).
Notice that the Bloch momentum \(k=0\) is fixed by the involution \(k\mapsto-k\). Therefore, in view of what we just concluded, by the spectral theorem \(\mathbb{C}^{N}\) has an orthonormal basis \(v_{1},\cdots,v_{N}\) consisting of eigenvectors of the self-adjoint matrix \(H(0)\), where the first vectors \(v_{1},\cdots,v_{m}\) have negative eigenvalues while the remaining vectors \(v_{m+1},\cdots,v_{N}\) have positive eigenvalues. The presence of \(C\) or \(S\) implies the existence of an (anti-)linear bijection between the eigenspaces of \(H(0)\) relative to negative and positive eigenvalues, and so \(m=N-m\). Therefore, whenever Assumption II.3 or II.4 holds, it will be assumed that the dimension of the fiber space \(\mathbb{C}^{N}\) is an even integer, \(N=2m\).
### Spectral projections
The topological information of the insulator described by \(H\), as we will detail later, is contained in the spectral projections of the fiber operators, which project over the vector subspace of \(\mathbb{C}^{N}\) generated by its eigenvectors with negative eigenvalues. The collection of these subspaces leads to the definition of the _Bloch bundle_[11; 13].
Since \(H(k)\) depends analytically on \(k\) and is \(2\pi\)-periodic, one would expect its eigenvalues and eigenvectors to do the same. This is partially true (compare Ref. [16], Theorems XIII.85-86), but problems can occur if different eigenvalues of \(H(k)\) become degenerate at some \(k_{0}\in S^{1}\). What remains true even in this situation is that the _negative and positive eigenspaces_ of \(H(k)\) stay analytic and \(2\pi\)-periodic with respect to \(k\), thanks to the spectral gap. More precisely, consider a PHS or CS Hamiltonian \(H\) for which the considerations of the previous Subsection hold. Let \(r>0\) be such that \(-2r\) is a lower bound for the spectrum of \(H\), and therefore uniformly for the spectra of \(H(k)\), \(k\in S^{1}\). The _Riesz formula_[8]
\[P_{-}(k):=\frac{1}{2\pi\mathrm{i}}\int_{\partial B_{r}(-r)}\mathrm{d}z\,(z\, \mathds{1}-H(k))^{-1}\,,\quad k\in S^{1}, \tag{8}\]
defines then an orthogonal projection, \(P_{-}(k)=P_{-}(k)^{*}=P_{-}(k)^{2}\), onto the subspace generated by the eigenvectors of \(H(k)\) with negative eigenvalues. In the above, the integration is performed in the complex energy plane, along a circumference centered at \(-r\) of radius \(r\); the disc enclosed by this circumference contains only the negative eigenvalues of \(H(k)\), where the resolvent function has its poles. The integral of a matrix should be understood as being performed entry-wise. By functional calculus, it is easily shown, e.g. in Ref. [14], Proposition 2.1, that \(P_{-}(k)\) is as regular as \(H(k)\), and shares with it the same periodicity properties with respect to \(k\). Of course, the same holds for the spectral projection
\[P_{+}(k):=\mathds{1}-P_{-}(k)\]
onto the eigenspace of \(H(k)\) associated to positive eigenvalues.
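Numerically, one need not evaluate the contour integral (8) directly: by functional calculus it coincides with the sum of the eigenprojections of \(H(k)\) onto negative-energy eigenvectors. A minimal sketch, reusing `fiber_hamiltonian` from the sketch above:

```python
import numpy as np

def negative_projection(Hk):
    """P_-(k): projection onto the negative-energy eigenspace of the fiber
    operator H(k); equals the Riesz integral (8) by functional calculus."""
    E, V = np.linalg.eigh(Hk)   # ascending eigenvalues, orthonormal columns
    neg = V[:, E < 0]
    return neg @ neg.conj().T

P = negative_projection(Hk)     # Hk from the previous sketch
assert np.allclose(P, P.conj().T) and np.allclose(P @ P, P)
```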
**Lemma II.10**.: _If either Assumption II.3 or II.4 hold, then_
\[CP_{+}(k)=P_{-}(-k)\,C\quad or\quad SP_{+}(k)=P_{-}(k)\,S\,. \tag{9}\]
Proof.: The statement follows at once from Corollary II.8 and Remark II.9.
## III Parallel transport and Bloch bases
We want to quantify the topological content of the spectral projections of the fiber operators \(H(k)\) into a numerical invariant. This will be done in the next Section, starting from a moving basis for the fiber Hilbert space \(\mathbb{C}^{N}\) tailored around the spectral subspace of the fiber operators \(H(k)\). We therefore set the following
**Definition III.1** (Bloch basis).: Given a family of rank-\(m\) projections \(P(k)\in M_{N}(\mathbb{C})\) which is \(2\pi\)-periodic and at least \(C^{1}\) in \(k\), we will call a family of vectors \(\left\{v_{i}(k)\right\}_{i\in\{1,\cdots,N\}}\subset\mathbb{C}^{N}\) a _Bloch basis associated to \(P(k)\)_ if it satisfies the following properties for every \(k\in\mathbb{R}\).
* The collection \(\left\{v_{i}(k)\right\}_{i\in\{1,\cdots,N\}}\) forms an orthonormal basis of \(\mathbb{C}^{N}\).
* The vectors \(\left\{v_{i}(k)\right\}_{i\in\{1,\ldots,m\}}\) span \(\operatorname{Ran}P(k)\); consequently, the vectors \(\left\{v_{i}(k)\right\}_{i\in\{m+1,\ldots,N\}}\) span \(\ker P(k)=\operatorname{Ran}[\mathds{1}-P(k)]\).
* Every \(v_{i}(k)\) is \(2\pi\)-periodic and as regular as \(P(k)\) as a function of \(k\in\mathbb{R}\).
**Definition III.2** (Symmetric Bloch basis).: Let \(P_{-}(k)\) be as in (8) and \(P_{+}(k)=\mathds{1}-P_{-}(k)\). Assume that either Assumption II.3 or II.4 holds, so that \(N=2m\), \(P_{-}(k)\) and \(P_{+}(k)\) both have rank \(m\), and (9) holds as well. Then a Bloch basis \(\left\{v_{i}(k)\right\}_{i\in\{1,\cdots,N\}}\) associated to \(P_{-}(k)\) will be called _particle-hole symmetric_ if
\[Cv_{i}(k)=v_{N-i+1}(-k)\quad\text{for all }i\in\{1,\ldots,m\}\;.\]
Similarly, the Bloch basis will be called _chiral symmetric_ if
\[Sv_{i}(k)=v_{N-i+1}(k)\quad\text{for all }i\in\{1,\ldots,m\}\;.\]
### Parallel transport
For 1-dimensional systems, such as the ones at hand, it is possible to exhibit Bloch bases associated to the eigenprojections \(P_{-}(k)\) via a tool from differential geometry called _parallel transport_. We will review this construction here, and show in particular how parallel transport behaves under PHS or CS.
**Theorem III.3**.: _If \(P(k)\in M_{N}(\mathbb{C})\) is a family of projections which is at least \(C^{1}\) in \(k\), then there exists a unique family of operators \(T(k)\in M_{N}(\mathbb{C})\) that solves the Cauchy problem_
\[\begin{cases}T^{\prime}(k)=[P^{\prime}(k),P(k)]\;T(k),\\ T(0)=\mathds{1}\;.\end{cases} \tag{10}\]
_Here and in the following, the "prime" symbol \({}^{\prime}\) denotes differentiation with respect to \(k\). The family of operators \(T(k)\) is at least as regular as \(P(k)\) as a function of \(k\). It further satisfies the following properties:_
1. Unitarity_: Each_ \(T(k)\) _is unitary,_ \(T(k)^{*}=T(k)^{-1}\)_._
2. Intertwining property_:_ \[P(k)\,T(k)=T(k)\,P(0)\,,\quad k\in\mathbb{R}\,.\] (11)
_If \(P(k)\) is also \(2\pi\)-periodic with respect to \(k\), then \(T(k)\) satisfies further the_
_3._ Telescopic property_:_
\[T(k+2\pi n)=T(k)\,T(2\pi)^{n}\,,\quad k\in\mathbb{R},\quad n\in\mathbb{Z}\,. \tag{12}\]
_We will call the family of unitaries \(T(k)\) the parallel transport operators associated to \(P(k)\)._
Proof.: Existence, uniqueness and regularity of the solution to the Cauchy problem (10) for every initial value \(T_{0}\) follows from the standard theory of first-order linear differential equations (see e.g. Ref. [23], Section IV.14.VI).
_Unitarity._ To prove that \(T(k)\) is unitary, it suffices to notice that the (non-autonomous) generator of the differential equation is skew-adjoint:
\[\big{[}P^{\prime}(k),P(k)\big{]}^{*}=-\,\big{[}P^{\prime}(k),P(k)\big{]}\;.\]
_Intertwining property._ The intertwining property (11) clearly holds at \(k=0\). To conclude that it holds for all \(k\in\mathbb{R}\), one computes \([T(k)^{*}P(k)\,T(k)]^{\prime}\) and shows that it vanishes. We have that
\[\frac{\mathrm{d}}{\mathrm{d}k}\,T(k)^{*}P(k)\,T(k)=T(k)^{*}\,\left\{-\left[P^{ \prime}(k),P(k)\right]\,P(k)+P^{\prime}(k)+P(k)\,\left[P^{\prime}(k),P(k) \right]\right\}\,T(k)\,.\]
The term in curly brackets vanishes, as one can verify with a straightforward algebraic computation using the identity \(P(k)\,P^{\prime}(k)\,P(k)=0\) (following from differentiation of the relation \(P(k)=P(k)^{2}\)).
_Telescopic property._ To prove (12), let us define \(A(k):=T(k+2\pi n)\) for fixed \(n\in\mathbb{Z}\). By direct inspection, \(A(k)\) solves the Cauchy problem (10) with \(A(0)=T(2\pi n)\), as the family of projections \(P(k)\) is \(2\pi\)-periodic. The same is true for \(B(k):=T(k)\,T(2\pi n)\), and by uniqueness it must hold that
\[T(k+2\pi n)=T(k)\,T(2\pi n)\quad\text{for all }k\in\mathbb{R},\;n\in\mathbb{Z}\,.\]
The above implies in particular
\[T(2\pi n)=T(2\pi)^{n},\quad n\in\mathbb{Z}\]
from which (12) follows.
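The Cauchy problem (10) also lends itself to direct numerical integration. Below is a crude explicit-Euler sketch (our own discretization, not part of the proof): unitarity of \(T(k)\) then holds only up to \(O(h)\), and a finer grid or periodic re-orthonormalization improves it.

```python
import numpy as np

def parallel_transport(P_of_k, ks):
    """Explicit-Euler integration of T'(k) = [P'(k), P(k)] T(k), T(ks[0]) = 1;
    P_of_k(k) returns the projection P(k), ks is an increasing grid on [0, 2*pi]."""
    T = np.eye(P_of_k(ks[0]).shape[0], dtype=complex)
    for a, b in zip(ks[:-1], ks[1:]):
        h = b - a
        Pa = P_of_k(a)
        dP = (P_of_k(b) - Pa) / h            # finite-difference P'(k)
        T = T + h * (dP @ Pa - Pa @ dP) @ T  # generator [P'(k), P(k)]
    return T                                 # approximates T(ks[-1])
```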
**Remark III.4**.: Uniqueness of the solution of the Cauchy problem in the statement of Theorem III.3 implies that the parallel transport operators associated to the families \(P(k)\) and \(Q(k):=\mathds{1}-P(k)\) coincide, since they share the same generator of the linear differential equation:
\[\left[Q^{\prime}(k),Q(k)\right]=\left[(\mathds{1}-P(k))^{\prime},\mathds{1}-P (k)\right]=\left[P^{\prime}(k),P(k)\right]\,.\]
**Lemma III.5**.: _Let \(P_{-}(k)\) be the family of projections defined in (8), and denote by \(T(k)\) the parallel transport unitaries associated to \(P_{-}(k)\). Assume that either Assumption II.3 or II.4 holds. Then_
\[CT(k)=T(-k)\,C\quad\text{or}\quad ST(k)=T(k)\,S,\qquad k\in\mathbb{R}.\]
Proof.: Under Assumption II.3, let us denote by \(U(k):=C^{-1}\,T(-k)\,C\). Compute
\[U^{\prime}(k) =-C^{-1}\,\left[P^{\prime}_{-}(-k),P_{-}(-k)\right]\,T(-k)\,C= \left[-C^{-1}\,P^{\prime}_{-}(-k)\,C,C^{-1}\,P_{-}(-k)\,C\right]\,C^{-1}\,T(-k )\,C\] \[=\left[P^{\prime}_{+}(k),P_{+}(k)\right]\,U(k)\]
where we used Lemma II.10 for the last equality. Since \(U(0)=\mathds{1}\) this means that \(U(k)\) solves the Cauchy problem (10) for the parallel transport operators associated to \(P_{+}(k)=\mathds{1}-P_{-}(k)\). By Remark III.4 we conclude that \(U(k)=T(k)\).
Arguing similarly under Assumption II.4, one can show that \(V(k):=S^{-1}\,T(k)\,S\) and \(T(k)\) solve the same Cauchy problem, and therefore coincide.
### Construction of symmetric Bloch bases
Using the parallel transport unitaries \(T(k)\) we can try to construct a symmetric Bloch basis. If we fix an orthonormal basis \(\{v_{1}(0),\cdots,v_{2m}(0)\}\subset\mathbb{C}^{2m}\) of eigenvectors of \(H(0)\) such that the first half have negative eigenvalues while the second half have positive eigenvalues, we can define the family of vectors
\[\tilde{v}_{i}(k):=T(k)\,v_{i}(0),\quad k\in\mathbb{R}.\]
These vectors form an orthonormal basis of \(\mathbb{C}^{N}\) that depends smoothly on \(k\); moreover, in view of the intertwining property (11), we have that
\[P_{-}(k)\tilde{v}_{i}(k)=\begin{cases}\tilde{v}_{i}(k)&\text{if }i\leq m\,,\\ 0&\text{if }i>m\,,\end{cases}\quad\text{while}\quad P_{+}(k)\tilde{v}_{i}(k)=\begin{cases}0&\text{if }i\leq m\,,\\ \tilde{v}_{i}(k)&\text{if }i>m\,.\end{cases}\]
Finally, if the model enjoys PHS or CS, then by Remark II.9 we can further choose the basis of eigenvectors of \(H(0)\) in such a way that
\[Cv_{i}(0)=v_{N-i+1}(0)\,,\quad\text{or}\quad Sv_{i}(0)=v_{N-i+1}(0)\,. \tag{13}\]
Then, in view of Lemma III.5, we conclude that for \(i\in\{1,\ldots,m\}\)
\[C\tilde{v}_{i}(k)=CT\left(k\right)v_{i}(0)=T(-k)\,Cv_{i}(0)=T(-k)\,v_{N-i+1}(0)= \tilde{v}_{N-i+1}(-k)\,,\]
or similarly
\[S\tilde{v}_{i}(k)=\tilde{v}_{N-i+1}(k)\,.\]
This is almost enough to conclude that \(\{\tilde{v}_{i}(k)\}_{i\in\{1,\ldots,N\}}\) is a symmetric Bloch basis, but unfortunately in general it will not be \(2\pi\)-periodic since \(T(2\pi)\neq T(0)=\mathds{1}\). Therefore we need to adjust this construction.
Let us then focus on the unitary matrix \(T(2\pi)\in M_{N}(\mathbb{C})\), called the _holonomy unitary_. By the spectral theorem, \(T(2\pi)\) can be diagonalized by a unitary transformation \(V\):
\[T(2\pi)=V^{-1}\begin{bmatrix}\mathrm{e}^{\mathrm{i}\phi_{1}}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&\mathrm{e}^{\mathrm{i}\phi_{N}}\end{bmatrix}V\,,\quad\text{with} \quad V^{-1}=V^{*},\quad\phi_{i}\in[0,2\pi)\ \forall i\in\{1,\ldots,N\}\.\]
With this choice of the phases for the eigenvalues of \(T(2\pi)\), we let \(X\in M_{N}(\mathbb{C})\) be the self-adjoint matrix
\[X=-\mathrm{i}\ln T(2\pi):=V^{-1}\begin{bmatrix}\phi_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&\phi_{N}\end{bmatrix}V \tag{14}\]
so that \(T(2\pi)=\mathrm{e}^{\mathrm{i}X}\). It is worth noticing that \(X\) and \(T(2\pi)\) share the same eigenspaces.
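In code, the branch choice above reads as follows (a sketch: since \(T(2\pi)\) is unitary, hence normal, `np.linalg.eig` diagonalizes it, though degenerate eigenvalues may call for extra care):

```python
import numpy as np

def holonomy_log(T2pi):
    """X = -i log T(2*pi) as in (14): eigenvalue phases are placed in
    [0, 2*pi), so that X is self-adjoint and T(2*pi) = exp(iX)."""
    lam, V = np.linalg.eig(T2pi)                # T(2*pi) is normal
    phi = np.mod(np.angle(lam), 2.0 * np.pi)    # branch phi_i in [0, 2*pi)
    return V @ np.diag(phi) @ np.linalg.inv(V)
```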
**Theorem III.6**.: _Let \(P_{-}(k)\in M_{N}(\mathbb{C})\) be the rank-\(m\) projections defined in (8). Let also \(\{v_{1}(0),\cdots,v_{N}(0)\}\subset\mathbb{C}^{N}\) be an orthonormal basis of eigenvectors of \(H(0)\) such that \(\{v_{1}(0),\cdots,v_{m}(0)\}\) span \(\mathrm{Ran}\,P_{-}(0)\) while \(\{v_{m+1}(0),\cdots,v_{N}(0)\}\) span \(\mathrm{ker}\,P_{-}(0)\). Finally, denote by \(T(k)\) the family of parallel transport unitaries associated to \(P_{-}(k)\). Then the family of vectors_
\[v_{i}(k):=T(k)\mathrm{e}^{-\mathrm{i}kX/2\pi}\,v_{i}(0)\,,\quad i\in\{1, \ldots,N\} \tag{15}\]
_define a Bloch basis associated to \(P_{-}(k)\) in the sense of Definition III.1._
_If furthermore Assumption II.3 or II.4 holds, then the vectors defined in (15) form a symmetric Bloch basis associated to \(P_{-}(k)\), in the sense of Definition III.2._
Proof.: We split the proof into a few steps.

_Spanning property._ The vectors \(\{v_{i}(k)\}_{i\in\{1,\ldots,N\}}\) are clearly orthonormal as they are obtained from the orthonormal basis of eigenvectors of \(H(0)\) by means of unitary transformations. Let us check that the first \(m\) vectors span \(\mathrm{Ran}\,P_{-}(k)\), so that the last \(N-m\) will span \(\mathrm{Ran}\,P_{+}(k)\). To this end, it suffices to notice that the intertwining property (11) of the parallel transport unitaries yields in particular
\[P_{-}(2\pi)\,T(2\pi)=P_{-}(0)\,T(2\pi)=T(2\pi)\,P_{-}(0)\]
in view of the \(2\pi\)-periodicity of the family of projections. So \(T(2\pi)\) commutes with \(P_{-}(0)\), and as the matrix \(X\) in (14) is obtained from \(T(2\pi)\) through functional calculus, so does \(X\) and therefore \(\mathrm{e}^{-\mathrm{i}kX/2\pi}\) for all \(k\in\mathbb{R}\). It then follows that the vectors \(\mathrm{e}^{-\mathrm{i}kX/2\pi}\,v_{i}(0)\) for \(i\in\{1,\ldots,m\}\) are orthonormal and in the range of \(P_{-}(0)\), and are then mapped orthonormally by \(T(k)\) to the range of \(P_{-}(k)\) in view of the intertwining property again.
_Regularity._ Since \(P_{-}(k)\) is analytic in \(k\), \(T(k)\) is as well by Theorem III.3. Also the matrix
\[\mathrm{e}^{-\mathrm{i}kX/2\pi}=V^{-1}\begin{bmatrix}\mathrm{e}^{-\mathrm{i}k \phi_{1}/2\pi}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&\mathrm{e}^{-\mathrm{i}k\phi_{N}/2\pi}\end{bmatrix}V\]
depends analytically on \(k\). We conclude that (15) defines an analytic \(\mathbb{C}^{N}\)-valued function of \(k\).
_Periodicity._ To prove that \(v_{i}(k)\) is \(2\pi\)-periodic in \(k\), we will exploit the telescopic property (12) of the parallel transport unitaries. We have indeed, for \(k\in\mathbb{R}\) and \(n\in\mathbb{Z}\),
\[v_{i}(k+2\pi n) =T\left(k+2\pi n\right)\mathrm{e}^{-\mathrm{i}(k+2\pi n)X/2\pi}\, v_{i}(0)=T(k)\,T(2\pi)^{n}\,\mathrm{e}^{-\mathrm{i}nX}\,\mathrm{e}^{-\mathrm{i}kX/2\pi} \,v_{i}(0)\] \[=T\left(k\right)\mathrm{e}^{-\mathrm{i}kX/2\pi}\,v_{i}(0)=v_{i}( k)\,,\]
since by definition \(T(2\pi)^{n}=\mathrm{e}^{\mathrm{i}nX}\).
_Symmetry._ Under the CS hypothesis, we know from Lemma III.5 that \(S\) commutes with \(T(k)\), so in particular it commutes with \(T(2\pi)\) and, by functional calculus, with \(X\) and with \(\operatorname{e}^{-\mathrm{i}kX/2\pi}\). Therefore it holds that
\[Sv_{i}(k)=ST(k)\operatorname{e}^{-\mathrm{i}kX/2\pi}v_{i}(0)=T(k)\operatorname {e}^{-\mathrm{i}kX/2\pi}Sv_{i}(0)=T(k)\operatorname{e}^{-\mathrm{i}kX/2\pi}v_{ N-i+1}(0)=v_{N-i+1}(k)\]
as wanted.
Let us now assume the PHS hypothesis. Lemma III.5 and the telescopic property (12) together give that
\[CT(2\pi)=T(-2\pi)C=T(2\pi)^{-1}C\quad\Longleftrightarrow\quad C\operatorname {e}^{\mathrm{i}X}=\operatorname{e}^{-\mathrm{i}X}C\]
which by functional calculus implies for all \(k\in\mathbb{R}\)
\[C\operatorname{e}^{-\mathrm{i}kX/2\pi}=\operatorname{e}^{\mathrm{i}kX/2\pi}C.\]
Therefore it holds that
\[Cv_{i}(k)=CT(k)\operatorname{e}^{-\mathrm{i}kX/2\pi}v_{i}(0)=T(-k) \operatorname{e}^{\mathrm{i}kX/2\pi}Cv_{i}(0)=T(-k)\operatorname{e}^{\mathrm{ i}kX/2\pi}v_{N-i+1}(0)=v_{N-i+1}(-k)\]
which concludes the proof.
## IV Berry phase and the \(\mathbb{Z}_{2}\) invariant
We are finally in position to extract the topological invariant out of any Bloch basis associated to the spectral projections \(P_{-}(k)\) of the fiber Hamiltonians \(H(k)\), which we know exist thanks to the results of the previous Section. The invariant will be formulated starting from the (_abelian_) _Berry phase_ of the Bloch basis, whose definition we recall below.
**Definition IV.1** (Berry phase).: Let \(P(k)\in M_{N}(\mathbb{C})\) be a \(2\pi\)-periodic family of rank-\(m\) projections which is at least \(C^{1}\) in \(k\), and \(\{v_{i}(k)\}_{i\in\{1,\ldots N\}}\) be a Bloch basis associated to \(P(k)\) in the sense of Definition III.1. The _Berry phase_ of the Bloch basis is the following integral:
\[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\left\langle v_{ i}(k),v_{i}^{\prime}(k)\right\rangle\,. \tag{16}\]
The same formula, in general, gives the holonomy phase of any moving basis (i.e. global frame) for the trivial bundle \(S^{1}\times\mathbb{C}^{N}\).
This Section will be devoted to proving some properties of the Berry phase defined above: it will be argued that it takes integer values, and that under PHS or CS the parity of this integer is a quantity which depends on the projections \(P_{-}(k)\), and not on the choice of a Bloch basis compatible with the symmetries. This will lead to the formulation of the \(\mathbb{Z}_{2}\) invariant.
### Properties of the Berry phase
Much of the following discussion will be based on the next result regarding homotopy classes of unitary-matrix-valued maps on the circle. This statement is well-known from algebraic topology, see e.g. Ref. [6], Proposition 13.11, but we provide a sketch of some aspects of the proof in Appendix A for the reader's convenience.
**Theorem IV.2** (Winding number of unitary-matrix-valued maps).: _Let \(\pi_{1}(U(N))\) denote the group of (smooth) homotopy classes of (smooth) maps \(S^{1}\to U(N)\), with \(U(N):=\big{\{}U\in M_{N}(\mathbb{C}):U^{*}=U^{-1}\big{\}}\), endowed with point-wise multiplication of maps as group operation. Then there is a group isomorphism_
\[\operatorname{wn}\colon\pi_{1}(U(N))\stackrel{{\sim}}{{ \longrightarrow}}\mathbb{Z},\qquad[U\colon S^{1}\to U(N)]\mapsto\frac{1}{2 \pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\operatorname{tr}\left(U(k)^{*}U^{\prime }(k)\right). \tag{17}\]
_The integer \(\operatorname{wn}([U])\) is called the winding number of (the homotopy class \([U]\) of) the map \(U\colon S^{1}\to U(N)\)._
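In practice the winding number (17) is conveniently computed through the determinant, using the identification \(\operatorname{wn}([U])=w(\det U)\) recalled in Appendix A. A sketch via phase unwrapping (the grid resolution is an assumption; it must resolve the variation of \(\det U(k)\)):

```python
import numpy as np

def winding_number(U_of_k, n_grid=2001):
    """Winding number (17) of a smooth map U: S^1 -> U(N), computed as the
    winding of k -> det U(k) around the origin via phase unwrapping."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid)
    dets = np.array([np.linalg.det(U_of_k(k)) for k in ks])
    phase = np.unwrap(np.angle(dets))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))
```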
We start by computing the Berry phase of the Bloch basis constructed in the previous Section.
**Theorem IV.3**.: _Under the assumptions of Theorem III.6, we have that_
\[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\left\langle v_{ i}(k),v_{i}^{\prime}(k)\right\rangle=-\frac{1}{2\pi}\operatorname{tr}X\quad \in\mathbb{Z}\]
_where \(\left\{v_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) is as in (15) and \(X\) is as in (14). In particular, the Berry phase above is independent of the choice of \(\left\{v_{i}(0)\right\}_{i\in\{1,\ldots,N\}}\) as in the statement of Theorem III.6._
Proof.: With the notation of Theorem III.6, let us define \(W(k):=T(k)\,\mathrm{e}^{-\mathrm{i}kX/2\pi}\), so that \(v_{i}(k)=W(k)\,v_{i}(0)\) for \(i\in\{1,\ldots,N\}\). We can compute then
\[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \big{\langle}v_{i}(k),v_{i}^{\prime}(k)\big{\rangle} =\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \big{\langle}W(k)\,v_{i}(0),W^{\prime}(k)\,v_{i}(0)\big{\rangle}=\frac{1}{2\pi \mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\big{\langle}v_{i}(0),W(k)^{* }\,W^{\prime}(k)\,v_{i}(0)\big{\rangle}\] \[=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\operatorname{tr} \big{(}W(k)^{*}\,W^{\prime}(k)\big{)}\]
owing to the fact that \(\{v_{i}(0)\}_{i\in\{1,\ldots,N\}}\) is an orthonormal basis of \(\mathbb{C}^{N}\). In view of Theorem IV.2, this allows us to conclude that
\[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \big{\langle}v_{i}(k),v_{i}^{\prime}(k)\big{\rangle}=\operatorname{wn}([W]) \in\mathbb{Z}\,.\]
To prove the equality claimed in the statement, we compute then
\[W^{\prime}(k)=\frac{\mathrm{d}}{\mathrm{d}k}\big{(}T(k)\,\mathrm{e}^{-\mathrm{i}kX/2\pi}\big{)}=T^{\prime}(k)\,\mathrm{e}^{-\mathrm{i}kX/2\pi}+T(k)\,\mathrm{e}^{-\mathrm{i}kX/2\pi}\,\frac{X}{2\pi\mathrm{i}}=[P_{-}^{\prime}(k),P_{-}(k)]\,W(k)+W(k)\,\frac{X}{2\pi\mathrm{i}}\]
and therefore
\[W(k)^{*}\,W^{\prime}(k)=W(k)^{*}\,[P_{-}^{\prime}(k),P_{-}(k)]\,W(k)+\frac{X} {2\pi\mathrm{i}}\,.\]
Consequently
\[\operatorname{tr}\big{(}W(k)^{*}\,W^{\prime}(k)\big{)}=\operatorname{tr} \big{(}W(k)^{*}\,[P_{-}^{\prime}(k),P_{-}(k)]\,W(k)\big{)}+\frac{1}{2\pi \mathrm{i}}\operatorname{tr}X=\operatorname{tr}\big{(}[P_{-}^{\prime}(k),P_{- }(k)]\big{)}+\frac{1}{2\pi\mathrm{i}}\operatorname{tr}X\]
due to the invariance of the trace under the unitary conjugation by \(W(k)\). Now, the first summand on the right-hand side of the above vanishes because commutators are traceless, and we conclude that
\[\operatorname{wn}([W])=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k \operatorname{tr}\big{(}W(k)^{*}\,W^{\prime}(k)\big{)}=\frac{1}{(2\pi\mathrm{ i})^{2}}\operatorname{tr}X\int_{S^{1}}\mathrm{d}k=-\frac{1}{2\pi} \operatorname{tr}X\]
as claimed.
Finally \(\operatorname{tr}X\) does not depend on the choice of the basis \(\{v_{i}(0)\}_{i\in\{1,\ldots,N\}}\) for \(\mathbb{C}^{N}\), so neither does the Berry phase of the Bloch basis \(\{v_{i}(k)\}_{i\in\{1,\ldots,N\}}\).
**Remark IV.4**.: Since \(T(2\pi)=\mathrm{e}^{\mathrm{i}X}\) and \(\operatorname{tr}X=2\pi n\) for \(n=-\operatorname{wn}([W])\in\mathbb{Z}\), it must be that \(\det T(2\pi)=\mathrm{e}^{\mathrm{i}\operatorname{tr}X}=\mathrm{e}^{2\pi\mathrm{i}n}=1\). So, while the parallel transport unitaries \(T(k)\) are not \(2\pi\)-periodic in \(k\) in general (compare (12)), the determinant map \(k\mapsto\det T(k)\) is, and therefore has a well-defined winding number in view of Proposition A.2, which can alternatively be used to compute (up to a sign) the Berry phase of the Bloch basis in (15).
The next result will tell us that, when a change of Bloch basis is performed (also called a _change of Bloch gauge_), the Berry phase changes by an integer. In combination with the previous result, we can conclude that Berry phases are always integer-valued.
**Theorem IV.5**.: _Let \(\{v_{i}(k)\}_{i\in\{1,\ldots,N\}}\) be as in (15) and let \(\{u_{i}(k)\}_{i\in\{1,\ldots,N\}}\) be any Bloch basis associated to \(P_{-}(k)\), in the sense of Definition III.1. Define the change-of-basis matrix \(G(k)\), called Bloch gauge, such that \(u_{i}(k)=G(k)\,v_{i}(k)\) for all \(i\in\{1,\ldots,N\}\). Then \(G(k)\) is unitary, \(2\pi\)-periodic and as regular in \(k\) as the Bloch bases. Moreover_
\[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \big{\langle}u_{i}(k),u_{i}^{\prime}(k)\big{\rangle}=\frac{1}{2\pi\mathrm{i}} \int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\big{\langle}v_{i}(k),v_{i}^{\prime}(k) \big{\rangle}+\operatorname{wn}([G])\quad\in\mathbb{Z}\]
_where \(\operatorname{wn}([G])\) denotes the winding number of the map \(G\colon S^{1}\to U(N)\)._
Proof.: For each \(k\in\mathbb{R}\), the matrix \(G(k)\) is unitary, as it maps one orthonormal basis into another. Moreover, in the \(2\pi\)-periodic and regular orthonormal basis \(\{v_{i}(k)\}_{i\in\{1,\ldots,N\}}\) for \(\mathbb{C}^{N}\), the Bloch gauge \(G(k)\) has entries
\[\big{\langle}v_{j}(k),G(k)\,v_{i}(k)\big{\rangle}=\big{\langle}v_{j}(k),u_{i}( k)\big{\rangle}\,,\quad i,j\in\{1,\ldots,N\}\,\]
and therefore it is itself \(2\pi\)-periodic and regular, as wanted.
We can then compute
\[\frac{1}{2\pi\mathrm{i}} \int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\left\langle u_{i}(k),u_{i}^{ \prime}(k)\right\rangle=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i =1}^{N}\left\langle G(k)\,v_{i}(k),\partial_{k}\big{(}G(k)\,v_{i}(k)\big{)}\right\rangle\] \[=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \left\langle v_{i}(k),v_{i}^{\prime}(k)\right\rangle+\frac{1}{2\pi\mathrm{i}} \int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N}\left\langle v_{i}(k),G(k)^{*}\,G^{\prime }(k)\,v_{i}(k)\right\rangle\] \[=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \left\langle v_{i}(k),v_{i}^{\prime}(k)\right\rangle+\frac{1}{2\pi\mathrm{i}} \int_{S^{1}}\mathrm{d}k\,\mathrm{tr}\left(G(k)^{*}\,G^{\prime}(k)\right)\] \[=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\sum_{i=1}^{N} \left\langle v_{i}(k),v_{i}^{\prime}(k)\right\rangle+\mathrm{wn}([G])\,,\]
as claimed.
**Remark IV.6**.: By definition, a Bloch basis \(\left\{u_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) associated to \(P_{-}(k)\) has the first \(m\) vectors in \(\mathrm{Ran}P_{-}(k)\) and the last \(N-m\) in \(\mathrm{Ran}P_{+}(k)\), where as usual \(P_{+}(k)=\mathds{1}-P_{-}(k)\). Therefore, the Bloch gauge \(G(k)\) which maps the Bloch basis \(\left\{v_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) in (15) to \(\left\{u_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) has a block-diagonal form in the decomposition \(\mathbb{C}^{N}=\mathrm{Ran}\,P_{-}(k)\oplus\mathrm{Ran}\,P_{+}(k)\), i.e.
\[G(k)=\begin{bmatrix}G_{-}(k)&0\\ 0&G_{+}(k)\end{bmatrix} \tag{18}\]
where \(G_{\pm}(k)=P_{\pm}(k)\,G(k)\,P_{\pm}(k)\) is viewed as a linear operator on \(\mathrm{Ran}\,P_{\pm}(k)\). In particular
\[u_{i}(k)=\begin{cases}G_{-}(k)\,v_{i}(k)&\text{if }i\in\{1,\ldots,m\}\;,\\ G_{+}(k)\,v_{i}(k)&\text{if }i\in\{m+1,\ldots,N\}\;.\end{cases}\]
### Symmetric Berry phases
We now turn to the investigation of how PHS and CS affect the value of Berry phases.
**Proposition IV.7**.: _Let Assumption II.3 or II.4 hold. Let \(\left\{v_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) be as in (15) and let \(\left\{u_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) be any symmetric Bloch basis associated to \(P_{-}(k)\), in the sense of Definition III.2. Finally, let \(G(k)\) be the Bloch gauge defined in Theorem IV.5. Then_
\[\mathrm{wn}([G])\in 2\,\mathbb{Z}\,.\]
Proof.: We will use the block decomposition of \(G(k)\) from (18). As usual we will denote by \(m\) the rank of \(P_{-}(k)\), so that in particular \(N=2m\).
_CS._ Under Assumption II.4, we have that
\[v_{N-i+1}(k)=Sv_{i}(k)\quad\text{and}\quad u_{N-i+1}(k)=Su_{i}(k)\quad\text{ for }i\in\{1,\ldots,m\}\;.\]
We have also, for \(i\in\{1,\ldots,m\}\),
\[G_{+}(k)\,Sv_{i}(k)=G_{+}(k)\,v_{N-i+1}(k)=u_{N-i+1}(k)=Su_{i}(k)=SG_{-}(k)\,v_ {i}(k)\,.\]
Since \(\left\{v_{i}(k)\right\}_{i\in\{1,\ldots,m\}}\) span \(\mathrm{Ran}\,P_{-}(k)\) orthonormally, we conclude that
\[G_{+}(k)\,P_{+}(k)\,S=G_{+}(k)\,SP_{-}(k)=SG_{-}(k)\,P_{-}(k)\,.\]
Since \(G_{\pm}(k)=P_{\pm}(k)\,G(k)\,P_{\pm}(k)\) by definition, we conclude from the above that
\[P_{+}(k)\,G(k)\,P_{+}(k)\,S=SP_{-}(k)\,G(k)\,P_{-}(k)\]
and therefore
\[\det G(k)=\det G_{-}(k)\cdot\det G_{+}(k)=\big{(}\det G_{-}(k)\big{)}^{2}\,.\]
(In the above, it should be noted that \(\det G\) is the determinant of a linear operator on \(\mathbb{C}^{N}=\mathbb{C}^{2m}\), while \(\det G_{\pm}\) are determinants of linear operators on \(\operatorname{Ran}P_{\pm}(k)\simeq\mathbb{C}^{m}\).) In view of the results of Appendix A, the winding number \(\operatorname{wn}([G])\) can be computed as the winding \(w(\det G)\) of the map \(\det G\colon S^{1}\to S^{1}\), and additivity of this winding number (Proposition A.2) yields
\[\operatorname{wn}([G])=2\,\operatorname{wn}([G_{-}])\in 2\,\mathbb{Z}\,.\]
_PHS._ Under Assumption II.3, we have this time
\[G_{+}(-k)\,C\,v_{i}(k)=G_{+}(-k)\,v_{N-i+1}(-k)=u_{N-i+1}(-k)=C\,u_{i}(k)=C\,G_ {-}(k)\,v_{i}(k)\]
for \(i\in\{1,\ldots,m\}\), and therefore
\[P_{+}(-k)\,G(-k)\,P_{+}(-k)\,C=CP_{-}(k)\,G(k)\,P_{-}(k)\,.\]
By antilinearity of \(C\), the above implies that
\[\det G_{+}(-k)=\overline{\det G_{-}(k)}=\det G_{-}(k)^{*}=\det G_{-}(k)^{-1}\]
and in view of Remark A.3
\[w(\det G_{+})=-w(\det G_{+}\circ\iota)=-w(\det G_{-}^{-1})=w(\det G_{-})\]
where \(\iota(k):=-k\). As before, this identity allows us to deduce that \(\operatorname{wn}([G])=\operatorname{wn}([G_{-}])+\operatorname{wn}([G_{+}])=2\,\operatorname{wn}([G_{-}])\in 2\,\mathbb{Z}\), which concludes the proof.
**Corollary IV.8**.: _Under the assumptions of Proposition IV.7, the \(\mod 2\) equivalence class_
\[\left[\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\sum_{i=1}^{2m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\right]=\left[\frac{1}{\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\sum_{i=1}^{m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\right]\quad\in\mathbb{Z}_{2}\]
_does not depend on the choice of a symmetric Bloch basis \(\{u_{i}(k)\}_{i\in\{1,\ldots,N\}}\) associated to \(P_{-}(k)\)._
Proof.: The independence of the \(\mod 2\) reduction of the Berry phase from the Bloch basis is a direct consequence of Theorem IV.5 and Proposition IV.7, so we only need to prove the rewriting of the Berry phase claimed in the statement.
_CS._ Under Assumption II.4 it suffices to notice that, for a chiral symmetric Bloch basis \(\{u_{i}(k)\}_{i\in\{1,\ldots,N\}}\), the relation \(u_{N-i+1}(k)=S\,u_{i}(k)\) and the unitarity of \(S\) give

\[\left\langle u_{N-i+1}(k),u_{N-i+1}^{\prime}(k)\right\rangle=\left\langle S\,u_{i}(k),S\,u_{i}^{\prime}(k)\right\rangle=\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\,,\quad i\in\{1,\ldots,m\}\,,\]

and therefore

\[\sum_{i=1}^{2m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle=2\sum_{i=1}^{m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\,.\]
_PHS._ This time, under Assumption II.3, we have that for a particle-hole symmetric Bloch basis

\[u_{N-i+1}(-k)=C\,u_{i}(k)\quad\Longrightarrow\quad-u_{N-i+1}^{\prime}(-k)=C\,u_{i}^{\prime}(k)\,,\quad i\in\{1,\ldots,m\}\,,\]

and therefore

\[\left\langle u_{N-i+1}(-k),u_{N-i+1}^{\prime}(-k)\right\rangle=-\left\langle C\,u_{i}(k),C\,u_{i}^{\prime}(k)\right\rangle=-\overline{\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle}\,,\quad i\in\{1,\ldots,m\}\,,\]

due to the antilinearity of \(C\). Now, the normalization \(\left\langle u_{i}(k),u_{i}(k)\right\rangle\equiv 1\) implies that \(\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\) is a purely imaginary number, which yields

\[\left\langle u_{N-i+1}(-k),u_{N-i+1}^{\prime}(-k)\right\rangle=\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\,,\quad i\in\{1,\ldots,m\}\,.\]
Notice now that, by changing variables to \(\kappa=-k\), we can rewrite
\[\int_{0}^{2\pi}\mathrm{d}k\,\sum_{j=m+1}^{2m}\left\langle u_{j}(k),u_{j}^{\prime}(k)\right\rangle =-\int_{0}^{-2\pi}\mathrm{d}\kappa\,\sum_{j=m+1}^{2m}\left\langle u_{j}(-\kappa),u_{j}^{\prime}(-\kappa)\right\rangle\] \[=\int_{-2\pi}^{0}\mathrm{d}\kappa\,\sum_{i=1}^{m}\left\langle u_{i}(\kappa),u_{i}^{\prime}(\kappa)\right\rangle=\int_{0}^{2\pi}\mathrm{d}\kappa\,\sum_{i=1}^{m}\left\langle u_{i}(\kappa),u_{i}^{\prime}(\kappa)\right\rangle\,,\]
where in the last equality we shifted the integration interval thanks to the \(2\pi\)-periodicity of the Bloch basis. As before, this allows us to conclude that
\[\int_{S^{1}}\mathrm{d}k\,\sum_{i=1}^{2m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle=2\int_{S^{1}}\mathrm{d}k\,\sum_{i=1}^{m}\left\langle u_{i}(k),u_{i}^{\prime}(k)\right\rangle\]

as wanted.
### The \(\mathbb{Z}_{2}\) invariant
The above result finally puts us in the position to state the following
**Definition IV.9** (The \(\mathbb{Z}_{2}\) invariant).: Let \(H\) be a finite-range Hamiltonian on \(\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\) modeling a translation-invariant insulator, see Assumptions II.1 and II.2. Assume also that \(H\) is either particle-hole symmetric or chiral symmetric, in the sense of Assumption II.3 or II.4 respectively. Then the quantity
\[I_{\mathrm{PHC}}(H):=\frac{1}{\pi\mathrm{i}}\,\int_{S^{1}}\mathrm{d}k\,\sum_{i =1}^{m}\left\langle u_{i}(k),u^{\prime}_{i}(k)\right\rangle\,\,\mathrm{mod}\, 2\quad\in\mathbb{Z}_{2} \tag{19}\]
is a gauge invariant of the spectral projection of \(H\) onto the negative energy eigenspace. Here \(\left\{u_{i}(k)\right\}_{i\in\{1,\ldots,N\}}\) is any symmetric Bloch basis, in the sense of Definition III.2.
The above definition of our \(\mathbb{Z}_{2}\) invariant \(I_{\mathrm{PHC}}\) is formulated in terms of a symmetric basis for the spectral projections \(P_{-}(k)\) corresponding to negative eigenvalues of the fiber Hamiltonians; the symmetry dictates how the corresponding basis for \(P_{+}(k)=\mathds{1}-P_{-}(k)\) should be defined. This could make the numerical computation of the invariant in specific models more accessible.
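Combining the numerical sketches of the previous Sections gives a crude pipeline for (19): by Theorem IV.3 and Corollary IV.8, for a gapped PHS or CS model the parity of \(-\operatorname{tr}X/2\pi\) computed from the basis of Theorem III.6 equals \(I_{\mathrm{PHC}}\). The following sketch assumes the helpers defined earlier and is meant for experimentation, not as a production implementation:

```python
import numpy as np

def z2_invariant(A, n_grid=4001):
    """Parity of -tr X / (2*pi), which equals I_PHC in (19) by Theorem IV.3
    and Corollary IV.8; reuses fiber_hamiltonian, negative_projection,
    parallel_transport and holonomy_log from the sketches above."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid)
    P = lambda k: negative_projection(fiber_hamiltonian(k, A))
    X = holonomy_log(parallel_transport(P, ks))
    return int(round(-np.trace(X).real / (2.0 * np.pi))) % 2
```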
The next result states that \(I_{\mathrm{PHC}}(H)\) is also invariant under continuous deformations, i.e. homotopies, of the model, and therefore qualifies as a "topological invariant".
**Theorem IV.10** (Topological invariance of \(I_{\mathrm{PHC}}\)).: _Assume that \(H_{t}\), \(t\in[0,1]\), is a continuous family of bounded operators in \(\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{N}\) with the properties described in Definition IV.9. Assume that the spectral gap of \(H_{t}\) stays open for all \(t\in[0,1]\). Then_
\[I_{\mathrm{PHC}}(H_{0})=I_{\mathrm{PHC}}(H_{1})\,.\]
Proof.: The homotopy \(t\mapsto H_{t}\) induces a corresponding homotopy \(t\mapsto P_{-}(k;t)\) between the spectral projections of the fiber operators \(H_{t}(k)\), which remains continuous due to the fact that the gap remains open. Since both \(k\in S^{1}\) and \(t\in[0,1]\) range over compact sets, by uniform continuity there exists \(\delta>0\) such that
\[\sup_{k\in S^{1}}\|P_{-}(k;t)-P_{-}(k;s)\|<1\quad\text{if}\quad|t-s|<\delta\,.\]
Let us use the short-hand notation \(P_{t}(k):=P_{-}(k;t)\) and \(P_{s}(k):=P_{-}(k;s)\) in the following. The above implies the existence of a _Kato-Nagy unitary_ (see Ref. [8], Chapter I, Section 4.6)
\[U_{t,s}(k):=\left[\mathds{1}-\left(P_{t}(k)-P_{s}(k)\right)^{2}\right]^{-1/2} \,\left[P_{t}(k)\,P_{s}(k)+\left(\mathds{1}-P_{t}(k)\right)\,\left(\mathds{1}- P_{s}(k)\right)\right]\]
such that
\[P_{t}(k)\,U_{t,s}(k)=U_{t,s}(k)\,P_{s}(k)\,.\]
From the above explicit expression for the Kato-Nagy unitary, it is clear that it is as regular in \(k\) as the projections, jointly continuous in \(t\) and \(s\), and commutes with \(C\) or \(S\) if the models are particle-hole symmetric or chiral symmetric, respectively.
Let us fix for example \(s=0\), and let \(\left\{u_{i}(k;0)\right\}_{i\in\{1,\ldots,N\}}\) be a symmetric Bloch basis associated to \(P_{-}(k;0)\). By the above properties of the Kato-Nagy unitary, the collection of vectors
\[u_{i}(k;t):=U_{t,0}(k)\,u_{i}(k;0)\,,\quad i\in\{1,\ldots,N\}\,\]
defines a symmetric Bloch basis associated to \(P_{-}(k;t)\), as long as \(|t|<\delta\). Arguing as in the proof of Proposition IV.7, one can check that the Berry phase of the Bloch basis \(\left\{u_{i}(k;t)\right\}_{i\in\{1,\ldots,N\}}\) has the same parity as that of the Bloch basis \(\left\{u_{i}(k;0)\right\}_{i\in\{1,\ldots,N\}}\), which defines the invariant \(I_{\text{PHC}}(H_{0})\). This implies that \(I_{\text{PHC}}(H_{t})\) is constant on the interval \([0,\delta)\). Partitioning the interval \([0,1]\) into a sequence of intervals of length \(\delta/2\) and iterating this argument yields the conclusion of the proof.
**Remark IV.11**.: It is important to stress that, in the above statement, the symmetry (be it chiral or particle-hole) is required to be _unbroken_ along the whole homotopy \(t\mapsto H_{t}\), on top of the assumption that the continuous deformation of \(H_{0}\) to \(H_{1}\) does not close the spectral gap. Indeed, one could connect also Hamiltonians which are in different topological phases (meaning that they have different values of \(I_{\text{PHC}}\)) without closing the gap, but breaking the symmetry along the deformation. The authors of Ref. [20], for example, provide a deformation of (continuum models for) a "topological" chiral chain to a "trivial" one which keeps the spectral gap open, but do not claim that the deformation also leaves the chiral symmetry unbroken.
## V SSH model
The Su-Schrieffer-Heeger (SSH) model, introduced in Ref. [21] to model a molecular chain of polyacetylene, is often considered as a prototypical example of a chiral-symmetric topological insulator. The Hamiltonian \(H_{\text{SSH}}\equiv H_{\text{SSH}}(\delta)\) acts on the Hilbert space \(\mathcal{H}=\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{2}\) as
\[\left(H_{\text{SSH}}\psi\right)_{n}=A_{1}\,\psi_{n+1}+A_{0}\,\psi_{n}+A_{1}^{* }\,\psi_{n-1}\,,\quad\psi\in\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{2}\,,\]
where
\[A_{1}=\begin{bmatrix}0&0\\ 1&0\end{bmatrix},\quad A_{0}\equiv A_{0}(\delta)=\begin{bmatrix}0&\delta\\ \delta&0\end{bmatrix},\quad\text{with}\quad\delta\in\mathbb{R}\,.\]
The SSH Hamiltonian thus fits Assumptions II.1 and II.2, provided we exclude some values of the parameter \(\delta\) which lead to gapless spectrum. To this end, we apply Lemma II.5 to obtain the fiber operators
\[H_{\text{SSH}}(k)=A_{0}+\text{e}^{-\text{i}k}A_{1}+\text{e}^{\text{i}k}A_{1}^ {*}=\begin{bmatrix}0&\delta+\text{e}^{\text{i}k}\\ \delta+\text{e}^{-\text{i}k}&0\end{bmatrix}\]
with eigenvalues (_Bloch bands_)
\[E_{\pm}(k):=\pm\sqrt{(\delta+\cos k)^{2}+(\sin k)^{2}}. \tag{20}\]
The SSH model represents an insulator if and only if \(E_{\pm}(k)\neq 0\) for all \(k\in\mathbb{R}\), that is, if \(\delta\notin\{-1,1\}\).
Moreover, it is easily verified that this model satisfies the CS hypothesis (or, more appropriately, a sublattice symmetry hypothesis), Assumption II.4, with respect to the CS operator \(S_{\text{C}}=\mathds{1}_{\ell^{2}(\mathbb{Z})}\otimes S\) with \(S=\sigma_{3}\), the third Pauli matrix, i.e.
\[S\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}x\\ -y\end{bmatrix}.\]
The SSH model is thus amenable to our analysis conducted in the previous Sections: as we will see, the parameter \(\delta\) can be used to toggle between topological and non-topological phases.
### The \(\mathbb{Z}_{2}\) invariant for the SSH model
In order to compute the invariant \(I_{\text{PHC}}(\delta)\equiv I_{\text{PHC}}(H_{\text{SSH}}(\delta))\) in (19), let us pick a smooth and periodic eigenvector for \(H_{\text{SSH}}(k)\) with negative eigenvalue. To this end, let us denote \(z(k):=\delta+\text{e}^{\text{i}k}\), so that in particular \(E_{\pm}(k)=\pm|z(k)|\) and \(z(k)\neq 0\) as long as the Hamiltonian is an insulator. Solving the eigenvalue equation \(H(k)\,u(k)=E_{-}(k)\,u(k)\) yields the normalized eigenvector
\[u(k)=\frac{1}{\sqrt{2}}\,\begin{bmatrix}-1\\ \sqrt{\overline{z(k)}/z(k)}\end{bmatrix}\]
from which we find, with a straightforward computation,
\[\left\langle u(k),u^{\prime}(k)\right\rangle=\frac{1}{4}\,\frac{\overline{z^{ \prime}(k)}\,z(k)-\overline{z(k)}\,z^{\prime}(k)}{\overline{z(k)}z(k)}=-\frac{ \mathrm{i}}{2}\,\operatorname{Im}\frac{z^{\prime}(k)}{z(k)}\,.\]
Therefore the \(\mathbb{Z}_{2}\) invariant can be computed as
\[I_{\mathrm{PHC}}(\delta) =\frac{1}{\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\left\langle u(k),u^{\prime}(k)\right\rangle\,\operatorname{mod}\,2=-\frac{1}{2\pi}\int_{S^{1}}\mathrm{d}k\,\operatorname{Im}\frac{z^{\prime}(k)}{z(k)}\,\operatorname{mod}\,2\] \[=\operatorname{Re}\left[-\frac{1}{2\pi\mathrm{i}}\,\int_{S^{1}}\mathrm{d}k\,\frac{z^{\prime}(k)}{z(k)}\right]\,\operatorname{mod}\,2\,.\]
By the Cauchy integral formula, the integral above computes the winding number around the origin of the curve \(\gamma\) in the complex plane obtained as the image of \(k\mapsto z(k)=\delta+\mathrm{e}^{\mathrm{i}k}\), namely a circle of unit radius centered around \(\delta\in\mathbb{R}\subset\mathbb{C}\). This winding number is an integer, and therefore the real part on the right-hand side of the above identity is redundant: in particular, the curve \(\gamma\) winds around the origin once if \(\delta\in(-1,1)\) and zero times otherwise. (Notice that \(\gamma\) passes through \(0\in\mathbb{C}\) if \(\delta=\pm 1\), which are excluded values if we want the spectral gap to be open.) We conclude that
\[I_{\mathrm{PHC}}(\delta)=\begin{cases}1&\text{if }\delta\in(-1,1)\,,\\ 0&\text{if }\delta\in\mathbb{R}\setminus[-1,1]\,.\end{cases}\]
Incidentally, let us observe that the integer winding number of \(k\mapsto z(k)\) coincides with the "bulk" topological invariant associated to 2-band chiral chains in Refs. [5] and [19].
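This closed-form answer is easy to cross-check numerically. A minimal sketch, assuming the gap condition \(\delta\neq\pm 1\) and a grid fine enough to unwrap the phase of \(z(k)\):

```python
import numpy as np

def ssh_invariant(delta, n_grid=2001):
    """I_PHC(delta) for the SSH chain: parity of the winding number of
    z(k) = delta + e^{ik} around the origin (assumes delta != +-1)."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid)
    z = delta + np.exp(1j * ks)
    phase = np.unwrap(np.angle(z))
    return int(round(abs(phase[-1] - phase[0]) / (2.0 * np.pi))) % 2

print(ssh_invariant(0.5), ssh_invariant(1.5))   # expected: 1 0
```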
### Boundary modes for the SSH model
When a topological insulator is cut along some edge, the _bulk-boundary correspondence_ predicts the appearance of new modes which fill the spectral gap of the bulk Hamiltonian and are spatially localized around the cut. We investigate this situation here for the SSH model, and verify that localized boundary modes appear exactly for the values of the parameter \(\delta\) for which \(I_{\mathrm{PHC}}\neq 0\in\mathbb{Z}_{2}\). For a general proof of a bulk-edge correspondence in (disordered) chiral chains, we refer the reader to Ref. [5].
Let us define a truncated version of the SSH Hamiltonian as follows: it will be given by the operator \(H^{\sharp}_{\mathrm{SSH}}\) acting on \(\mathcal{H}^{\sharp}:=\ell^{2}(\mathbb{N})\otimes\mathbb{C}^{2}\) as
\[(H^{\sharp}_{\mathrm{SSH}}\,\psi)_{n}:=(H_{\mathrm{SSH}}\,\psi)_ {n}\quad\text{if }n>0,\quad\text{and}\] \[(H^{\sharp}_{\mathrm{SSH}}\,\psi)_{0}:=A_{1}\,\psi_{1}+A_{0}\, \psi_{0},\quad\psi\in\ell^{2}(\mathbb{N})\otimes\mathbb{C}^{2}\,.\]
The truncated Hamiltonian imposes a Dirichlet condition on \(H_{\mathrm{SSH}}\) which sets \(\psi_{n}=0\) for negative \(n\). This boundary condition does not change the essential spectrum of \(H_{\mathrm{SSH}}\), which remains given by the two bands obtained as images of the functions defined in (20), but pure point spectrum may appear in the bulk gap. We therefore look for zero-energy modes of \(H^{\sharp}_{\mathrm{SSH}}\), namely for a vector \(\psi=([x_{n},y_{n}]^{\mathrm{T}})\in\mathcal{H}^{\sharp}\), \(\psi\neq 0\), such that \(H^{\sharp}_{\mathrm{SSH}}\,\psi=0\). Explicitly, this reads
\[\begin{bmatrix}0&0\\ 1&0\end{bmatrix}\begin{bmatrix}x_{n+1}\\ y_{n+1}\end{bmatrix}+\begin{bmatrix}0&\delta\\ \delta&0\end{bmatrix}\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix}+\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\begin{bmatrix}x_{n-1}\\ y_{n-1}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\quad\text{if }n>0\,,\quad\text{and}\] \[\begin{bmatrix}0&0\\ 1&0\end{bmatrix}\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}+\begin{bmatrix}0&\delta\\ \delta&0\end{bmatrix}\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\,.\]
This yields the recursion rule
\[\begin{cases}\delta\,x_{0}=-x_{1}\,,\\ \delta\,y_{0}=0\,,\\ x_{n+1}=-\delta\,x_{n}\quad\text{if }n>0\,,\\ \delta\,y_{n}=-y_{n-1}\quad\text{if }n>0\,.\end{cases}\]
This obviously means that \(x_{n}=(-\delta)^{n}\,x_{0}\) and \(y_{n}\equiv 0\). In order for the resulting vector \(\psi\) to be square-summable in the index \(n\in\mathbb{N}\), we therefore have to require
\[\|\psi\|^{2}_{\mathcal{H}^{\sharp}}=\sum_{n\in\mathbb{N}}|x_{n}|^{2}+|y_{n}|^{2}=|x_{0}|^{2}\sum_{n\in\mathbb{N}}|\delta|^{2n}<\infty\quad\Longleftrightarrow\quad|\delta|<1\,.\]
This range for \(\delta\) coincides exactly with the values of the parameter in which the \(\mathbb{Z}_{2}\) invariant \(I_{\mathrm{PHC}}(\delta)\) is non-trivial.
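The same conclusion can be corroborated by diagonalizing a large but finite truncation of \(H^{\sharp}_{\mathrm{SSH}}\); the sketch below uses a finite chain as a stand-in for the half-infinite lattice, so the zero mode is only approximate and a mirror mode appears at the opposite edge:

```python
import numpy as np

def truncated_ssh(delta, n_cells=80):
    """Finite-volume stand-in for H^#_SSH on n_cells unit cells of 2 sites:
    A_0 on the diagonal blocks, A_1 on the (n, n+1) blocks, A_1^* below."""
    A0 = np.array([[0.0, delta], [delta, 0.0]])
    A1 = np.array([[0.0, 0.0], [1.0, 0.0]])
    hop = np.kron(np.diag(np.ones(n_cells - 1), 1), A1)
    return np.kron(np.eye(n_cells), A0) + hop + hop.conj().T

for d in (0.5, 1.5):
    E = np.linalg.eigvalsh(truncated_ssh(d))
    print(d, np.min(np.abs(E)))   # ~0 for |delta| < 1, of order the gap otherwise
```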
## VI Kitaev chain
The Kitaev chain was introduced in Ref. [9] as a model for a topological superconductor. In the notation introduced in Section II the Hamiltonian \(H_{\text{Kit}}\equiv H_{\text{Kit}}(\mu,\delta)\) acts on the Hilbert space \(\mathcal{H}=\ell^{2}(\mathbb{Z})\otimes\mathbb{C}^{2}\) as
\[(H_{\text{Kit}}\,\psi)_{n}=A_{1}\,\psi_{n+1}+A_{0}\,\psi_{n}+A_{1}^{*}\psi_{n-1}\]
where
\[A_{1}\equiv A_{1}(\delta)=\begin{bmatrix}0&1+\delta\\ 1-\delta&0\end{bmatrix},\quad A_{0}\equiv A_{0}(\mu)=\begin{bmatrix}0&\mu\\ \mu&0\end{bmatrix},\quad\text{with}\quad\delta,\mu\in\mathbb{R}.\]
To check that this models an insulator, we compute the fiber operators as prescribed by Lemma II.5 and obtain
\[H_{\text{Kit}}(k)=\begin{bmatrix}0&(1+\delta)\,\mathrm{e}^{-\mathrm{i}k}+\mu+ (1-\delta)\,\mathrm{e}^{\mathrm{i}k}\\ (1-\delta)\,\mathrm{e}^{-\mathrm{i}k}+\mu+(1+\delta)\,\mathrm{e}^{\mathrm{i}k }&0\end{bmatrix},\]
which in turn yields the Bloch bands
\[E_{\pm}(k):=\pm\sqrt{(\mu+2\cos k)^{2}+(2\delta\sin k)^{2}}\,.\]
Therefore, the Kitaev Hamiltonian is gapped if and only if \(E_{\pm}(k)\neq 0\) for every \(k\in\mathbb{R}\): this leads us to exclude the set of parameters \((\mu,\delta)\) given by
\[\eta:=\left\{(\mu,\delta)\in\mathbb{R}^{2}:|\mu|<2\text{ and }\delta=0\quad\text{or} \quad|\mu|=2\text{ and }\delta\in\mathbb{R}\right\}\,.\]
If \((\mu,\delta)\in\mathbb{R}^{2}\setminus\eta\), the Kitaev Hamiltonian \(H_{\text{Kit}}\) thus verifies both Assumptions II.1 and II.2. Moreover, this model verifies the PHS hypothesis II.3 with respect to the operator \(S_{\text{PH}}=\mathds{1}_{\ell^{2}(\mathbb{Z})}\otimes C\) with
\[C\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}\overline{x}\\ -\overline{y}\end{bmatrix}.\]
The Kitaev chain also satisfies the chiral symmetry considered in the previous Section on the SSH model, but it is often considered paradigmatic for its anti-unitary particle-hole symmetry.
### The \(\mathbb{Z}_{2}\) invariant for the Kitaev chain
The invariant
\[I_{\text{PHC}}(\mu,\delta)\equiv I_{\text{PHC}}(H_{\text{Kit}}(\mu,\delta)) \in\mathbb{Z}_{2}\]
can be computed along the same lines as the argument for the SSH model, starting this time from the function
\[z(k):=(1+\delta)\,\mathrm{e}^{-\mathrm{i}k}+\mu+(1-\delta)\,\mathrm{e}^{ \mathrm{i}k}=\mu+2\cos k-2\mathrm{i}\delta\sin k,\quad k\in\mathbb{R}\,.\]
Its image \(\gamma\subset\mathbb{C}\) describes an ellipse centered around \(\mu\in\mathbb{R}\) with semi-axes \(2\) and \(2|\delta|\); its orientation is dictated by the sign of \(\delta\in\mathbb{R}\). As before, \(I_{\text{PHC}}(\mu,\delta)\in\mathbb{Z}_{2}\) computes the parity of the winding number of this curve around the origin in \(\mathbb{C}\), and therefore we conclude that
\[I_{\text{PHC}}(\mu,\delta)=\begin{cases}1&\text{if }|\mu|<2\text{ and }\delta\neq 0\,,\\ 0&\text{otherwise in }\mathbb{R}^{2}\setminus\eta\,.\end{cases} \tag{21}\]
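Formula (21) can be cross-checked numerically by accumulating the phase of \(z(k)\) along the Brillouin zone; in the sketch below (grid resolution and parameter values are illustrative choices) the winding number is computed and then reduced mod 2.

```python
import numpy as np

def winding(mu, delta, nk=4001):
    """Winding of z(k) = mu + 2 cos k - 2i delta sin k around the origin."""
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    z = mu + 2.0 * np.cos(k) - 2.0j * delta * np.sin(k)
    dphi = np.angle(z[1:] / z[:-1])          # phase increments in (-pi, pi]
    return int(round(dphi.sum() / (2.0 * np.pi)))

for mu, delta in [(0.5, 0.3), (0.5, -0.3), (1.5, 0.5), (3.0, 0.3)]:
    w = winding(mu, delta)
    print(f"mu = {mu}, delta = {delta}: winding = {w}, I_PHC = {abs(w) % 2}")
# |mu| < 2 and delta != 0 gives winding +1 or -1 (I_PHC = 1); |mu| > 2 gives 0.
```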
### Boundary modes for the Kitaev chain
As we did for the SSH model, we now pass to the investigation of zero-energy boundary modes for a truncated Kitaev chain. These modes are predicted to behave as Majorana particles, and their topological nature makes them amenable to applications in quantum computing as robust qubits.
The truncated operator \(H^{\sharp}_{\text{Kit}}\) acts on \(\mathcal{H}^{\sharp}=\ell^{2}(\mathbb{N})\otimes\mathbb{C}^{2}\) as
\[(H^{\sharp}_{\text{Kit}}\,\psi)_{n}:=(H_{\text{Kit}}\,\psi)_{n}\quad\text{if }n>0\,,\quad\text{and}\] \[(H^{\sharp}_{\text{Kit}}\,\psi)_{0}:=A_{1}\,\psi_{1}+A_{0}\,\psi_{0},\quad\psi\in\ell^{2}(\mathbb{N})\otimes\mathbb{C}^{2}\,.\]
This means that a vector \(\psi=([x_{n},y_{n}]^{\mathrm{T}})\in\mathcal{H}^{\sharp}\), \(\psi\neq 0\), such that \(H^{\sharp}_{\text{Kit}}\,\psi=0\) satisfies
\[\begin{bmatrix}0&1+\delta\\ 1-\delta&0\end{bmatrix}\begin{bmatrix}x_{n+1}\\ y_{n+1}\end{bmatrix}+\begin{bmatrix}0&\mu\\ \mu&0\end{bmatrix}\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix}+\begin{bmatrix}0&1-\delta\\ 1+\delta&0\end{bmatrix}\begin{bmatrix}x_{n-1}\\ y_{n-1}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\quad\text{if }n>0\,,\quad\text{and}\] \[\begin{bmatrix}0&1+\delta\\ 1-\delta&0\end{bmatrix}\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}+\begin{bmatrix}0&\mu\\ \mu&0\end{bmatrix}\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\,,\]
leading this time to the decoupled recursion rules
\[\begin{cases}(1-\delta)x_{1}+\mu\,x_{0}=0\,,\\ (1-\delta)x_{n+1}+\mu\,x_{n}+(1+\delta)\,x_{n-1}=0\quad\text{if }n>0\,,\end{cases} \tag{22}\]
and
\[\begin{cases}(1+\delta)y_{1}+\mu\,y_{0}=0\,,\\ (1+\delta)y_{n+1}+\mu\,y_{n}+(1-\delta)\,y_{n-1}=0\quad\text{if }n>0\,.\end{cases} \tag{23}\]
**Remark VI.1**.: Notice that one is obtained from the other by the exchange \(\delta\leftrightarrow-\delta\): in particular, if the "particle" \(\psi=([x_{n}(\mu,\delta),0]^{\mathrm{T}})\in\mathcal{H}^{\sharp}\) is a solution for the first recurrence relation, then the "hole" \(\psi=([0,x_{n}(\mu,-\delta)]^{\mathrm{T}})\in\mathcal{H}^{\sharp}\) is a solution for the second recurrence relation, and vice versa.
**The case \(\delta=\pm 1\).** The situation is simpler when \(\delta=\pm 1\): indeed, if \(\delta=1\) the relation (23) reduces to
\[\begin{cases}2y_{1}+\mu\,y_{0}=0\,,\\ 2\,y_{n+1}+\mu\,y_{n}=0\quad\text{if }n>0\,,\end{cases}\]
which is of the form already encountered in the SSH model. In particular, the solution \(y_{n}=(-\mu/2)^{n}y_{0}\) is square-summable if and only if \(|\mu|<2\). Notice that \(I_{\text{PHC}}(\mu,1)\) is non-trivial exactly in this range of parameters. Similar considerations hold for (22) starting from \(\delta=-1\).
**The characteristic equation.** From now on we will suppose \(\delta\neq\pm 1\). Moreover, in view of Remark VI.1, it will be enough to restrict the range of parameters to \((\mathbb{R}^{2}\setminus\eta)\cap\{\delta\geq 0,\,\delta\neq 1\}\) and solve the system (23). Under these assumptions, the latter can be recast as
\[\begin{cases}y_{n+1}+\frac{\mu}{1+\delta}y_{n}+\frac{1-\delta}{1+\delta}y_{n-1 }=0\quad\text{for }n\in\mathbb{N}\,,\\ y_{-1}=0,\quad y_{0}\neq 0\end{cases} \tag{24}\]
(the condition on \(y_{0}\) is to exclude the trivial solution \(y_{n}\equiv 0\), and hence \(\psi=0\in\mathcal{H}^{\sharp}\), which cannot define an eigenvector for \(H^{\sharp}_{\text{Kit}}\)). The theory of homogeneous linear difference equations with constant coefficients [2] prompts us to consider the _characteristic equation_
\[\lambda^{2}+\frac{\mu}{1+\delta}\,\lambda+\frac{1-\delta}{1+\delta}=0\,. \tag{25}\]
If \(\lambda_{+}\neq\lambda_{-}\) are the distinct roots of the above equation, then the general solution of (24) is given by
\[y_{n}=a\,\lambda_{+}^{n}+b\,\lambda_{-}^{n}\,,\]
while if \(\lambda_{+}=\lambda_{-}=\lambda\) solutions are of the form
\[y_{n}=(a+b\,n)\,\lambda^{n}\,.\]
In both cases, \(a,b\) are constants chosen to fit the initial conditions for the recursion: in particular they cannot be simultaneously zero if we want a non-trivial solution. The above solutions can be easily verified by plugging an _ansatz_ of the form \(y_{n}=\lambda^{n}\) into the recurrence relation.
Let us consider first the case \(\lambda_{+}=\lambda_{-}=\lambda\). In order for \(\psi=([0,y_{n}]^{\mathrm{T}})\) to be square-summable, we need to impose that
\[\left\|\psi\right\|_{\mathcal{H}^{\sharp}}^{2}=\sum_{n\in\mathbb{N}}|y_{n}|^{2}=\sum_{n\in\mathbb{N}}|a+b\,n|^{2}\,|\lambda|^{2n}<\infty.\]
The behaviour of the series on the right-hand side is dictated by its geometric part, and hence it converges if and only if \(|\lambda|<1\). In the case \(\lambda_{+}\neq\lambda_{-}\), we appeal to the following
**Lemma VI.2**.: _Given \(a,b\in\mathbb{C}\), the series_
\[\sum_{n\in\mathbb{N}}\left|a\,\lambda_{+}^{n}+b\,\lambda_{-}^{n}\right|^{2}\]
_converges if and only if \(|\lambda_{+}|<1\) and \(|\lambda_{-}|<1\)._
Proof.: The statement is clearly true if either \(\lambda_{+}=0\) or \(\lambda_{-}=0\), or if \(a=0\) or \(b=0\). Hence let us assume that \(a\neq 0\neq b\) and, without loss of generality, that \(|\lambda_{+}|>|\lambda_{-}|>0\). If \(|\lambda_{+}|>1\) then
\[\left|a\,\lambda_{+}^{n}+b\,\lambda_{-}^{n}\right|^{2}=|\lambda_{+}|^{2n}\, \left|a+b\,\left(\frac{\lambda_{-}}{\lambda_{+}}\right)^{n}\right|^{2} \longrightarrow\infty\quad\text{for }n\rightarrow\infty\]
so the series cannot converge; if \(|\lambda_{+}|=1\) the general term tends to \(|a|^{2}>0\), so the series diverges in this case as well. Vice versa, if both \(\lambda_{+}\) and \(\lambda_{-}\) are smaller than \(1\) in absolute value, we can write
\[\sum_{n\in\mathbb{N}}\left|a\,\lambda_{+}^{n}+b\,\lambda_{-}^{n}\right|^{2}= \sum_{n\in\mathbb{N}}|a|^{2}\,\,|\lambda_{+}|^{2n}+|b|^{2}\,\,|\lambda_{-}|^{2 n}+2\,\text{Re}\left[\overline{a}b\,\left(\overline{\lambda_{+}}\lambda_{-} \right)^{n}\right]\]
which is a sum of geometric series whose ratios are complex numbers with modulus smaller than \(1\), so they converge.
With the above considerations at hand, we are led to investigate the roots of the characteristic equation (25), namely
\[\lambda_{\pm}:=\frac{1}{2}\left[-\frac{\mu}{1+\delta}\pm\sqrt{\frac{\mu^{2}-4 \left(1-\delta\right)\left(1+\delta\right)}{\left(1+\delta\right)^{2}}}\right] =\frac{1}{2}\left[-\frac{\mu}{1+\delta}\pm\sqrt{\frac{\mu^{2}-4+4\,\delta^{2}} {\left(1+\delta\right)^{2}}}\right]\]
and to study when they are both smaller than \(1\) in absolute value.
**Coinciding roots of the characteristic equation.** If \(\lambda_{+}=\lambda_{-}=-\mu/2(1+\delta)\), then, observing that we are restricting to \(\delta\geq 0\), we have that
\[\left|-\frac{\mu}{2\left(1+\delta\right)}\right|<1\quad\Longleftrightarrow\quad|\mu|<2\left(1+\delta\right)\,,\]
a condition which certainly holds whenever \(|\mu|<\min_{\delta\geq 0}2\left(1+\delta\right)=2\,.\)
**Complex roots of the characteristic equation.** If \(\mu^{2}+4\,\delta^{2}<4\), the characteristic equation has two distinct complex roots
\[\lambda_{\pm}=\frac{1}{2}\left[-\frac{\mu}{1+\delta}\pm\mathrm{i}\sqrt{\frac{4 -4\,\delta^{2}-\mu^{2}}{\left(1+\delta\right)^{2}}}\right]\,.\]
Since \(|\lambda|<1\) if and only if \(|\lambda|^{2}<1\), we can bound
\[\left|\lambda_{+}\right|^{2}=\left|\lambda_{-}\right|^{2}=\frac{1}{4}\left[ \frac{\mu^{2}}{\left(1+\delta\right)^{2}}+\frac{4-4\,\delta^{2}-\mu^{2}}{\left( 1+\delta\right)^{2}}\right]=\frac{1-\delta^{2}}{\left(1+\delta\right)^{2}}=1- \frac{2\,\delta}{1+\delta}<1\]
for all \(\delta>0\). If \(\delta=0\), the relation \(\mu^{2}+4\delta^{2}<4\) forces \(|\mu|<2\): but the region \(\{|\mu|<2\text{ and }\delta=0\}\) in parameter space lies in \(\eta\), and is therefore excluded from our analysis. We conclude that once again square-integrable solutions of the recurrence relation (24) exist whenever \(\mu^{2}+4\delta^{2}<4\) and \(\delta>0\); notice that \(|\mu|<2\) throughout this region.
**Real roots of the characteristic equation.** Finally, when \(\mu^{2}+4\,\delta^{2}>4\) the characteristic equation has two distinct real roots
\[\lambda_{\pm}=\frac{1}{2}\left[-\frac{\mu}{1+\delta}\pm\sqrt{\frac{\mu^{2}-4+4 \,\delta^{2}}{\left(1+\delta\right)^{2}}}\right]\,.\]
We are led this time to solve the system
\[\begin{cases}\frac{\mu^{2}}{4\left(1+\delta\right)^{2}}+\frac{\mu^{2}+4\delta^ {2}-4}{4\left(1+\delta\right)^{2}}-\frac{\mu}{1+\delta}\sqrt{\frac{\mu^{2}+4 \delta^{2}-4}{4\left(1+\delta\right)^{2}}}<1\\ \frac{\mu^{2}}{4\left(1+\delta\right)^{2}}+\frac{\mu^{2}+4\delta^{2}-4}{4 \left(1+\delta\right)^{2}}+\frac{\mu}{1+\delta}\sqrt{\frac{\mu^{2}+4\delta^ {2}-4}{4\left(1+\delta\right)^{2}}}<1\end{cases}\]
which after a lengthy but straightforward computation is shown to be equivalent to \(|\mu|<2\).
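This equivalence is also easy to spot-check numerically. The sketch below (grid ranges are illustrative; \(\delta>0\) is kept away from the gapless set \(\eta\)) verifies over a \((\mu,\delta)\) grid that both roots of (25) lie in the open unit disk exactly when \(|\mu|<2\).

```python
import numpy as np

def roots_in_unit_disk(mu, delta):
    """Both roots of l^2 + mu/(1+delta) l + (1-delta)/(1+delta) = 0 in |l| < 1?"""
    r = np.roots([1.0, mu / (1.0 + delta), (1.0 - delta) / (1.0 + delta)])
    return bool(np.all(np.abs(r) < 1.0))

mismatches = 0
for mu in np.linspace(-4.0, 4.0, 161):
    if abs(abs(mu) - 2.0) < 1e-9:
        continue                              # skip the gapless boundary |mu| = 2
    for delta in np.linspace(0.05, 3.0, 60):  # delta > 0, away from eta
        if roots_in_unit_disk(mu, delta) != (abs(mu) < 2.0):
            mismatches += 1
print("mismatches:", mismatches)              # expected: 0
```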
**Conclusions.** In conclusion, if we combine all the previous results we obtain that, if \(\delta\geq 0\) (and, in view of Remark VI.1, also if \(\delta\leq 0\)), there is a zero-energy boundary state for the truncated Kitaev chain if and only if \(|\mu|<2\). Comparing with (21), we conclude that, as in the SSH model, the presence of boundary states occurs exactly for those values of the physical parameters \(\mu,\delta\) for which the \(\mathbb{Z}_{2}\) invariant is non-trivial.
###### Acknowledgements.
D. M. wishes to thank the organizers of the workshop "Learning from Insulators: New Trends in the Study of Conductivity of Metals" (Giuseppe De Nittis, Max Lein, Constanza Rojas-Molina, and Marcello Seri) for the invitation to participate in the event and their commitment to its organization even in the most trying times of the COVID-19 pandemic. The authors also gratefully acknowledge financial support from Sapienza Universita di Roma within Progetto di Ricerca di Ateneo 2020 (grant no. RM120172AE419BE1) and 2021 (grant no. RM12117A86FB96EE). This work has been carried out under the auspices of the GNFM-INdAM (Gruppo Nazionale per la Fisica Matematica - Istituto Nazionale di Alta Matematica), and within the framework of the activities for PNRR MUR Project PE0000023-NQSTI.
## Data Availability Statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
## Appendix A Winding numbers for unitary matrices
This Appendix is devoted to giving some insights into the proof of Theorem IV.2; more details can be found e.g. in Ref. [12], Section III. The statement that \(U(N)\)-valued maps on \(S^{1}\) are characterized by their winding number up to homotopy can be essentially reduced to the analogous fact for \(N=1\), namely that maps \(S^{1}\to U(1)\simeq S^{1}\) are described by the way they wind around the origin in the complex plane. The reduction step is based on the following
**Proposition A.1**.: _If \(U\colon\mathbb{R}\to U(N)\) is at least \(C^{1}\), then_
\[\frac{\partial_{k}\det U(k)}{\det U(k)}=\operatorname{tr}\left(U(k)^{*}U^{ \prime}(k)\right).\]
Proof.: Since \(U(k)\) is unitary we have \(U(k)U(k)^{*}=\mathds{1}\), and using the fact that \(\det(\mathds{1}+H)=1+\operatorname{tr}(H)+\mathcal{O}(\|H\|^{2})\) we can compute
\[\partial_{k}\det U(k) =\lim_{h\to 0}\frac{\det U(k+h)-\det U(k)}{h}=\det U(k)\cdot \lim_{h\to 0}\frac{\det\left(\mathds{1}-U(k)^{*}U(k)+U(k)^{*}U(k+h)\right)-1}{ h}\] \[=\det U(k)\cdot\lim_{h\to 0}\frac{\operatorname{tr}\left(U(k)^{*}U (k+h)-U(k)^{*}U(k)\right)}{h}=\det U(k)\cdot\operatorname{tr}\left\{U(k)^{*} \cdot\left[\lim_{h\to 0}\frac{U(k+h)-U(k)}{h}\right]\right\}\] \[=\det U(k)\cdot\operatorname{tr}\left(U(k)^{*}U^{\prime}(k) \right).\qed\]
The above result immediately gives that \(\operatorname{wn}([U])\) defined in (17) can be computed as
\[\operatorname{wn}([U])=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\frac{\partial_{k}\det U(k)}{\det U(k)}\,.\]
We thus shift our focus to the function \(u:=\det U\colon S^{1}\to U(1)\simeq S^{1}\).
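Both Proposition A.1 and the integral formula for \(\operatorname{wn}([U])\) lend themselves to a direct numerical test. In the sketch below (the conjugating unitary \(V\) and the exponents \(n_{1},n_{2}\) are illustrative choices of our own) the winding is recovered from a discretized version of the trace formula; for the chosen family \(\det U(k)=\mathrm{e}^{\mathrm{i}(n_{1}+n_{2})k}\), so the expected value is \(n_{1}+n_{2}\).

```python
import numpy as np

def winding_of_unitary(U, nk=2000):
    """wn([U]) = (1/2 pi i) * integral of tr(U(k)* U'(k)) dk over one period."""
    k = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    dk = k[1] - k[0]
    Us = np.array([U(kk) for kk in k])
    dU = (np.roll(Us, -1, axis=0) - np.roll(Us, 1, axis=0)) / (2.0 * dk)  # periodic central difference
    tr = np.einsum('nij,nji->n', Us.conj().transpose(0, 2, 1), dU)        # tr(U(k)* U'(k))
    return (tr.sum() * dk / (2.0j * np.pi)).real

rng = np.random.default_rng(1)
V = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))[0]
n1, n2 = 2, -1   # det U(k) = exp(i (n1+n2) k), so the winding number is n1 + n2 = 1
U = lambda kk: V @ np.diag([np.exp(1j * n1 * kk), np.exp(1j * n2 * kk)]) @ V.conj().T
print(winding_of_unitary(U))   # ~ 1.0
```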
**Proposition A.2**.: _Let \(f,g\colon S^{1}\to S^{1}\) be two differentiable maps. The integral_
\[w(f):=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\frac{f^{\prime}(k)}{f(k)}\]
_is integer-valued, and depends only on the homotopy class of \(f\): it is called the winding number of \(f\). Moreover, the winding number of the product is the sum of the two winding numbers:_
\[w(f\cdot g)=w(f)+w(g).\]
Proof.: The image of \(f\colon S^{1}\to S^{1}\) can be seen as a rectifiable and closed curve \(\gamma\subset\mathbb{C}\). By the Cauchy integral formula
\[\int_{S^{1}}\mathrm{d}k\,\frac{f^{\prime}(k)}{f(k)}=\int_{\gamma}\mathrm{d}z\, \frac{1}{z}=2\pi\mathrm{i}\operatorname{Ind}_{\gamma}(0)\]
where \(\operatorname{Ind}_{\gamma}(0)\) computes the winding number of the curve \(\gamma\) around the origin in \(\mathbb{C}\). As is well known, this quantity is integer-valued.
If \(f_{s}\colon S^{1}\to S^{1}\), \(s\in[0,1]\), is a smooth homotopy between two maps \(f_{0}\) and \(f_{1}\), then
\[\frac{\mathrm{d}}{\mathrm{d}s}\int_{S^{1}}\mathrm{d}k\,\frac{f^{\prime}_{s}(k)}{f_{s}(k)}=\int_{S^{1}}\mathrm{d}k\,\frac{\partial_{s}\partial_{k}f_{s}(k)\,f_{s}(k)-\partial_{k}f_{s}(k)\,\partial_{s}f_{s}(k)}{f_{s}(k)^{2}}=\int_{S^{1}}\mathrm{d}k\,\partial_{k}\left(\frac{\partial_{s}f_{s}(k)}{f_{s}(k)}\right)=0\]
and therefore \(w(f)\) is constant along the homotopy class of \(f\).
Finally, to show the additive property of the winding number, it suffices to compute
\[w(f\cdot g)=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\frac{\partial_{k}(fg)(k)}{f(k)g(k)}=\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\frac{\partial_{k}f(k)}{f(k)}+\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\mathrm{d}k\,\frac{\partial_{k}g(k)}{g(k)}=w(f)+w(g)\,.\qed\]
**Remark A.3**.: Let \(\iota\colon S^{1}\to S^{1}\) be the involution \(k\mapsto-k\). For \(f\colon S^{1}\to S^{1}\), we then have
\[w(f\circ\iota)=-w(f)\,.\]
Indeed, we can compute \((f\circ\iota)^{\prime}(k)=-f^{\prime}(-k)\) and therefore
\[2\pi\mathrm{i}\,w(f\circ\iota)=-\int_{0}^{2\pi}\mathrm{d}k\,\frac{f^{\prime}(-k)}{f(-k)}=\int_{0}^{-2\pi}\mathrm{d}\kappa\,\frac{f^{\prime}(\kappa)}{f(\kappa)}=-\int_{-2\pi}^{0}\mathrm{d}\kappa\,\frac{f^{\prime}(\kappa)}{f(\kappa)}=-\int_{0}^{2\pi}\mathrm{d}\kappa\,\frac{f^{\prime}(\kappa)}{f(\kappa)}=-2\pi\mathrm{i}\,w(f)\]
where in the second equality we used the substitution \(\kappa=\iota(k)=-k\) (from which \(\mathrm{d}\kappa=-\mathrm{d}k\)) and in the second-to-last equality we shifted the integration interval by periodicity of \(f\).
**Proposition A.4**.: _The map_
\[\pi_{1}\left(S^{1}\right)\to\mathbb{Z},\quad[f]\mapsto\frac{1}{2\pi\mathrm{i} }\int_{S^{1}}\mathrm{d}k\,\frac{f^{\prime}(k)}{f(k)}\]
_is a bijection._
Proof.: See e.g. Ref. [10], Chapter 1, Section 5 or Ref. [12], pages 7-8.
|
2301.10452 | A Chern-Simons model for baryon asymmetry | In search of a phenomenological model that would describe physics from Big
Bang to the Standard Model (SM), we propose a model with the following
properties (i) above an energy about Lambda_{cr} > 10^{16} GeV there are
Wess-Zumino supersymmetric preons and Chern-Simons (CS) fields, (ii) at
Lambda_{cr} ~ 10^{16} GeV spontaneous gauge symmetry breaking takes place in
the CS model and the generated topological mass providing attractive
interaction to equal charge preons, and (iii) well below 10^{16} GeV the model
reduces to the standard model with essentially pointlike quarks and leptons,
having a radius ~ 10^{-31} m. The baryon asymmetry turns out to have a
fortuitous ratio n_B/n_gamma << 1. | Risto Raitio | 2023-01-25T08:07:37Z | http://arxiv.org/abs/2301.10452v3 | # A Chern-Simons model for baryon asymmetry
###### Abstract
In search of a phenomenological model that would describe physics from Big Bang to the Standard Model (SM), we propose a model with the following properties (i) above an energy about \(\Lambda_{cr}>10^{16}\) GeV there are Wess-Zumino supersymmetric preons and Chern-Simons (CS) fields, (ii) at \(\Lambda_{cr}\sim 10^{16}\) GeV spontaneous gauge symmetry breaking takes place in the CS sector and the generated topological mass provides an attractive interaction to equal charge preons, (iii) well below \(10^{16}\) GeV the model reduces to the standard model with essentially pointlike quarks and leptons, having a radius \(\sim 1/\Lambda_{cr}\approx 10^{-31}\) m. The baryon asymmetry turns out to have a fortuitous ratio \(n_{B}/n_{\gamma}\ll 1\).
_Keywords:_ Isospin violation, Baryon asymmetry, Supersymmetry, Chern-Simons model
DOI: [https://doi.org/10.1016/j.nuclphysb.2023.116174](https://doi.org/10.1016/j.nuclphysb.2023.116174)
###### Contents
* 1 Introduction
* 2 Wess-Zumino action kinetic terms
* 3 Chern-Simons-QED\({}_{3}\) action
* 4 Chernon-Chernon interaction
* 5 Inflation and Supergravity
* 6 Sakharov conditions
* 7 Baryon asymmetry
* 8 Nucleon isospin violation
* 9 Conclusions and Outlook
* A Chernon-particle correspondence
## 1 Introduction
The observation of the expanding, baryon-asymmetric universe is about 100 years old. The concordance (standard) model of cosmology [1] has been developed to explain these observations, though not with the same precision and reach as the standard model of particle physics [2]. Missing pieces remain, like dark matter, baryon asymmetry and quantum gravity.
In this note we take a first step towards a model for both particles and cosmology, with simplicity as the main principle of unification. It is commonly understood that the quark electric charges and the running interaction coupling constants in the standard model (SM) imply a large unified gauge group with a rich spectrum of particles. We take the alternative position of keeping the number of elementary particles small, determined by global supersymmetry, with fermions obeying the Dirac equation with SM gauge interactions. In addition, we reinforce our previous model with topological concepts from the Chern-Simons model. On the other hand, we do not exclude any current structure, like string theory, loop quantum gravity, etc., but we want to start from certain, in our opinion, simpler concepts and see how far they can take us.
So we split quarks and leptons into three pointlike constituents, called in this note chernons (a synonym for preon1 or superon). The reason for doing so is disclosed in section 7. Of the many preon models in the literature, two are like ours. One of them was proposed by Harari, and independently by Shupe [4, 5]. The model of Finkelstein [6] was developed from a different basis, including the quantum symmetry group SLq(2) and knot theory. It turned out, however, to agree with [4, 5]. The major difference between the above models and our model [7, 8] is that ours is based on unbroken global supersymmetry, where the superpartners are in the model from the start, not as new sparticles to be found in the future.
The scale where three-chernon bound states form, making the standard model particles in 1+2 dimensions, is assumed to be near the usual grand unified theory (GUT) scale, about \(10^{16}\) GeV, denoted here by \(\Lambda_{cr}\). Below \(\Lambda_{cr}\) all four preon scenarios of the previous paragraph revert to the standard model at accelerator energies. Above \(\Lambda_{cr}\), in the early universe, chernons were momentarily nearly fixed in the comoving frame of the rapidly expanding universe, making the 1+2 dimensional potential-energy description a reasonable approximation.
Chern-Simons-Maxwell (CSM) models (3.1) have been studied in condensed matter physics papers, e.g. [9, 10, 11]. In this note we apply the CSM model to particle physics phenomenology at high energy in the early universe.
We construct the visible matter from two kinds of fermionic chernons: (i) one charged, \(m^{-}\), and (ii) one neutral, \(m_{V}^{0}\), V = R, G, B, carrying quantum chromodynamics (QCD) color, together with the photon \(A\). The action is C symmetric. The chernons have zero (or small) mass. Weak interactions operate below \(\Lambda_{cr}\) between quarks and leptons, just as in the SM. The chernon baryon (B) and lepton (L) numbers are zero. Given these quantum numbers, quarks consist of three chernons, as indicated in table 1. There could be more composite states, like those containing a \(m^{+}m^{-}\) pair. Such a pair annihilates immediately into other particles, which later form leptons and quarks.
The article is organized as follows. In section 2 we recap the Wess-Zumino model kinetic terms of the supersymmetric chernons and some scalars. The full Chern-Simons-QED\({}_{3}\) action is given in section 3. The chernon-chernon interaction potential is disclosed in section 4. The transformation from chernons to quarks and leptons takes place during inflation, which is briefly reviewed in section 5. The Sakharov conditions are discussed in section 6. Our mechanism for
\begin{table}
\begin{tabular}{|l|l|} \hline SM quark & chernon state \\ \hline \(u_{R}\) & \(m^{+}m^{+}m_{R}^{0}\) \\ \(u_{G}\) & \(m^{+}m^{+}m_{G}^{0}\) \\ \(u_{B}\) & \(m^{+}m^{+}m_{B}^{0}\) \\ \hline \(d_{R}\) & \(m^{-}m_{G}^{0}m_{B}^{0}\) \\ \(d_{G}\) & \(m^{-}m_{B}^{0}m_{R}^{0}\) \\ \(d_{B}\) & \(m^{-}m_{R}^{0}m_{G}^{0}\) \\ \hline \end{tabular}
\end{table}
Table 1: Quark-chernon correspondence. The upper index of \(m\) is charge zero or \(\pm\frac{1}{3}\). The lower index is color \(R,G\) or \(B\).
baryon asymmetry is proposed in section 7, and nucleon isospin violation is discussed in section 8. Conclusions are given in section 9. In appendix A a table of visible and dark matter is displayed.
## 2 Wess-Zumino action kinetic terms
We briefly recap our chernon (superon) scenario of [7, 8], which turned out to have a close resemblance to the simplest \(N=1\) globally supersymmetric 1+3 model, namely the free, massless Wess-Zumino model [12, 13], with the kinetic Lagrangian including three neutral fields \(m\), \(s\), and \(p\) with \(J^{P}=\frac{1}{2}^{+},0^{+}\), and \(0^{-}\), respectively
\[{\cal L}_{\rm WZ}=-\frac{1}{2}\bar{m}\partial\!\!\!/m-\frac{1}{2}(\partial s) ^{2}-\frac{1}{2}(\partial p)^{2} \tag{2.1}\]
where \(m\) is a Majorana spinor, \(s\) and \(p\) are real fields (metric is mostly plus).
We assume that the pseudoscalar \(p\) is the axion [14], and denote it below as \(a\). It has a fermionic superpartner, the axino \(n\), and a bosonic superpartner, the saxion \(s^{0}\).
In order to have visible matter we assume the following charged chiral field Lagrangian
\[{\cal L}_{-}=-\frac{1}{2}\bar{m}^{-}\partial\!\!\!/m^{-}-\frac{1}{2}(\partial s_{i}^{-})^{2},\ \ i=1,2 \tag{2.2}\]
## 3 Chern-Simons-QED\({}_{3}\) action
A number of 1+2 dimensional models have properties close to those of the 1+3 dimensional world, as can be found in [9, 15, 16]; see also [17]. Our choice here is the 1+2 dimensional Chern-Simons (CS) action [18, 19]
\[S=\frac{k}{4\pi}\int_{M}{\rm tr}({\rm A}\wedge{\rm d}{\rm A}+\frac{2}{3}{\rm A }\wedge{\rm A}\wedge{\rm A}) \tag{3.1}\]
where \(k\) is the level of the theory and \(A\) the connection. (The compatibility of different dimensions is discussed in section 5.)
The action for a Chern-Simons-QED\({}_{3}\) model [11, 20], including two fermionic fields of \(\pm\) polarization \((\psi_{+},\psi_{-})\), a gauge field \(A_{\mu}\) and a complex scalar field \(\varphi\) with spontaneous breaking of the local U(1) symmetry, is
\[S_{\rm CS-QED_{3}}=\int d^{3}x\{-\frac{1}{4}F^{\mu\nu}F_{\mu\nu} +i\overline{\psi}_{+}\gamma^{\mu}D_{\mu}\psi_{+}+i\overline{\psi}_{-}\gamma^{ \mu}D_{\mu}\psi_{-}\] \[+\frac{1}{2}\theta\epsilon^{\mu\nu\alpha}A_{\mu}\partial_{\nu}A_ {\alpha}-m_{e}(\overline{\psi}_{+}\psi_{+}-\overline{\psi}_{-}\psi_{-})\] \[-y(\overline{\psi}_{+}\psi_{+}-\overline{\psi}_{-}\psi_{-}) \varphi^{*}\varphi+D^{\mu}\varphi^{*}D_{\mu}\varphi-V(\varphi^{*}\varphi)\}, \tag{3.2}\]
where the covariant derivatives are \(D_{\mu}\psi_{\pm}=(\partial_{\mu}+ie_{3}A_{\mu})\psi_{\pm}\) and \(D_{\mu}\varphi=(\partial_{\mu}+ie_{3}A_{\mu})\varphi\). \(\theta\) is the important topological parameter and \(e_{3}\) is the coupling
constant of the \(U(1)\) local gauge symmetry, here with dimension of \(({\rm mass})^{1/2}\). \(V(\varphi^{*}\varphi)\) represents the self-interaction potential,
\[V(\varphi^{*}\varphi)=\mu^{2}\varphi^{*}\varphi+\frac{\zeta}{2}(\varphi^{*} \varphi)^{2}+\frac{\lambda}{3}(\varphi^{*}\varphi)^{3} \tag{3.3}\]
which is the most general sixth power renormalizable potential in 1+2 dimensions [21]. The parameters \(\mu,\ \zeta,\ \lambda\) and \(y\) have mass dimensions 1, 1, 0 and 0, respectively. For potential parameters \(\lambda>0,\zeta<0\) and \(\mu^{2}\leq 3\zeta^{2}/(16\lambda)\) the vacua are stable.
In 1+2 dimensions, a fermionic field has its spin polarization fixed by the sign of its mass [22]. The model includes two positive-energy spinors (two spinor families). Both of them obey the Dirac equation, each one with one polarization state according to the sign of the mass parameter.
The vacuum expectation value \(v\) of the scalar field \(\varphi\) is given by:
\[\langle\varphi^{*}\varphi\rangle=v^{2}=-\zeta/\left(2\lambda\right)+\left[ \left(\zeta/\left(2\lambda\right)\right)^{2}-\mu^{2}/\lambda\right]^{1/2} \tag{3.4}\]
The condition for its minimum is \(\mu^{2}+\zeta v^{2}+\lambda v^{4}=0\). After the spontaneous symmetry breaking, the scalar complex field can be parametrized by \(\varphi=v+H+i\theta\), where \(H\) represents the Higgs scalar field and \(\theta\) the would-be Goldstone boson. For manifest renormalizability one adopts the 't Hooft gauge by adding the gauge fixing term \(S_{R_{\xi}}^{gt}=\int d^{3}x[-\frac{1}{2\xi}(\partial^{\mu}A_{\mu}-\sqrt{2}\xi M_{A}\theta)^{2}]\) to the broken action. Keeping only the bilinear and the Yukawa interaction terms one has the following action
\[S_{\rm CS-QED}^{\rm SSB} =\int d^{3}x\biggl\{-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\frac{1}{2}M_{A}^{2}A^{\mu}A_{\mu}\] \[-\frac{1}{2\xi}(\partial^{\mu}A_{\mu})^{2}+\overline{\psi}_{+}(i\not{\partial}-m_{eff})\psi_{+}\] \[+\overline{\psi}_{-}(i\not{\partial}+m_{eff})\psi_{-}+\frac{1}{2}\theta\epsilon^{\mu\nu\alpha}A_{\mu}\partial_{\nu}A_{\alpha}\] \[+\partial^{\mu}H\partial_{\mu}H-M_{H}^{2}H^{2}+\partial^{\mu}\theta\partial_{\mu}\theta-M_{\theta}^{2}\theta^{2}\] \[-2yv(\overline{\psi}_{+}\psi_{+}-\overline{\psi}_{-}\psi_{-})H-e_{3}\left(\overline{\psi}_{+}\not{\cal A}\psi_{+}+\overline{\psi}_{-}\not{\cal A}\psi_{-}\right)\biggr\} \tag{3.5}\]
where the mass parameters
\[M_{A}^{2}=2v^{2}e_{3}^{2},\ \ m_{eff}=m_{e}+yv^{2},\ \ M_{H}^{2}=2v^{2}( \zeta+2\lambda v^{2}),\ \ M_{\theta}^{2}=\xi M_{A}^{2} \tag{3.6}\]
depend on the SSB mechanism. The Proca mass, \(M_{A}^{2}\), originates from the Higgs mechanism. The Higgs mass, \(M_{H}^{2}\), is associated with the real scalar field. The Higgs mechanism also contributes to the chernon mass, resulting in an effective mass \(m_{eff}\). There are two photon mass-terms in (3.5), the Proca one and the topological one.
## 4 Chernon-Chernon interaction
The chernon-chernon scattering amplitude in the non-relativistic approximation is obtained by calculating the t-channel exchange diagrams of the Higgs scalar and the massive gauge field. The propagators of the two exchanged particles and the vertex factors are calculated from the action (3.5) [11].
The gauge invariant effective potential for the scattering considered is obtained in [23, 24]
\[V_{\rm CS}(r)=\frac{e^{2}}{2\pi}\left[1-\frac{\theta}{m_{e}}\right]K_{0}( \theta r)+\frac{1}{m_{e}r^{2}}\left\{l-\frac{e^{2}}{2\pi\theta}[1-\theta rK_{ 1}(\theta r)]\right\}^{2} \tag{4.1}\]
where \(K_{0}(x)\) and \(K_{1}(x)\) are the modified Bessel functions and \(l\) is the angular momentum (\(l=0\) in this note). In (4.1) the first term \([\ ]\) corresponds to the electromagnetic potential, the second one \(\{\ \}^{2}\) contains the centrifugal barrier \(\left(l/mr^{2}\right)\), the Aharonov-Bohm term and the two photon exchange term.
One sees from (4.1) that the first term may be positive or negative, while the second term is always positive. The function \(K_{0}(x)\) diverges as \(x\to 0\) and approaches zero for \(x\to\infty\), and \(K_{1}(x)\) has qualitatively similar behavior. For our scenario we need a negative potential between equal charge chernons. Having no data points for the several parameters in (4.1), we can only give one relation between these parameter values that yields a negative potential. We must have the condition2
Footnote 2: For applications to condensed matter physics, one must require \(\theta\ll m_{e}\), and the scattering potential given by (4.1) then comes out positive [11].
\[\theta\gg m_{e} \tag{4.2}\]
The potential (4.1) also depends on \(v^{2}\), the vacuum expectation value, and on \(y\), the parameter that measures the coupling between fermions and Higgs scalar. Being a free parameter, \(v^{2}\) indicates the energy scale of the spontaneous breakdown of the \(U(1)\) local symmetry.
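To make the condition (4.2) concrete, the sketch below evaluates (4.1) for \(l=0\) using `scipy`'s modified Bessel functions. All numerical values (natural units with \(e^{2}=m_{e}=1\), and the two sample choices of \(\theta\)) are our own illustrative assumptions, not fitted parameters.

```python
import numpy as np
from scipy.special import k0, k1

def V_CS(r, e2=1.0, theta=10.0, m_e=1.0, l=0):
    """Effective chernon-chernon potential of Eq. (4.1), natural units."""
    em = (e2 / (2.0 * np.pi)) * (1.0 - theta / m_e) * k0(theta * r)
    brace = l - (e2 / (2.0 * np.pi * theta)) * (1.0 - theta * r * k1(theta * r))
    return em + brace**2 / (m_e * r**2)

r = np.linspace(0.01, 2.0, 200)
print("min V for theta >> m_e:", V_CS(r, theta=10.0).min())   # negative: attraction
print("min V for theta << m_e:", V_CS(r, theta=0.1).min())    # positive: repulsion
```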
## 5 Inflation and Supergravity
We discuss briefly, and in simple terms, the question of different dimensions of CS theory and gravity. We assume that the universe at \(t\sim 0\) included a subspace of one dimension less than the manifold of general relativity \(M_{GR}\).3 A promising example of such a theory is Chern-Simons gauge theory defined in a smooth, compact three-manifold \(M_{CS}\subset M_{GR}\), having a gauge group \(G\), which is semi-simple and compact, and an integer parameter \(k\). The Chern-Simons field equations (3.1) require that \(A\) be flat [19]. The curvature tensor may be decomposed, in any spacetime dimension, into a curvature scalar \(R\), a Ricci tensor \(R_{\mu\nu}\), and a conformally invariant Weyl tensor \(C_{\mu\nu\rho}^{\ \ \ \sigma}\). In 1+2 dimensions the Weyl tensor vanishes identically, and the Riemann curvature tensor \(R_{\mu\nu\rho\sigma}\) is determined algebraically by the curvature scalar and the Ricci tensor. Therefore any solution of the vacuum Einstein field equations is flat and any solution of the field equations with a cosmological constant \(R_{\mu\nu}=2\Lambda g_{\mu\nu}\) has constant curvature. Physically, a 1+2 dimensional spacetime has no local degrees of freedom. There are no gravitational waves in the classical theory, and no gravitons in the quantum theory.
CS theory, defined earlier by the action (3.1), is a topological, quantizable gauge field theory [19]. The appropriate observables lead to vevs which correspond to topological invariants. The observables have to be gauge invariant. Secondly, they must be independent of the metric. Wilson loops verify these two properties [19], and they are therefore the key observables to be considered in Chern-Simons theory. Independence of the metric gives CS theories the desirable property of background independence. The CS interaction (3.1) is effective only at energy scales near and above \(\Lambda_{cr}\). This we interpret as chernons living (mod 3) on surfaces of spheres with diameter of the order of \(1/\Lambda_{cr}\). These composite states are the quarks and leptons of the standard model in 1+3 dimensions.
In summary, the potential (4.1) dominates over general relativity, and Coulomb repulsion, at distances below \(1/\Lambda_{cr}\) in the 1+2 dimensional manifold \(M_{CS}\) while at larger distances gravity is stronger.
At the beginning of inflation, \(t=t_{i}\sim 10^{-36}\) s, the universe is modeled by 1+3 dimensional classical gravity, and Chern-Simons theory as long as \(T\geq\Lambda_{cr}\). The Einstein-Hilbert action is
\[S=\int d^{4}x\sqrt{-g}\Big{(}\frac{1}{2}R-\frac{1}{2}g^{\mu\nu}\partial_{\mu} \phi\partial_{\nu}\phi-V(\phi)\Big{)} \tag{5.1}\]
The E-H action rapidly dominates, leading inflation to end at \(t_{R}\approx 10^{-32}\) s. Then the inflaton, which is actually a coherently oscillating homogeneous field, a Bose condensate, reaches the minimum of its potential. There it oscillates and decays to SM particles produced from chernons in the earlier phase of inflation. This causes the reheating phase, or the Bang, giving visible matter particles more kinetic energy than dark matter particles have.
The CMB measurements of inflation can be well described by a few simple slow-roll single scalar potentials in (5.1). One of the best fits to Planck data [25] is obtained by one of the very oldest models, the Starobinsky model [26]. The action is
\[S=\frac{1}{2}\int d^{4}x\sqrt{-g}\Big{(}R+\frac{R^{2}}{6M^{2}}\Big{)} \tag{5.2}\]
where \(M\ll M_{\rm Pl}\) is a mass scale. Current CMB measurements indicate scale invariant spectrum with a small tilt in scalar density \(n_{s}=0.965\pm 0.004\) and an upper limit for tensor-to-scalar ratio \(r<0.06\). These values are fully consistent with the Starobinsky model (5.2) which predicts \(r\simeq 0.003\).
The model (5.2) has the virtue of being based on gravity-only physics. Furthermore, the Starobinsky model has been shown to correspond to no-scale supergravity coupled to two chiral supermultiplets. Some obstacles have to be
sorted out first before reaching supergravity. To do that we follow the review by Ellis et al. [27].
The first problem with generic supergravity models with matter fields is that their effective potentials do not provide slow-roll inflation as needed. Secondly, they may have anti-de Sitter vacua instead of de Sitter ones. Thirdly, looking into the future, any new model of particles and inflation should preferably be consistent with some string model properties. These problems can be overcome by no-scale supergravity models. The no-scale property comes from their effective potentials having flat directions without a specific dynamical scale at the tree level. This has been derived from string models, of which supergravity is the low-energy effective theory.
Other authors have studied other implications of superstring theory to inflationary model building focusing on scalar fields in curved spacetime [28] and the swampland criteria [29, 30, 31]. These studies point out the inadequacy of slow roll single field inflation. We find it important to establish first a connection between the Starobinsky model and (two field) supergravity.
The bosonic supergravity Lagrangian includes a Hermitian function of complex chiral scalar fields \(\phi_{i}\) which is called the Kahler potential \(K(\phi^{i},\phi^{*}_{j})\). It describes the geometry of the model. In minimal supergravity (mSUGRA) \(K=\phi^{i}\phi^{*}_{i}\). Secondly the Lagrangian includes a holomorphic function called the superpotential \(W(\phi^{i})\). This gives the interactions among the fields \(\phi^{i}\) and their fermionic partners. \(K\) and \(W\) can be combined into a function \(G\equiv K+\ln|W|^{2}\). The bosonic Lagrangian is of the form
\[{\cal L}=-\frac{1}{2}R+K^{j}_{i}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{*}_ {j}-V-\frac{1}{4}\mbox{Re}(\mbox{f}_{\alpha\beta})\mbox{F}^{\alpha}_{\mu\nu} \mbox{F}^{\beta\mu\nu}-\frac{1}{4}\mbox{Im}(\mbox{f}_{\alpha\beta})\mbox{F}^{ \alpha}_{\mu\nu}\tilde{\mbox{F}}^{\beta\mu\nu} \tag{5.3}\]
where \(K^{j}_{i}\equiv\partial^{2}K/\partial\phi^{i}\partial\phi^{*}_{j}\) and \(\mbox{f}_{\alpha\beta}\) is the gauge kinetic function of the chiral fields \(\phi^{i}\). In mSUGRA the effective potential is
\[V(\phi^{i},\phi^{*}_{j})=e^{K}\big{[}|W_{i}+\phi^{*}_{i}W|^{2}-3|W|^{2}\big{]} \tag{5.4}\]
where \(W_{i}\equiv\partial W/\partial\phi^{i}\). It is seen in (5.4) that the last term with negative sign may generate AdS holes with depth \(-{\cal O}(m_{3/2}^{2}M_{\rm Pl}^{2})\) and cosmological instability. Solution to this and the slow-roll problem is provided by no-scale supergravity models. The simplest such model is the single field case with
\[K=-3\ln(T+T^{*}) \tag{5.5}\]
where \(T\) is a volume modulus in a string compactification.
The single field (5.5) model can be generalized to include matter fields \(\phi^{i}\) with the following Kahler potential
\[K=-3\ln(T+T^{*}-\frac{1}{3}|\phi_{i}|^{2}) \tag{5.6}\]
The no-scale Starobinsky model is now obtained with some extra work from the potential (5.4) and assuming \(\langle T\rangle=\frac{1}{2}\). For the superpotential the Wess-Zumino form is introduced [32]
\[W=\frac{1}{2}M\phi^{2}-\frac{1}{3}\lambda\phi^{3} \tag{5.7}\]
which is a function of \(\phi\) only. Then \(W_{T}=0\) and from \(V^{\prime}=|W_{\phi}|^{2}\) the potential becomes
\[V(\phi)=M^{2}\frac{|\phi|^{2}|1-\lambda\phi/M|^{2}}{(1-|\phi|^{2}/3)^{2}} \tag{5.8}\]
The kinetic terms in the scalar field Lagrangian can be written now
\[{\cal L}=(\partial_{\mu}\phi^{*},\partial_{\mu}T^{*})\Big{(}\frac{3}{(T+T^{*} -|\phi|^{2}/3)^{2}}\Big{)}\left(\begin{array}{cc}(T+T^{*})/3&-\phi/3\\ -\phi^{*}/3&1\end{array}\right)\left(\begin{array}{c}\partial^{\mu}\phi\\ \partial^{\mu}T\end{array}\right) \tag{5.9}\]
Fixing \(T\) to some value one can define the canonically normalized field \(\chi\)
\[\chi\equiv\sqrt{3}\tanh^{-1}\left(\frac{\phi}{\sqrt{3}}\right) \tag{5.10}\]
By analyzing the real and imaginary parts of \(\chi\) one finds that the potential (5.8) reaches its minimum for \({\rm Im}\,\chi=0\). As a function of \({\rm Re}\,\chi\), it has the same form as the Starobinsky potential of the conformally transformed Einstein-Hilbert action [33], namely
\[V=\frac{3}{4}M^{2}(1-e^{-\sqrt{2/3}\phi})^{2} \tag{5.11}\]
when
\[\lambda=\frac{M}{\sqrt{3}} \tag{5.12}\]
Most interestingly, \(\lambda/M\) has to be very accurately \(1/\sqrt{3}\), to better than one part in \(10^{4}\), for the potential to agree with measurements.
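For concreteness, the slow-roll predictions quoted above (\(n_{s}\approx 0.965\), \(r\simeq 0.003\)) follow from the potential (5.11) through the standard formulas \(n_{s}=1-6\epsilon+2\eta\) and \(r=16\epsilon\). The sketch below (with \(M_{\rm Pl}=1\); the choice of \(N=55\) e-folds and the bisection tolerances are our own illustrative assumptions) reproduces \(n_{s}\approx 0.96\) and \(r\) of a few \(\times 10^{-3}\).

```python
import numpy as np
from scipy.integrate import quad

# Slow roll for V = (3/4) M^2 (1 - exp(-sqrt(2/3) x))^2, units M_Pl = 1 (M drops out)
a = np.sqrt(2.0 / 3.0)
V   = lambda x: (1.0 - np.exp(-a * x)) ** 2
dV  = lambda x: 2.0 * a * np.exp(-a * x) * (1.0 - np.exp(-a * x))
d2V = lambda x: 2.0 * a**2 * np.exp(-a * x) * (2.0 * np.exp(-a * x) - 1.0)

def observables(x):
    eps = 0.5 * (dV(x) / V(x)) ** 2
    eta = d2V(x) / V(x)
    return 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps       # (n_s, r)

x_end = 0.94                                             # eps(x_end) ~ 1: end of slow roll
N_of = lambda x_star: quad(lambda x: V(x) / dV(x), x_end, x_star)[0]

lo, hi = 1.0, 10.0                                       # bisect for N = 55 e-folds
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if N_of(mid) < 55.0 else (lo, mid)

ns, r = observables(lo)
print(f"n_s = {ns:.4f}, r = {r:.4f}")                    # ~ 0.962 and ~ 0.004
```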
This is briefly the basic mechanism behind inflation in the Wess-Zumino mSUGRA model, which foreruns the reheating of visible matter. But only the particles containing \(m\) chernons, i.e. the visible matter, get reheated. The dark sector goes through reheating unaffected and is distributed smoothly all over space. The quantum fluctuations of the dark fields are enhanced by gravitation and provide a clumpy underlay for visible matter to form objects of various sizes, from stars to large scale structures.
## 6 Sakharov conditions
Sakharov suggested [34] three necessary conditions that must be satisfied to produce matter and antimatter at different rates. They are (i) baryon number B violation, (ii) C-symmetry and CP-symmetry violation and (iii) interactions out of thermal equilibrium.
Baryon number violation is clearly needed to reach baryon asymmetry. This is valid in our model because baryon number is not defined conventionally. C-symmetry violation is needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. This is discussed in section 8. CP-symmetry violation is required because otherwise equal numbers of left-handed
baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. The observed pattern of CP-violation [1] remarkably confirms the Cabibbo-Kobayashi-Maskawa (CKM) description of three fermionic generations of particles [35, 36]. CP-violation phenomenology is discussed in detail in [37, 38]. Our present one-generation "skeleton" model cannot satisfy this condition, but in principle, by completing the model and deriving the low energy limit, it could be explained. In the SM, the CKM model gives an explanation of why the breaking is so small, despite the phase associated with it being of order one. Thirdly, interactions are out of thermal equilibrium in a rapidly expanding universe.
## 7 Baryon asymmetry
We now examine the potential (4.1) in the early universe. Consider a large number of groups of twelve chernons, each group consisting of four \(m^{+}\), four \(m^{-}\) and four \(m^{0}\) particles. Any such group may form only an electron and a proton (a hydrogen atom H), only a positron and an antiproton (\(\bar{\rm H}\)), or some combination of both H and \(\bar{\rm H}\) atoms [7, 8]. This is achieved by arranging the chernons appropriately (mod 3) using table 2. This way the transition from a matter-antimatter symmetric universe to a matter-antimatter asymmetric one happens straightforwardly.
Because the Yukawa force (4.1) is the strongest force, the light \(e^{-}\), \(e^{+}\) and the neutrinos are expected to form first at the very onset of inflation. To obey the condition \(B-L=0\) of baryon-lepton balance and to sustain charge conservation, for one electron made of three chernons, nine other chernons have to be created simultaneously; these form a proton. The same holds for positrons. One neutrino requires a neutron to be created. The \(m^{0}\) in addition carries color, enhancing neutrino formation. This makes neutrinos different from the other leptons and the quarks.
Later, when the protons were formed, because the chernons had the freedom to choose whether they become constituents of H or \(\bar{\rm H}\), there are regions of space of various sizes dominated by H or \(\bar{\rm H}\) atoms. Since the universe is the largest statistical system, it is expected that there is only a very slight excess of H atoms (or \(\bar{\rm H}\) atoms, which only means a charge sign redefinition) remaining after equal amounts of H and \(\bar{\rm H}\) atoms have annihilated. The ratio \(n_{B}/n_{\gamma}\) is thus predicted to be \(\ll 1\). The ratio \(n_{B}/n_{\gamma}\) is a multiverse-like concept.
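The statistical argument can be illustrated by a toy Monte-Carlo (a minimal sketch; the number of groups \(N\) is an arbitrary illustrative choice of ours). If each group independently ends up as H or \(\bar{\rm H}\) with probability \(1/2\), the excess surviving pairwise annihilation scales as \(\sqrt{N}\), so the relative excess is \(\sim N^{-1/2}\ll 1\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**8                          # number of 12-chernon groups (illustrative)
n_H = rng.binomial(N, 0.5)         # each group forms H or Hbar with probability 1/2
n_Hbar = N - n_H
excess = abs(n_H - n_Hbar)         # survivors after pairwise annihilation
print(f"excess = {excess}, excess/N = {excess/N:.1e} "
      f"(expected ~ N**-0.5 = {N**-0.5:.1e})")
```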
In this scenario, fermionic dark matter has no mechanism to become "baryon" asymmetric like visible matter. Therefore we expect that part of the fermionic dark matter has annihilated into bosonic dark matter. Secondly, we predict that there should exist both dark matter and anti-dark matter clumps attracting visible matter in the universe. Collisions of anti-dark matter and dark matter celestial bodies would give us a new source of wide-spectrum gravitational wave production (the lunar mass alone is \(\sim 10^{49}\) GeV).
## 8 Nucleon isospin violation
The topological mass works in favor of a heavier d-quark and neutron [39], in qualitative agreement with lattice calculations. Care must be taken not to double count the d/u-quark mass difference between the CS-QED\({}_{3}\) calculations and the QCD/QED lattice results. It is plausible that the topological terms in the action (3.2) are very small on scales \(\ll\Lambda_{cr}\) in 1+3 dimensions, and therefore only QCD/QED contribute to the mass difference.
## 9 Conclusions and Outlook
Above \(\Lambda_{cr}\) the fermionic chernons are C symmetric, with equal masses and charges placed symmetrically around zero: {-1/3, 0, 1/3}. Below the transition energy \(\Lambda_{cr}\), fractional charge chernon composites form quarks, while charge zero and one states are leptons, as shown in table 2. These composite states behave to a good approximation like pointlike particles: the composite radius is of the order of \(10^{-31}\) m, corresponding to a photon energy of \(\Lambda_{cr}\sim 10^{16}\) GeV. Below this energy the standard model is obtained [4, 5, 6, 8], and photons lose their resolving power to differentiate the Yukawa-trapped chernons inside SM particles.
The main results of this note are the Chern-Simons-QED\({}_{3}\) extension of the Wess-Zumino Lagrangian (2.1), (2.2) and the viable mechanism for baryon asymmetry with the ratio \(n_{B}/n_{\gamma}\ll 1\). Large scale cosmological simulations are needed to obtain detailed information of the properties of the model proposed above. The central experimental test of our scenario is finding no broken supersymmetry (MSSM) superpartners [40] in the universe.
On the theoretical side, extensive mathematical work is needed. But the situation is interesting. When the Chern-Simons, or Kodama, state
\[\psi(A)={\cal N}\exp\Big{(}-\frac{3}{2l_{\rm Pl}^{2}\Lambda}Y_{\rm CS}\Big{)} \tag{9.1}\]
where \(l_{\rm Pl}\) is the Planck length and \(\Lambda\) the cosmological constant and
\[Y_{\rm CS}=\int A^{I}dA^{I}+\frac{1}{3}\epsilon_{IJK}A^{I}A^{J}A^{K} \tag{9.2}\]
is reduced to mini-superspace it becomes, with some reservations, the Fourier dual of the Hartle-Hawking wave function of the universe [41, 42, 43].
Another interesting matter, though likewise troubled, is the possible connection of the Kodama state to quantum gravity [44].
## Appendix A Chernon-particle correspondence
Table 2 gives the chernon content of SM matter and a proposal for dark matter.
|
2308.13450 | Relativistic-kinematic approach to the hydrogen atom. A possible family
of two-particle optically-inactive bound states | The invariant mass of free particles is used to derive a bound-state equation
for the hydrogen atom at rest. This equation has the well-known solutions for
the single-particle states. Existence of two-particle bound states, for which
the electron and proton have different scales of intraatomic motions, is
predicted. Two two-particle states are obtained numerically and discussed in
detail. Radiative processes involving the two-particle bound state, should
result in simultaneous changing the electron and the proton states and occur
through the simultaneous two-photon emission. These processes will be extremely
improbable, in comparison with electronic two-photon transitions, since they
will be additionally suppressed by the large proton mass. | A. I. Agafonov | 2023-07-25T08:28:15Z | http://arxiv.org/abs/2308.13450v1 | Relativistic-kinematic approach to the hydrogen atom. A possible family of two-particle optically-inactive bound states
###### Abstract
The invariant mass of free particles is used to derive a bound-state equation for the hydrogen atom at rest. This equation has the well-known solutions for the single-particle states. Existence of two-particle bound states, for which the electron and proton have different scales of intraatomic motions, is predicted. Two two-particle states are obtained numerically and discussed in detail. Radiative processes involving the two-particle bound state, should result in simultaneous changing the electron and the proton states and occur through the simultaneous two-photon emission. These processes will be extremely improbable, in comparison with electronic two-photon transitions, since they will be additionally suppressed by the large proton mass.
Keywords: hydrogen atom, bound-state equation, two-particle wave function, radiative transitions
## 1 Introduction
The hydrogen atom is one of the most studied quantum objects. At present, the optical frequencies of the atom have been measured with very high accuracy [1]. This gives a unique opportunity to use spectroscopy methods for the experimental verification of atomic level theories.
Both the non-relativistic and relativistic theories for hydrogen bound states are well known. These theories have one thing in common: the hydrogen atom, consisting of the two interacting particles, is described by the single-particle bound states.
In non-relativistic quantum mechanics, the solution of the Schrodinger equation for the atom is found using the method of variable separation [2]. Then, in the center-of-mass
frame, the problem is reduced to the motion of a particle with the electron charge and the reduced mass in the Coulomb field. The eigenstates of the transformed equation are the single-particle states.
In quantum electrodynamics, bound two-body systems are studied using the Bethe-Salpeter equation [3, 4]. Its exact solution for hydrogen is difficult to find even in the ladder approximation for the interaction function [5]. Therefore, the electron motion in the external field of the fixed proton is investigated [4]. In the frame of reference associated with a fixed proton, the bispinor wave function of the electron depends only on its radius-vector. The single-particle bound states for the Dirac equation with the external field are considered as the hydrogen states.
It is usually accepted that in the Hamiltonian \(H=H_{0}+V\), \(H_{0}\) is the operator of the energy of free, non-interacting particles, and \(V\) is the interaction between them. In the works [6, 7, 8, 9, 10] a system of two particles with the same mass was studied. The operator \(H_{0}\) was represented as the sum of the relativistic energies of free particles whose momenta are equal in absolute value. As a result, the bound-state spectrum of the Hamiltonian \(2\sqrt{p^{2}+m^{2}}+V\) was studied. That is, the two-particle problem is again reduced to the one-particle problem.
The well-known expansion of the function \(\frac{1}{|{\bf r}_{e}-{\bf r}_{p}|}\) (\({\bf r}_{e}\) and \({\bf r}_{p}\) are the radius-vectors of the electron and the proton, respectively) in spherical functions [11] leads us to the following reasoning: in the reference frame in which the atom is at rest, the hydrogen atom could be represented by bound states which depend on the absolute values of the particles' radius-vectors and the angle between them. These states are the two-particle bound states. They can be called electron-proton bound states.
In the present paper, we propose a relativistic-kinematic method to study the hydrogen bound states. The method is based on the fact that in relativistic mechanics the total energy and the total momentum of free-particle systems are additive quantities, but their total mass is not additive [12]. However, the mass of the system is invariant; it does not change when moving from one frame of reference to another. This circumstance makes it possible to use the invariant mass to construct a bound-state equation. This equation allows us to study two-particle wave functions that take into account the particle mass difference, which results in significantly different spatial scales of the internal motion of the particles. Two wave functions from this family are obtained by numerical methods. Radiative processes which involve the electron-proton bound states are discussed.
## 2 The bound-state equation
In relativistic mechanics, the total energy and the total momentum of the free particles system are additive [12]:
\[E_{s}=\sqrt{m^{2}+p^{2}}+\sqrt{M^{2}+q^{2}} \tag{1}\]
and
\[{\bf P}_{s}={\bf p}+{\bf q}. \tag{2}\]
Here \(m\) and \({\bf p}\) are the electron mass and momentum, \(M\) and \({\bf q}\) are the proton mass and momentum.
The mass of the system is not additive. However, the mass defined as:
\[m_{s}^{2}=E_{s}^{2}-{\bf P}_{s}^{2}, \tag{3}\]
is invariant in all frames of reference.
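This invariance is easily verified numerically; in the sketch below (units \(c=1\); the momenta and boost velocities are illustrative choices of ours) the same \(m_{s}\) is obtained for an electron-proton pair in several boosted frames.

```python
import numpy as np

def boost_z(E, p, beta):
    """Lorentz boost of a four-momentum (E, p) along z with velocity beta (c = 1)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    Eb = g * (E - beta * p[2])
    pb = p.copy()
    pb[2] = g * (p[2] - beta * E)
    return Eb, pb

m, M = 0.511e6, 938.272e6                 # electron and proton masses, eV
p = np.array([1.0e3, -2.0e3, 0.5e3])      # illustrative electron momentum, eV
q = np.array([-4.0e4, 1.0e4, 2.0e4])      # illustrative proton momentum, eV
Ee, Ep = np.sqrt(m**2 + p @ p), np.sqrt(M**2 + q @ q)

for beta in (0.0, 0.3, 0.9):
    E1, p1 = boost_z(Ee, p, beta)
    E2, q1 = boost_z(Ep, q, beta)
    Es, Ps = E1 + E2, p1 + q1
    print(f"beta = {beta}: m_s = {np.sqrt(Es**2 - Ps @ Ps):.3f} eV")
# the printed m_s agrees in every frame (up to floating-point rounding)
```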
It is important that the energy of the system at rest is:
\[H_{0}=m_{s}. \tag{4}\]
Substituting (3) into (4) and considering the interaction between the particles, we obtain the Schrodinger-like equation:
\[\Big{[}\sqrt{(m+M)^{2}+2mM\Big{[}\Big{(}1+\frac{\hat{\bf p}^{2}}{m^{2}}\Big{)} ^{1/2}\Big{(}1+\frac{\hat{\bf q}^{2}}{M^{2}}\Big{)}^{1/2}-1\Big{]}-2{\bf p}{ \bf q}}-\frac{\alpha(1-\hat{\bf v}_{e}\hat{\bf v}_{p})}{|{\bf r}_{e}-{\bf r}_{ p}|}\Big{]}\psi=E\psi. \tag{5}\]
Here the interaction of the particles through the vector potential is taken into account, \(\hat{\bf v}_{e}\) and \(\hat{\bf v}_{p}\) are the electron and the proton velocity operators.
We present the energy \(E={\cal E}+m+M\) with \({\cal E}\propto\alpha^{2}m\). In the expansion of the left-hand side of Eq. (5) we restrict ourselves to the term of order of \(p^{4}/m^{3}\). For the hydrogen states, the interaction through the vector potential can be omitted since the velocities of the particles are small compared to the speed of light in vacuum. Then we obtain:
\[\left[\frac{1}{2(m+M)}\Big{(}\sqrt{\frac{M}{m}}\hat{\bf p}-\sqrt{\frac{m}{M}} \hat{\bf q}\Big{)}^{2}-\frac{\alpha}{|{\bf r}_{e}-{\bf r}_{p}|}-\frac{M^{2}(M+3 m)\hat{\bf p}^{4}}{8m^{3}(M+m)^{3}}\right]\psi({\bf r}_{e},{\bf r}_{p})={\cal E}\psi. \tag{6}\]
The term proportional to \(p^{4}\) on the left side of Eq. (6) determines the fine splitting of the energy levels. Given \(m<<M\), the term reduces to \(p^{4}/8m^{3}\), which leads to the well-known expression for the fine structure. Now we omit this term. Then Eq. (6) is reduced to:
\[\left[\frac{1}{2(m+M)}\Big{(}\sqrt{\frac{M}{m}}\hat{\bf p}-\sqrt{\frac{m}{M}} \hat{\bf q}\Big{)}^{2}-\frac{\alpha}{|{\bf r}_{e}-{\bf r}_{p}|}\right]\psi({ \bf r}_{e},{\bf r}_{p})={\cal E}\psi({\bf r}_{e},{\bf r}_{p}). \tag{7}\]
One can transform Eq. (7) in terms of new independent variables:
\[{\bf R}=\frac{m{\bf r}_{e}+M{\bf r}_{p}}{m+M},\hskip 28.452756pt{\bf r}={\bf r }_{e}-{\bf r}_{p}, \tag{8}\]
Then, it is easy to obtain that the transformed equation is reduced to the Schrodinger equation:
\[\Big{(}-\frac{\hbar^{2}}{2\mu}\Delta_{\bf r}-\frac{\alpha}{r}\Big{)}\psi({\bf r })={\cal E}\psi({\bf r}). \tag{9}\]
Here \(\mu=\frac{mM}{m+M}\) is the reduced mass.
Note that in Eq. (9) the term related to the center-of-mass motion is absent. This is due to the fact that Eq. (7) was obtained in the reference frame in which the atom is at rest. In this case, it makes no sense to use the replacement (8). Instead, we look for solutions of the original equation (7) which depend on the independent variables \({\bf r}_{e}\) and \({\bf r}_{p}\).
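The cancellation of the center-of-mass gradient can be confirmed symbolically. The sketch below (using `sympy`) inserts the chain rule for the substitution (8), \(\nabla_{e}=\frac{m}{m+M}\nabla_{R}+\nabla_{r}\) and \(\nabla_{p}=\frac{M}{m+M}\nabla_{R}-\nabla_{r}\), into the momentum combination of Eq. (7).

```python
import sympy as sp

m, M = sp.symbols('m M', positive=True)
# chain-rule coefficients of grad_R in grad_e and grad_p, from the substitution (8)
cR_e, cR_p = m / (m + M), M / (m + M)
# combination sqrt(M/m) p - sqrt(m/M) q: coefficients of grad_R and grad_r
coef_R = sp.sqrt(M / m) * cR_e - sp.sqrt(m / M) * cR_p
coef_r = sp.sqrt(M / m) + sp.sqrt(m / M)
mu = m * M / (m + M)
print(sp.simplify(coef_R))                                     # 0: no COM term
print(sp.simplify(coef_r**2 / (2 * (m + M)) - 1 / (2 * mu)))   # 0: reduced-mass term
```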
## 3 Bound-state integral equation
The differential equation (7) corresponds to the following integral equation:
\[\psi({\bf r}_{e},{\bf r}_{p})=-\alpha\int d{\bf r}^{\prime}_{e}\int d{\bf r}^{\prime}_{p}\int\frac{d{\bf p}}{(2\pi)^{3}}\int\frac{d{\bf q}}{(2\pi)^{3}}e^{i{\bf p}({\bf r}_{e}-{\bf r}^{\prime}_{e})+i{\bf q}({\bf r}_{p}-{\bf r}^{\prime}_{p})}G({\bf p},{\bf q};{\cal E})\frac{1}{|{\bf r}^{\prime}_{e}-{\bf r}^{\prime}_{p}|}\psi({\bf r}^{\prime}_{e},{\bf r}^{\prime}_{p}), \tag{10}\]
where, by analogy with the well-known structure of bound-state integral equations, the function \(G\) should be considered as the two-particle propagator in the momentum space:
\[G({\bf p},{\bf q};{\cal E})=\frac{1}{{\cal E}-\frac{1}{2(m+M)}\Big{(}\sqrt{ \frac{M}{m}}{\bf p}-\sqrt{\frac{m}{M}}{\bf q}\Big{)}^{2}}. \tag{11}\]
In the momentum space Eq. (10) takes the form:
\[\psi({\bf p},{\bf q})=-\frac{\alpha}{2\pi^{2}}G({\bf p},{\bf q};{\cal E})\int \frac{d{\bf k}}{({\bf p}-{\bf k})^{2}}\psi({\bf k},{\bf p}+{\bf q}-{\bf k}) \tag{12}\]
In Eq. (12) the binding energy of the two-particle system is given by:
\[{\cal E}=T+U, \tag{13}\]
where the average kinetic energy is:
\[T=<\psi({\bf p},{\bf q})|\frac{1}{2(m+M)}\Big{(}\sqrt{\frac{M}{m}}{\bf p}- \sqrt{\frac{m}{M}}{\bf q}\Big{)}^{2}|\psi({\bf p},{\bf q})> \tag{14}\]
and the average potential energy is
\[U=-<\psi({\bf p},{\bf q})|\frac{\alpha}{2\pi^{2}}\int\frac{d{\bf k}}{({\bf p}-{\bf k })^{2}}|\psi({\bf k},{\bf p}+{\bf q}-{\bf k})>. \tag{15}\]
The wave function is normalized:
\[<\psi({\bf p},{\bf q})|\psi({\bf p},{\bf q})>=1 \tag{16}\]
In the investigated state, the kinetic energies of the proton and the electron are comparable in order of magnitude. The electron kinetic energy must be of the order of \(\alpha^{2}m\) and, correspondingly, the characteristic electron momentum is \(<p>\propto\alpha m\). Then, the characteristic proton momentum should be of the order of \(<q>\propto\alpha\sqrt{mM}\). It means that in the supposed state, \(<q>\,\gg\,<p>\). In the stationary steady state, the average momentum vectors of the particles must be zero, \(<{\bf p}>=0\) and \(<{\bf q}>=0\).
The value \(T\) given by (14) should be considered as the total kinetic energy of the state. Attention is drawn to the symmetry of the equation with respect to the particle masses. But this symmetry leads to a surprising result: the proton momentum contributes almost nothing to the electron-proton state energy. Indeed, for \(p\simeq q\sqrt{m/M}\) we have \(p>>q\frac{m}{M}\), and in Eq. (14) the term \(\sqrt{\frac{m}{M}}{\bf q}\) can be omitted.
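Plugging in numbers makes the hierarchy explicit (a minimal sketch; eV units, with the scales \(<p>\sim\alpha m\) and \(<q>\sim\alpha\sqrt{mM}\) taken from the estimates above).

```python
alpha, m, M = 1.0 / 137.036, 0.511e6, 938.272e6   # fine structure constant; masses in eV
p, q = alpha * m, alpha * (m * M) ** 0.5          # characteristic momenta
t1 = (M / m) ** 0.5 * p                           # sqrt(M/m) p term of Eq. (14)
t2 = (m / M) ** 0.5 * q                           # sqrt(m/M) q term of Eq. (14)
print(f"sqrt(M/m) p = {t1:.3e} eV, sqrt(m/M) q = {t2:.3e} eV, ratio = {t2/t1:.3e}")
# the ratio equals sqrt(m/M) ~ 0.023: the proton term is negligible in Eq. (14)
```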
Below, we calculate the energy using the original expression (14) with its three terms; their values will be given. Despite the proton's large momentum, the near-absence of the proton contribution to the kinetic energy helps to confine the particles in the electron-proton bound state. Of course, for this state, the virial theorem does not hold.
## 4 Numerical solution of Eq. (12)
In the investigated state, the wave function depends on the absolute values of the particles' momenta and the angle between them, \(\psi(p,q,\theta)\). The energy of the state is \({\cal E}=-\gamma\,\alpha^{2}m\), where \(\gamma>0\) is a numerical factor. We introduce \(\eta=\sqrt{\frac{m}{M}}\) and dimensionless variables: \(p=\alpha mx\), \(q=\alpha\sqrt{mM}y\) and \(k=\alpha mz\). Then Eq. (12) for the wave function is rewritten as:
\[\psi(x,y,\theta)=\frac{1}{\pi^{2}}\frac{1}{2\gamma+\frac{1}{1+\eta^{2}}({\bf x }-\eta{\bf y})^{2}}\int\frac{d{\bf z}}{({\bf z}-{\bf x})^{2}}\psi(z,|{\bf y}+ \eta{\bf x}-\eta{\bf z}|,\theta_{s}). \tag{17}\]
Here \(\theta_{s}\) is the angle between \({\bf z}\) and \({\bf y}+\eta{\bf x}-\eta{\bf z}\).
Eq. (13) for the energy in units \(\alpha^{2}m\) is rewritten as:
\[\gamma=-\frac{1}{2(1+\eta^{2})}<\psi(x,y,\theta)|(x^{2}-2\eta xycos\theta+ \eta^{2}y^{2})|\psi(x,y,\theta)>+\]
\[\frac{1}{2}<\psi(x,y,\theta)|\int\frac{d{\bf z}}{({\bf z}-{\bf x})^{2}}\psi(z,|{\bf y}+\eta{\bf x}-\eta{\bf z}|,\theta_{s})>. \tag{18}\]
Without loss of generality, we assume that the vector \({\bf y}+\eta{\bf x}\) is directed along the \(z-\)axis. Then Eq. (17) takes the form:
\[\psi(x,y,\theta)=\frac{2}{\pi}\frac{1}{2\gamma+\frac{1}{1+\eta^{2}}(x^{2}-2\eta xy\cos\theta+\eta^{2}y^{2})}\int_{0}^{\infty}z^{2}dz\]
\[\int_{0}^{\pi}\frac{\psi(z,y_{s},\theta_{s})\sin\theta_{z}d\theta_{z}}{\sqrt{ (z^{2}+x^{2}-2xz\cos\theta_{z}\cos\theta_{x})^{2}-4x^{2}z^{2}\sin^{2}\theta_{ z}\sin^{2}\theta_{x}}}. \tag{19}\]
Here the following functions are introduced:
\[y_{s}=\sqrt{D^{2}-2\eta Dz\cos\theta_{z}+\eta^{2}z^{2}} \tag{20}\]
and
\[\cos\theta_{s}=\frac{D\cos\theta_{z}-\eta z}{y_{s}}. \tag{21}\]
The angle \(\theta_{z}\) is the polar angle of the vector \({\bf z}\), and the function \(D=|{\bf q}+{\bf p}|\) has the form:
\[D=\sqrt{y^{2}+2\eta xy\cos\theta+\eta^{2}x^{2}} \tag{22}\]
in the dimensionless variables.
The angle \(\theta_{x}\) is the polar angle of the vector \({\bf p}\). This angle is related to the angle \(\theta\) as:
\[\sin\theta_{x}=\frac{y\sin\theta}{D}. \tag{23}\]
## 5 Numerical results
An iteration method is used for solving Eq. (19). The function \(\psi(x,y,\theta)\) is represented by a matrix \(\psi_{m}(i,j,k)\) of the dimension \(101\times 101\times 121\). Here \(m\) is the iteration number. An important point is the choice of the initial matrix \(\psi_{0}(i,j,k)\). Below we present the results for two different choices. First, we used:
\[\psi_{0}(i,j,k)=\frac{1}{(1+a_{1}x(i)^{2})^{2}(1+a_{2}y(j)^{2})^{2}}, \tag{24}\]
where \(a_{1}<a_{2}\). The matrix (24) has no zeros, and this choice is related to the \(1s\) momentum-space wave function. After calculating the matrix \(\psi_{1}(i,j,k)\), the function for the next iteration is \(\psi_{m+1}(i,j,k)=\xi\psi_{m}(i,j,k)+\sqrt{1-\xi^{2}}\psi_{m-1}(i,j,k)\). Here \(m=1,2,...N_{it}\), \(\xi=0.97\).
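A minimal numerical sketch of this damped iteration scheme is given below. The function `apply_kernel`, which should evaluate the integral on the right-hand side of Eq. (19) on the \(101\times 101\times 121\) grid using Eqs. (20)-(23), is left as an identity placeholder so the sketch runs; the grid ranges and the constants \(a_{1},a_{2}\) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

NX, NY, NTH = 101, 101, 121            # grid for (x, y, theta), as in the text
x = np.linspace(0.0, 10.0, NX)         # illustrative momentum grids (assumed ranges)
y = np.linspace(0.0, 10.0, NY)
a1, a2 = 0.5, 1.0                      # illustrative constants with a1 < a2

def apply_kernel(psi):
    """Placeholder for the RHS integral of Eq. (19): a real implementation
    quadratures over z and theta_z and interpolates psi at (z, y_s, theta_s)
    via Eqs. (20)-(23). Returned unchanged here so the sketch runs."""
    return psi

# Initial matrix, Eq. (24): a nodeless, 1s-like momentum-space profile.
psi_prev = 1.0 / ((1.0 + a1 * x[:, None, None] ** 2) ** 2
                  * (1.0 + a2 * y[None, :, None] ** 2) ** 2)
psi_prev = np.broadcast_to(psi_prev, (NX, NY, NTH)).copy()

xi, n_iter = 0.97, 200                 # mixing parameter and iteration budget
psi = apply_kernel(psi_prev)           # psi_1
for m in range(1, n_iter):
    # Damped update from the text: psi_{m+1} = xi*psi_m + sqrt(1-xi^2)*psi_{m-1},
    # fed back through the kernel to produce the next iterate.
    blended = xi * psi + np.sqrt(1.0 - xi ** 2) * psi_prev
    psi_prev, psi = psi, apply_kernel(blended)
```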
To demonstrate the convergence of the iterative procedure, Fig. 1 shows the iteration dependence of the energy (18). The energy clearly tends to a definite value as the iteration number increases. This value is \({\cal E}=-0.364\alpha^{2}m\). Similar curves were obtained for the iteration dependence of the average values of \(<p^{2}>\) and \(<q^{2}>\). Their asymptotic values are \(<p^{2}>=0.383\alpha^{2}m^{2}\) and \(<q^{2}>=0.204\alpha^{2}mM\). Hence, the electron average kinetic energy is \(T_{e}=0.191\alpha^{2}m\) and that of the proton is \(T_{p}=0.102\alpha^{2}m\). But, as discussed above, this proton energy practically does not contribute to the energy (18). The motion of the proton leads to two contributions: \(E_{ep}=-\frac{\eta}{1+\eta^{2}}<xy\cos\theta>\) and \(E_{p}=\frac{\eta^{2}}{2(1+\eta^{2})}<y^{2}>\). Numerically, \(E_{ep}=-0.68\times 10^{-3}\alpha^{2}m\) and \(E_{p}=0.56\times 10^{-4}\alpha^{2}m\).
The average potential energy \(U\), given by Eq. (15), is \(U=-0.556\alpha^{2}m\). It means that
\[<\psi({\bf r}_{e},{\bf r}_{p})|\frac{1}{|{\bf r}_{e}-{\bf r}_{p}|}|\psi({\bf r}_{e},{\bf r}_{p})>=0.556a_{B}^{-1}. \tag{25}\]
This value is intermediate between \(<1s|r^{-1}|1s>=a_{B}^{-1}\) and \(<2s|r^{-1}|2s>=0.25a_{B}^{-1}\).
It turned out that with the choice (24), the calculated function \(\psi(x,y,\theta)\) depends only weakly on the angle \(\theta\). The wave function for the angle \(\theta=0\) is shown in Fig. 2. It has no zeros, just like the ground-state wave function of the hydrogen atom. The energy of this electron-proton state, \({\cal E}=-0.364\alpha^{2}m\), is greater than the ground-state energy \({\cal E}_{1s}=-0.5\alpha^{2}\mu\), but is less than the energy of the first excited state \({\cal E}_{2s}=-0.125\alpha^{2}\mu\).
Figure 1: The iteration dependence of the energy of the electron-proton bound state.

There are significant differences between the momentum-space wave function shown in Fig. 2 and the \(1s\) wave function. In the center-of-mass system, \({\bf p}+{\bf q}=0\) for the well-known bound states, and the average momenta of the particles are the same. For the \(1s\) state, \(<p>=<q>=0.849\alpha m\). Therefore, the proton kinetic energy is 1836 times less than that of the electron.
For the found state, the sum \(\mathbf{p}+\mathbf{q}\) is uncertain, \(<\mathbf{p}>=0\) and \(<\mathbf{q}>=0\), and the average values are \(<p>=0.566\alpha m\) and \(<q>=0.381\alpha\sqrt{mM}\). Hence, the kinetic energies of the proton and the electron are of the same order.
The wave function \(\psi(x,y,\theta)\) depends on the angle \(\theta\). In order to elucidate this dependence, we calculate the electron kinetic energy \(T_{e}(\theta)\) at a fixed angle. We obtain \(T_{e}(0)=0.201\alpha^{2}m\), \(T_{e}(\frac{\pi}{2})=0.184\alpha^{2}m\) and \(T_{e}(\pi)=0.189\alpha^{2}m\), with the angle-averaged value \(T_{e}=0.191\alpha^{2}m\). Overall, the relative change of the kinetic energy is about 5 percent, so the angle dependence of the wave function is relatively weak.
The coordinate-space wave function \(\psi(r_{e},r_{p})\) which was obtained by the Fourier transform of the function \(\psi(p,q,\theta=0)\), is demonstrated in Fig. 3. According to this function, the electron and the proton have significantly different scales of intraatomic motions. We calculated the average radius-vectors of the electron and the proton: \(<r_{e}>=2.45a_{B}\) and \(<r_{p}>=9.805\sqrt{\frac{m}{M}}a_{B}=0.229a_{B}\). These scales of internal motions differ by an order of magnitude.
Now we present numerical results for the second choice of the initial matrix. It has multiple zeros:
\[\psi_{0}(i,j,k)=\cos(b_{1}x(i)-b_{2}y(j))/((1+a_{1}x(i)^{2})^{2}(1+a_{2}y(j)^{2})^{2}) \tag{26}\]
with constants \(b_{1}<b_{2}\).
The course of the iterative procedure was similar to that discussed above. The momentum-space wave function is shown in Fig. 4. It has zeros, and, hence, this state is an excited one. The energy of this state is \({\cal E}=-0.329\alpha^{2}m\), which is greater than that of the state shown in Fig. 2. We obtained \(<p^{2}>=0.280\alpha^{2}m^{2}\) and \(<q^{2}>=0.156\alpha^{2}mM\). Accordingly, the average kinetic energy of the electron is \(T_{e}=0.140\alpha^{2}m\) and that of the proton is \(0.078\alpha^{2}m\). The latter does not make a significant contribution to \(\mathcal{E}\) for the reason discussed above.
Figure 2: The wave function \(\psi(x,y,\theta=0)\) satisfying Eq. (19) with the initial matrix (24).

Figure 3: The Fourier transform of the wave function \(\psi(p,q,\theta=0)\) shown in Fig. 2.
The average potential energy \(U\), given by Eq. (15), is \(U=-0.469\alpha^{2}m\). It means that
\[<\psi(\mathbf{r}_{e},\mathbf{r}_{p})|\frac{1}{|\mathbf{r}_{e}-\mathbf{r}_{p}|}| \psi(\mathbf{r}_{e},\mathbf{r}_{p})>=0.469a_{B}^{-1}. \tag{27}\]
Thus, for this two-particle state, the internal motion of particles takes place in a larger space region than that given by Eq. (25).
The coordinate-space wave function \(\psi(r_{e},r_{p})\) was obtained by the Fourier transform of the function shown in Fig. 4. It is demonstrated in Fig. 5. According to this function, the electron and the proton have significantly different scales of intraatomic motions. We calculated the average radius-vectors of the electron and the proton: \(<r_{e}>=2.89a_{B}\) and \(<r_{p}>=0.267a_{B}\).
## 6 Conclusion: Optically-inactive bound states
In conclusion, we would like to discuss the following question: if the two-particle states obtained above exist in the hydrogen atom, why have they not been detected earlier? In this regard, spectroscopic studies are the most important.
Regardless of whether a two-particle bound state is the final or the initial state, any optical process must be accompanied by a change in the internal motions of both the electron and the proton. That is, during a radiative transition, the electron and the proton change their states simultaneously. But these particle states are not independent. They can only change simultaneously because they are uniquely determined by the two-particle bound state.
Therefore, any radiative process involving a two-particle bound state can be expected to be a transition with simultaneous emission of two photons. One of them is emitted by the electron, and the second is emitted by the proton. It should be noted that in the hydrogen atom, the most probable \(2s\to 1s\) transition is the one with simultaneous emission of two photons by the electron (see [4] and references therein). However, the probability of this process is quite small compared to single-photon electric-dipole transitions: it is about \(8\ s^{-1}\). The radiative process under discussion should have a lower probability by many orders of magnitude, since the two-photon transition will be suppressed by the large proton mass. Therefore, these two-particle states can be optically inactive.

Figure 4: The momentum-space wave function satisfying Eq. (19) with the initial condition given by the matrix (26).

Figure 5: The coordinate-space wave function which corresponds to the function shown in Fig. 4.
Note also that the relativistic-kinematic approach presented above is easily generalized to systems of many particles with pair interactions between them. In this case, the generalization of the two-particle equation to \(N\) particles is:
\[\left[\sqrt{\left(\sum_{i=1}^{N}\hat{E}_{i}\right)^{2}-\left(\sum_{i=1}^{N} \hat{\mathbf{p}}_{i}\right)^{2}}-\frac{1}{2}\sum_{i}^{N}\sum_{j\neq i}^{N}V_{ ij}\right]\psi(\{\mathbf{r}_{i}\})=E\psi(\{\mathbf{r}_{i}\}), \tag{28}\]
where \(\hat{E}_{i}=\sqrt{m_{i}^{2}+\mathbf{\hat{p}}_{i}^{2}}\) is the energy operator of the \(i\)-th particle, and \(m_{i}\) and \(\hat{\mathbf{p}}_{i}\) are its mass and momentum operator. The only constraint on the particle momenta in the \(N\)-particle bound state is
\[<\psi(\{\mathbf{r}_{j}\})|\hat{\mathbf{p}}_{i=1,..,N}|\psi(\{\mathbf{r}_{j}\}) >=0. \tag{29}\]
**CRediT authorship contribution statement**
This article has one author.
**Declaration of Competing Interest**
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
**Data availability**
No data was used for the research described in the article.
|
2310.09642 | Robot Imitation from Video Demonstration | This paper presents an attempt to replicate the robot imitation work
conducted by Sermanet et al., with a specific focus on the experiments
involving robot joint position prediction. While the original study utilized
human poses to predict robot joint positions, this project aimed to achieve
robot-to-robot imitation due to the challenges of obtaining human-to-robot
translation data. The primary objective was to provide a neural network with
robot images and have it predict end-effector positions through regression. The
paper discusses the implementation process, including data collection using the
open-source RoboSuite, where a Python module was developed to capture
randomized action data for four different robots. Challenges in data
collection, such as oscillations and limited action variety, were addressed
through domain randomization. Results show high testing error and
unsatisfactory imitation due to overfitting, necessitating improvements in the
project. | Venkat Surya Teja Chereddy | 2023-10-14T19:09:14Z | http://arxiv.org/abs/2310.09642v1 | # Robot Imitation from Video Demonstration
###### Abstract
This paper presents an attempt to replicate the robot imitation work conducted by Sermanet et al. [1], with a specific focus on the experiments involving robot joint position prediction. While the original study utilized human poses to predict robot joint positions, this project aimed to achieve robot-to-robot imitation due to the challenges of obtaining human-to-robot translation data. The primary objective was to provide a neural network with robot images and have it predict end-effector positions through regression. The paper discusses the implementation process, including data collection using the open-source RoboSuite, where a Python module was developed to capture randomized action data for four different robots. Challenges in data collection, such as oscillations and limited action variety, were addressed through domain randomization. Results show high testing error and unsatisfactory imitation due to overfitting, necessitating improvements in the project.
## I What I Set Out to Do
The goal I had in mind when I first started this project was to replicate the work of Sermanet et al. [1], specifically the robot imitation from the experiments section. In the original paper, the authors used human poses to predict the joint positions of the robot. I, however, did not plan for human-to-robot translation in this project, primarily because I would need to manually collect human-pose-to-joint-space data, which is hard given that this is a solo project. Therefore, I planned for robot-to-robot imitation, since data collection is simpler in simulation. In simple terms, the idea of this project is to give a neural network an image of the robot and have the network predict the end-effector positions, using regression.
For the training process, I first learn a TCN representation space using a triplet-loss implementation. Once the TCN representation space is learned, I train a feed-forward regression network that maps the TCN embeddings to the end-effector positions of the robot (it should be noted that the original paper trains on the joint space instead of end-effector positions).
## II Implementation (and the Difficulties I Faced)
### _Collecting Data: Part 1_
For this project, I need quality data of robots (preferably more than 3 robots) performing random actions. After researching the internet, I found an open-source project: RoboSuite. This project is primarily used for reinforcement learning research and provides a Gym-like API. The project also implements random actions; however, these random actions are not good, as they produce oscillations and the action variety is limited.
I built a Python module with RoboSuite as the backend for random-action data collection. This module supports domain-randomized, random-action data collection for 4 robots: Panda, Sawyer, IIWA and Jaco. It saves the image and end-effector positions at every timestep. On top of this, the module supports action replay, given an array of end-effector positions.
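A condensed sketch of the collection loop is shown below. The environment name, the observation keys (`frontview_image`, `robot0_eef_pos`), and the saving format follow recent RoboSuite versions and are assumptions that may need adjusting; domain randomization is omitted for brevity.

```python
import numpy as np
import robosuite as suite

def collect_episode(robot_name, n_steps=120, seed=0):
    """Roll out random actions and record (image, end-effector position) pairs."""
    rng = np.random.default_rng(seed)
    env = suite.make(
        env_name="Lift",            # any task works; we only need the arm moving
        robots=robot_name,          # "Panda", "Sawyer", "IIWA" or "Jaco"
        has_renderer=False,
        has_offscreen_renderer=True,
        use_camera_obs=True,
        camera_names="frontview",
    )
    obs = env.reset()
    low, high = env.action_spec    # per-dimension action bounds
    frames, eef_positions = [], []
    for _ in range(n_steps):
        action = rng.uniform(low, high)              # random action per timestep
        obs, _, _, _ = env.step(action)
        frames.append(obs["frontview_image"])        # rendered RGB frame
        eef_positions.append(obs["robot0_eef_pos"])  # end-effector position
    env.close()
    return np.stack(frames), np.stack(eef_positions)

for robot in ["Panda", "Sawyer", "IIWA", "Jaco"]:
    images, positions = collect_episode(robot)
    np.savez(f"{robot}_episode.npz", images=images, positions=positions)
```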
### _Training TCN and Regression: Part 1_
The first time I trained, I made two big mistakes:
#### II-B1 Not Using a Validation Dataset
Not using a validation dataset made it very hard for me to know whether the TCN had overfit, which it surely did to some extent, as I found when I checked the TCN representation space on a new dataset.
#### II-B2 Regression Never Converged Because I Collected the Wrong Data
Unknown to me, the RoboSuite library uses delta changes as inputs and outputs for its end-effector poses during action replay. When I tried to train using this data, the network never converged, as expected. All the data I had collected was therefore, unfortunately, useless.
### _Collecting Data: Part 2_
I changed my Python module so that it works with global positions instead of delta positions at each timestep. I collected 80 videos (120 frames each) of the four robots, with domain randomization every 60 frames.
### _Learning TCN and Regression: Part 2_
This time I split the main dataset into training and validation sets. Instead of first training the TCN and then training the regression network, I used a different approach: training them together. After trial and error, I used the following loss function:
\[\mathcal{L}_{TCN}=max(Dist(\phi(A),\phi(P))-Dist(\phi(A),\phi(N))+m,0)\]
\[\mathcal{L}_{Regression}=MSE(J_{A},j_{A})+MSE(J_{p},j_{p})+MSE(J_{n},j_{n})\]
\[\mathcal{L}_{total}=0.4*\mathcal{L}_{TCN}+\mathcal{L}_{Regression}\]
_Where, \(\phi(x)\) = TCN network, A = Anchor image, P = Positive image, N = Negative image, m = margin (Hyperparameter, I
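A PyTorch sketch of this combined objective is given below; the encoder `tcn`, the regression head `reg`, and the default margin value are assumptions, with \(J\) the predicted and \(j\) the ground-truth end-effector positions as in the equations above.

```python
import torch.nn.functional as F

def combined_loss(tcn, reg, anchor, positive, negative,
                  j_a, j_p, j_n, margin=1.0, w_tcn=0.4):
    """Joint TCN triplet + regression objective from the equations above."""
    phi_a, phi_p, phi_n = tcn(anchor), tcn(positive), tcn(negative)
    # Triplet term on the TCN embeddings (squared Euclidean distances).
    d_ap = (phi_a - phi_p).pow(2).sum(dim=1)
    d_an = (phi_a - phi_n).pow(2).sum(dim=1)
    loss_tcn = F.relu(d_ap - d_an + margin).mean()
    # Regression term: predicted vs. ground-truth end-effector positions.
    loss_reg = (F.mse_loss(reg(phi_a), j_a)
                + F.mse_loss(reg(phi_p), j_p)
                + F.mse_loss(reg(phi_n), j_n))
    return w_tcn * loss_tcn + loss_reg
```

|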
2303.11135 | TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization | Recent years have seen the ever-increasing importance of pre-trained models
and their downstream training in deep learning research and applications. At
the same time, the defense for adversarial examples has been mainly
investigated in the context of training from random initialization on simple
classification tasks. To better exploit the potential of pre-trained models in
adversarial robustness, this paper focuses on the fine-tuning of an
adversarially pre-trained model in various classification tasks. Existing
research has shown that since the robust pre-trained model has already learned
a robust feature extractor, the crucial question is how to maintain the
robustness in the pre-trained model when learning the downstream task. We study
the model-based and data-based approaches for this goal and find that the two
common approaches cannot achieve the objective of improving both generalization
and adversarial robustness. Thus, we propose a novel statistics-based approach,
Two-WIng NormaliSation (TWINS) fine-tuning framework, which consists of two
neural networks where one of them keeps the population means and variances of
pre-training data in the batch normalization layers. Besides the robust
information transfer, TWINS increases the effective learning rate without
hurting the training stability since the relationship between a weight norm and
its gradient norm in standard batch normalization layer is broken, resulting in
a faster escape from the sub-optimal initialization and alleviating the robust
overfitting. Finally, TWINS is shown to be effective on a wide range of image
classification datasets in terms of both generalization and robustness. Our
code is available at https://github.com/ziquanliu/CVPR2023-TWINS. | Ziquan Liu, Yi Xu, Xiangyang Ji, Antoni B. Chan | 2023-03-20T14:12:55Z | http://arxiv.org/abs/2303.11135v1 | TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization
###### Abstract
Recent years have seen the ever-increasing importance of pre-trained models and their downstream training in deep learning research and applications. At the same time, the defense against adversarial examples has been mainly investigated in the context of training from random initialization on simple classification tasks. To better exploit the potential of pre-trained models in adversarial robustness, this paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks. Existing research has shown that since the robust pre-trained model has already learned a robust feature extractor, the crucial question is how to maintain the robustness in the pre-trained model when learning the downstream task. We study the model-based and data-based approaches for this goal and find that the two common approaches cannot achieve the objective of improving both generalization and adversarial robustness. Thus, we propose a novel statistics-based approach, the **T**wo-**WI**ng **N**ormali**S**ation (TWINS) fine-tuning framework, which consists of two neural networks where one of them keeps the population means and variances of pre-training data in the batch normalization layers. Besides the robust information transfer, TWINS increases the effective learning rate without hurting the training stability since the relationship between a weight norm and its gradient norm in a standard batch normalization layer is broken, resulting in a faster escape from the sub-optimal initialization and alleviating the robust overfitting. Finally, TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness. Our code is available at [https://github.com/ziquanliu/CVPR2023-TWINS](https://github.com/ziquanliu/CVPR2023-TWINS).
## 1 Introduction
The adversarial vulnerability of deep neural networks (DNNs) [60] is one of the major obstacles for their wide applications in safety-critical scenarios such as self-driving cars [19] and medical diagnosis [20]. Thus, addressing this issue has been one focus of deep learning research in the past eight years. Existing works have proposed to improve adversarial robustness from different perspectives, including data augmentation [22, 49, 54, 57], regularization [38, 44, 52] and neural architecture [24, 29].
Figure 1: The performance of fine-tuning robust and non-robust large-scale pre-trained (PT) ResNet50 [27, 56] on CIFAR10 [35] and Caltech-256 [23]. We compare standard adversarial training (AT), Learning without Forgetting (LwF) (_model approach_) [40], joint fine-tuning with UOT data selection (_data approach_) [47] and our TWINS fine-tuning. The robust accuracy is evaluated using \(l_{\infty}\) norm bounded AutoAttack [12] with \(\epsilon=8/255\). On CIFAR10, the data-based and model-based approach fail to improve clean and robust accuracy. On Caltech, both approaches improve the clean accuracy but hurt the robust accuracy. Our TWINS fine-tuning improves the clean and robust performance on both datasets. The pink triangle denotes the performance of standard AT with the non-robust pre-trained ResNet50, which drops considerably compared with fine-tuning starting from the robust pre-trained model.
However, most existing works investigate the problem under the assumption that the training data is sufficient and training from scratch gives satisfactory performance, which is not realistic in the real world. There are a large number of computer vision tasks where training from scratch is inferior to training from pre-trained weights, such as fine-grained image classification (e.g., Caltech-UCSD Birds-200-2011 or CUB200 [61]), object detection [43] and semantic segmentation [50].
On the other hand, pre-trained models have been considered as the foundation models in deep learning [5] as a result of their strong performance and wide employment in computer vision [18, 25, 26, 46], as well as natural language processing [6, 14, 53]. Thus, how to better use the pre-trained model in downstream has emerged as a major research topic in many vision and language tasks, such as image classification under distribution shifts [48, 64], object detection [37] and semantic segmentation [36, 30]. There are a few papers that investigate the pre-trained model's robustness in target tasks [7, 8, 16, 32, 33, 58, 63]. [7, 58] mainly considers the transfer between small-scale datasets (e.g., CIFAR100 to CIFAR10), while [8, 33] use adversarial robust pre-training and fine-tuning on the same dataset, without considering a large-scale and general pre-trained model. Finally, [16, 63] investigate different kinds of robustness to corruption or out-of-distribution samples, and are not devoted to adversarial robustness.
In this paper, we consider how to transfer the adversarial robustness of a large-scale robust pre-trained model (e.g., a ResNet50 pre-trained on ImageNet [13] with adversarial training) on various downstream classification tasks when fine-tuning with adversarial training. This problem setting is becoming more important as the standard pre-trained models do not learn robust representations from the pre-training data and are substantially weaker than the robust pre-trained counterparts in some challenging downstream tasks, e.g., fine-grained classification as shown in our experiment. Meanwhile, more large-scale robust pre-trained models are released (e.g., ResNet [56] and ViT [4]), which makes the robust pre-trained models more accessible. However, naively applying adversarial training to fine-tune from the robustly pre-trained model will lead to suboptimal robustness, since the robust representations learned by the robust pre-trained model are not fully utilized. For example, [58] suggests that the robustness from a pre-trained model needs to be explicitly maintained for its better transfer to the downstream.
Following the idea that the key to improving the transferability of robustness is to maintain the robustness of the pre-training stage during fine-tuning [58], we first evaluate the data-based and model-based approaches on two representative datasets, CIFAR10 and Caltech-256. The data-based approach uses pre-training data during fine-tuning and keeps its performance under adversarial attack, while the model-based approach regularizes the distance between the features of the fine-tuned and pre-trained models. Our experiment shows that both methods fail to improve the robustness and generalization (Fig. 1), since the two methods are too aggressive in retaining the robustness and hurt the learning of the downstream task. Thus, we propose a subtle approach that keeps the batch-norm (BN) statistics of pre-training for preserving the robustness, which we call **T**wo-**WI**ng **N**ormali**S**ation (TWINS) fine-tuning. TWINS has two neural networks with fixed and adaptive BN layers respectively, where the fixed BN layers use the population means and STDs of pre-training for normalization, while the adaptive BN layers use the standard BN normalization.
Figure 2: The TWINS structure and training pipeline. **(a)** The Frozen Net and Adaptive Net have the same structure and share the weight parameters, except for batch normalization (BN) layers. The Frozen Net uses pre-trained means and standard deviations (STD) in the normalization layer, while Adaptive Net uses the mean and STD computed from the current batch as in standard BN. **(b)** In each step of mini-batch stochastic gradient descent (SGD), we split the batch of adversarial examples, generated from attacking the Adaptive Net, into two sub-batches and feed them to the Adaptive Net and Frozen Net respectively. The loss of two networks are combined and back-propagated to their shared parameters to train the network. In the _inference stage_, only the Adaptive Net is used.
Our experiment first demonstrates the importance of pre-trained BN statistics in robust fine-tuning and then identifies the benefit of TWINS for adversarial training dynamics. As the relationship between a weight norm and its gradient norm no longer holds in TWINS, it is able to increase the gradient magnitude without increasing the gradient variance. At the initial training stage, TWINS escapes from the suboptimal initialization faster than vanilla adversarial training [42]. At the final training stage, the gradient of TWINS is more stable than that of adversarial training, which alleviates the robust overfitting effect [57]. In summary, the contributions of our paper are as follows:
1. We focus on the fine-tuning of _large-scale_ robust pre-trained models as a result of their potential importance in various downstream tasks. We evaluate current approaches to retain the pre-training robustness in fine-tuning, and show that they cannot substantially improve the robustness.
2. We propose TWINS, a statistics-based approach for better transferability of robustness and generalization from the pre-training domain to the target domain. TWINS has two benefits: a) it keeps the robust statistics for downstream tasks, thus helping to transfer the robustness to downstream tasks, and b) it enlarges the gradient magnitude without increasing the gradient variance, thus helping the model escape from the initialization faster and mitigating robust overfitting. The mechanisms of these two benefits are validated by our empirical study.
3. The effectiveness of TWINS is corroborated on five downstream datasets by comparing with two popular adversarial training baselines, adversarial training (AT) [49] and TRADES [65]. On average, TWINS improves the clean and robust accuracy by 2.18% and 1.21% compared with AT, and by 1.46% and 0.69% compared with TRADES. The experiment shows the strong potential of robust pre-trained models in boosting downstream's robustness and generalization when using more effective fine-tuning methods.
## 2 Related Work
**Adversarial defense.** There are several major approaches to improving the adversarial robustness of DNNs. The training of DNNs can be regularized to induce biases that are beneficial to adversarial robustness, such as locally linear regularization [52], margin maximization [15, 44] and Jacobian regularization [31]. The most commonly used adversarial defense is adversarial training (AT) [49], which directly trains the DNN on adversarial examples generated from PGD attack. Later, TRADES [65] is proposed to add a KL regularization to AT and achieves stronger adversarial robustness. Our paper proposes TWINS to improve adversarial training in the fine-tuning stage when the initial model is adversarially pre-trained. We compare TWINS-AT and TWINS-TRADES with vanilla AT and TRADES in our experiment and show the strong effectiveness of TWINS in the robust fine-tuning setting.
**Fine-tuning for downstream robustness.** Several aspects of robustness in pre-training and fine-tuning have been studied in existing works. Adversarial contrastive learning [8, 33] is proposed to pre-train on a dataset with contrastive learning and then fine-tune on the _same_ dataset, without considering the transferability of robustness from a large-scale pre-trained model to a _different_ downstream task. In contrast, our paper investigates a more general problem, where task-specific pre-training is not needed for a new task as we use one robust large-scale pre-trained model trained on ImageNet. [56] considers the robust pre-training on the large-scale ImageNet and its transfer to downstream tasks, but focuses on the performance on clean instead of adversarial images. The Learning-without-Forgetting (LwF) [40] approach for retaining robustness is shown to be effective in the _small-scale_ transfer experiment [58], but is not effective in our experiment setting of transfer of large-scale models. [32] proposes a learning rate schedule to improve the adversarial robustness of fine-tuned models, and [17] proposes robust informative fine-tuning for pre-trained language models to robustly keep pre-training information in downstream. The difference between [17, 32] and our work is that they assume a standard pre-trained model instead of the adversarial pre-trained model. [16, 63] investigate the performance of pre-trained models in downstream tasks, but the focus is the robustness to out-of-distribution samples instead of adversarial perturbations.
**Batch normalization.** There are existing papers proposing the two-branch BN structure for different purposes with different technical details. [59] proposes dual normalization for a better trade-off between accuracy and robustness, where the normalization is a weighted sum of normalized clean and adversarial input. [62] proposes a similar two-branch BN structure, where one branch is for adversarial examples and the other is for clean examples. The major difference between our work and [59, 62] is that both BN branches in TWINS are for adversarial examples and one branch (Frozen Net) has fixed BN statistics from pre-training so as to better maintain the pre-trained robustness, whereas [59, 62] uses clean examples in BN and aims to improve the accuracy for clean images.
## 3 The Model-based and Data-based Approach to Retaining Adversarial Robustness
This section introduces the two common approaches for keeping adversarial robustness of pre-training in downstream, model-based and data-based approaches. Denote the feature vector output of a neural network as \(g_{\mathbf{\theta}}(\mathbf{x})\), the training sample in the downstream task as \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\sim\mathbb{P}\), and the loss function as \(\mathcal{L}(\mathbf{w}^{T}g_{\mathbf{\theta}}(\mathbf{x})+b,\mathbf{y})\), where \((\mathbf{w},b)\)
are the parameters of the last classification layer. We assume that the pre-trained model is trained on adversarial examples generated from the PGD attack [49], where the \(l_{\infty}\) norm of the adversarial attack is bounded by \(\epsilon\), and during fine-tuning we use adversarial training with the same PGD attack to obtain adversarial robustness in downstream tasks. In short, we consider the robust pre-training and robust fine-tuning setting in this paper, if not specified otherwise.
Model-based approaches.We first introduce the model-based approach, which keeps the pre-trained model \(\mathbf{\theta}_{pt}\) during fine-tuning so as to maintain its robustness. The objective function is
\[\sum_{(\mathbf{x}_{n},\mathbf{y}_{n})\sim\mathbb{P}}\mathcal{L}(\mathbf{w}^{T}g_{\mathbf{ \theta}}(\tilde{\mathbf{x}}_{n})+b,\mathbf{y}_{n})+\lambda_{LwF}\|g_{\mathbf{\theta}_{pt}} (\tilde{\mathbf{x}}_{n})-g_{\mathbf{\theta}}(\tilde{\mathbf{x}}_{n})\|_{2}, \tag{1}\]
where the adversarial example generated from \(\mathbf{x}_{n}\) is denoted as \(\tilde{\mathbf{x}}_{n}\). The regularization term of the loss aims to minimize the distance between the features from the pre-trained and the fine-tuned models, which is expected to maintain the robustness of the pre-trained model. This approach is originally proposed in [40] to prevent the catastrophic forgetting in continual learning, and is used in [58] to preserve adversarial robustness in transfer learning. Note that [58] uses the LwF method in _standard_ fine-tuning instead of robust fine-tuning as in our paper.
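As a concrete illustration, a PyTorch sketch of this objective is given below; the feature extractor `g_theta`, its frozen pre-trained copy `g_pt`, the classification head `head`, and the adversarial examples `x_adv` are assumed to be given.

```python
import torch
import torch.nn.functional as F

def lwf_loss(g_theta, head, g_pt, x_adv, y, lam_lwf):
    """Adversarial training loss with the LwF feature-distance regularizer (Eq. (1))."""
    feats = g_theta(x_adv)                  # features of the fine-tuned model
    task_loss = F.cross_entropy(head(feats), y)
    with torch.no_grad():                   # the pre-trained extractor stays frozen
        feats_pt = g_pt(x_adv)
    reg = (feats_pt - feats).norm(p=2, dim=1).mean()
    return task_loss + lam_lwf * reg
```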
Data-based approaches.The objective function of the data-based approach is
\[\sum_{(\mathbf{x}_{n},\mathbf{y}_{n})\sim\mathbb{P}} \mathcal{L}(\mathbf{w}^{T}g_{\mathbf{\theta}}(\tilde{\mathbf{x}}_{n})+b,\mathbf{y }_{n})\] \[+\lambda_{UOT}\sum_{(\mathbf{x}_{m},\mathbf{y}_{m})\sim\mathbb{Q}} \mathcal{L}(\mathbf{w}_{q}^{T}g_{\mathbf{\theta}}(\tilde{\mathbf{x}}_{m})+b_{q},\mathbf{y}_{m }), \tag{2}\]
where \(\mathbb{P}\) and \(\mathbb{Q}\) are data distribution of the target and the pre-training tasks, \(\mathbf{w}_{q},b_{q}\) are the classification layer for pre-trained data from \(\mathbb{Q}\). This method regularizes the current fine-tuned model feature extractor so that its prediction is still robust on the pre-training data. The joint training method is proposed in [47] to improve the performance of fine-tuning in downstream tasks where the training data is not sufficient.
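A corresponding sketch of the data-based objective (Eq. (2)), assuming separate classification heads for the target and pre-training label spaces:

```python
import torch.nn.functional as F

def joint_loss(g_theta, head_tgt, head_pt, x_adv_tgt, y_tgt,
               x_adv_pt, y_pt, lam_uot):
    """Joint adversarial fine-tuning on target and pre-training data (Eq. (2));
    head_pt plays the role of the separate classifier (w_q, b_q) for Q."""
    loss_tgt = F.cross_entropy(head_tgt(g_theta(x_adv_tgt)), y_tgt)
    loss_pt = F.cross_entropy(head_pt(g_theta(x_adv_pt)), y_pt)
    return loss_tgt + lam_uot * loss_pt
```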
Next, we test the performance of these two approaches on two standard image classification datasets, CIFAR10 [35] and Caltech-256 [23], with results shown in Figure 1. We use a grid search for the learning rate and \(\lambda_{LwF}\) (\(\lambda_{UOT}\)) and report the result of the model with the best robust accuracy. See Section 5 and the supplemental material for the experiment setting. On CIFAR10, both approaches fail to improve either clean or robust accuracy; on Caltech-256, the two approaches improve the clean accuracy by a small margin but deteriorate the robustness. One reason why the model- and data-based approaches fail is that the regularization term might be too strong, thus hurting the learning of the downstream task.
## 4 TWINS Fine-Tuning
The previous section demonstrates that both data- and model-based approaches cannot substantially improve the adversarial robustness in the downstream task. Thus, we propose TWINS for better fine-tuning of robust pre-trained models toward downstream adversarial robustness.
### Proposed Method
Though BN layers contain only a few parameters compared to convolution and fully connected layers, they play an important role in the good performance of DNNs. [21] shows that even if we only train the BN layers, the performance of a DNN is already quite impressive. [51] finds that only training the parameters in BN layers in an image generator is effective for small datasets. [39] proposes adaptive BN for domain adaptation, which updates the BN statistics with data from a target domain. These works motivate us to propose a statistics-based approach for retaining pre-training information in the target task.
Figure 3: The mean and normalized STD of gradient norms in AT and TWINS-AT on CIFAR10, CIFAR100 and Caltech-256. The averaged \(\mu(\|\nabla\mathbf{w}\|)\) and \(\sigma(\|\nabla\mathbf{w}\|)/\mu(\|\nabla\mathbf{w}\|)\) over epochs are shown in each plot. The gradient magnitudes of TWINS-AT are substantially larger than those of AT, while the Normalized STDs of gradient norm in TWINS-AT are not obviously increased (CIFAR10) or even decreased (CIFAR100 and Caltech-256) compared with AT. This property leads to the faster escaping speed of TWINS-AT from the initial sub-optimum compared with AT (Fig. 4), and reduced robust overfitting (Tab. 1).
Typical BN layers track the mean and STD of the training set and save them for the inference stage. As this distribution information for each layer might be helpful for downstream robustness, we propose the TWINS robust fine-tuning, which maintains two networks, _Frozen Net_ that uses the BN statistics from the robust pre-trained model, and _Adaptive Net_ that learns its BN statistics from the downstream task. Instead of using two independent networks for the Frozen and Adaptive Net, we let the two networks share weight parameters, excluding the BN layers, to save the model size and inference time. At initialization, both networks and their BN statistics are initialized by the robust pre-trained models. During training, the Frozen Net uses the population means and STDs of pre-training data computed in the pre-training stage in the normalization operation, while the Adaptive Net uses the current batch's mean and STD in the normalization and updates its running mean and STD with the target training data. Fig. 2 shows the general pipeline of TWINS training and the network structure.
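A minimal PyTorch sketch of the shared-weight, two-BN building block is given below; the layer shape and the `branch` flag are illustrative assumptions, and in the actual model this pattern would be applied to every BN layer of the ResNet50.

```python
import torch.nn as nn
import torch.nn.functional as F

class TwinsConvBlock(nn.Module):
    """Conv layer whose weights are shared by a frozen-BN and an adaptive-BN branch."""

    def __init__(self, c_in, c_out, pt_mean=None, pt_var=None):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)  # shared weights
        self.bn_adaptive = nn.BatchNorm2d(c_out)  # standard BN, stats from target data
        self.bn_frozen = nn.BatchNorm2d(c_out)    # normalizes with pre-training stats
        if pt_mean is not None:                   # load pre-training population stats
            self.bn_frozen.running_mean.copy_(pt_mean)
            self.bn_frozen.running_var.copy_(pt_var)

    def forward(self, x, branch="adaptive"):
        h = self.conv(x)
        if branch == "adaptive":
            return self.bn_adaptive(h)
        # Frozen branch: training=False forces the use of the stored running
        # statistics, so the pre-training means/variances are never overwritten.
        return F.batch_norm(h, self.bn_frozen.running_mean,
                            self.bn_frozen.running_var,
                            self.bn_frozen.weight, self.bn_frozen.bias,
                            training=False)
```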
The training objective of TWINS with adversarial training (TWINS-AT) in one mini-batch is:
\[\sum_{j=1}^{B/2}\mathcal{L}(\mathbf{w}^{T}g_{\mathbf{\theta}_{a}}(\tilde{ \mathbf{x}}_{j}^{(a)})+b,\mathbf{y}_{j})+ \tag{3}\] \[\lambda_{twins}\sum_{i=B/2+1}^{B}\mathcal{L}(\mathbf{w}^{T}g_{\mathbf{ \theta}_{f}}(\tilde{\mathbf{x}}_{i}^{(a)})+b,\mathbf{y}_{i}), \tag{4}\]
where \(\mathbf{\theta}_{f}\) and \(\mathbf{\theta}_{a}\) denote the Frozen and Adaptive Nets respectively, and \(\tilde{\mathbf{x}}^{(a)}\) is the adversarial image for \(\mathbf{x}\) when attacking the Adaptive Net \(\mathbf{\theta}_{a}\). We split the batch into two different sub-batches to avoid doubled batch sizes in TWINS training. Since the Frozen and Adaptive Nets share weight parameters, the number of parameters in TWINS is only increased by a very small amount (i.e., BN parameters). Thus, TWINS-AT only has a negligible cost in terms of memory and training time compared with vanilla AT. The TWINS structure can also be used in the TRADES in a similar way. Similar to [39], we can use the target training set to update the BN statistics so that they are more relevant to the downstream task. We call this procedure warmup in TWINS fine-tuning. The pseudo codes of TWINS-AT and TWINS-TRADES are given in Alg. 1 of the supplemental.
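A sketch of one TWINS-AT mini-batch step, assuming the model exposes a `branch` flag as in the block above and that `pgd_attack` crafts adversarial examples against the Adaptive Net:

```python
import torch.nn.functional as F

def twins_at_step(model, optimizer, pgd_attack, x, y, lam_twins=1.0):
    """One SGD step of TWINS-AT (Eqs. (3)-(4)): attack the Adaptive Net, then
    split the adversarial batch between the two branches of the shared net."""
    x_adv = pgd_attack(model, x, y, branch="adaptive")
    half = x.size(0) // 2
    loss_adaptive = F.cross_entropy(model(x_adv[:half], branch="adaptive"), y[:half])
    loss_frozen = F.cross_entropy(model(x_adv[half:], branch="frozen"), y[half:])
    loss = loss_adaptive + lam_twins * loss_frozen
    optimizer.zero_grad()
    loss.backward()        # both branches back-propagate into the shared weights
    optimizer.step()
    return loss.item()
```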
### The mechanism of TWINS
The first benefit of TWINS is mentioned in the motivation of TWINS, i.e., the BN statistics have robustness information in the pre-training domain that can be leveraged by robust fine-tuning to improve the downstream robustness. This argument is validated by our ablation study in Section 5, where we initialize the means and STDs with 1.0 and 0.0 for Frozen Net instead of the pre-trained means and STDs and check the accuracy and robustness. Figure 5 shows that the TWINS with (1,0) initialization cannot match the performance with TWINS with pre-trained statistics, indicating that the robustness information in pre-training is essential to the effectiveness of TWINS.
It is intriguing that even with the (0,1) initialization, TWINS still outperforms the AT baseline in terms of robustness or accuracy on some datasets, e.g., Stanford Dogs and CIFAR100. This suggests that besides retaining the pre-training information, TWINS provides some other benefit during robust fine-tuning. By analyzing the gradient of TWINS, we find that TWINS implicitly increases the effective learning rate without increasing the oscillation, and we empirically validate this finding. We detail this analysis next.
**Effective learning rate** We first write the gradient of a weight vector for TWINS training. Consider the \(l\)-th layer's weight \(\mathbf{w}_{j}^{(l)}\) and its output after BN layer
\[\tilde{h}_{ij}^{(l)} =\frac{\hat{h}_{ij}^{(l)}-\frac{2}{B}\sum_{k=1}^{B/2}\hat{h}_{kj} ^{(l)}}{\sqrt{\frac{2}{B}\sum_{k=1}^{B/2}(\hat{h}_{kj}^{(l)}-\hat{\mu}_{j}^{(l) })^{2}}} \tag{5}\] \[=\frac{\mathbf{w}_{j}^{(l)}{}^{T}(\mathbf{h}_{i}^{(l-1)}-\mathbf{\mu}^{(l-1)} )}{\sqrt{\frac{2}{B}\sum_{k=1}^{B/2}(\mathbf{w}_{j}^{(l)}{}^{T}(\mathbf{h}_{i}^{(l-1) }-\mathbf{\mu}^{(l-1)}))^{2}}}, \tag{6}\]
where we denote \(h_{ij}^{(l)}\), \(\hat{h}_{ij}^{(l)}\) and \(\tilde{h}_{ij}^{(l)}\) as the outputs of the ReLU, the convolution or fully connected layer, and the BN layer, respectively, for the \(i\)-th sample and \(j\)-th output variable at the \(l\)-th layer.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Method & Rob. Acc. & C10 & C100 & Caltech & CUB & Dogs \\ \hline \multirow{3}{*}{AT} & Best \(\uparrow\) & 51.84 & 31.38 & 49.09 & 27.08 & 21.19 \\ & Final \(\uparrow\) & 49.41 & 28.52 & 48.37 & 26.60 & 19.80 \\ & Gap \(\downarrow\) & 2.43 & 2.86 & 0.73 & 0.48 & 1.39 \\ \hline \multirow{3}{*}{TWINS-AT} & Best \(\uparrow\) & 53.23 & 31.60 & 48.80 & 29.24 & 20.89 \\ & Final \(\uparrow\) & 52.40 & 31.08 & 48.40 & 29.24 & 20.58 \\ \cline{1-1} & Gap \(\downarrow\) & **0.83** & **0.52** & **0.40** & **0.00** & **0.29** \\ \hline \end{tabular}
\end{table}
Table 1: The robust accuracy drop of AT and TWINS-AT, where the adversarial attack is PGD10. TWINS-AT has smaller accuracy drop compared with AT, indicating that the TWINS-AT is less prone to robust overfitting as a result of reduced variance of gradient norms and stable training as shown in Fig. 3.
Figure 4: The distance between the current step’s weight \(\mathbf{\theta}\) and the initialization \(\mathbf{\theta}_{pt}\). On the three datasets, TWINS-AT has a faster escaping speed from the sub-optimal initial model than AT, which is due to TWINS-AT’s larger gradient norm as shown in Figure 3.
The gradient with respect to \(\mathbf{w}_{j}^{(l)}\) is
\[\nabla_{a}\mathbf{w}_{j}^{(l)} =\frac{\partial\mathcal{L}_{a}(\tilde{\mathbf{x}}_{i})}{\partial\mathbf{w}_ {j}^{(l)}}=\frac{\nabla\tilde{h}_{ij}^{(l)}(\mathbf{h}_{i}^{(l-1)}-\mathbf{\mu}^{(l-1)}) }{\sqrt{\frac{2}{B}\sum_{k=1}^{B/2}({\mathbf{w}_{j}^{(l)}}^{T}(\mathbf{h}_{i}^{(l-1)}- \mathbf{\mu}^{(l-1)}))^{2}}}\] \[-\frac{\nabla\tilde{h}_{ij}^{(l)}{\mathbf{w}_{j}^{(l)}}^{T}(\mathbf{h}_{i }^{(l-1)}-\mathbf{\mu}^{(l-1)})}{(\frac{2}{B}\sum_{k=1}^{B/2}({\mathbf{w}_{j}^{(l)}}^ {T}(\mathbf{h}_{i}^{(l-1)}-\mathbf{\mu}^{(l-1)}))^{2})^{3/2}}\mathbf{\Sigma}^{(l-1)}\mathbf{w}_ {j}^{(l)},\]
where \(\mathbf{\mu}^{(l-1)}\) and \(\mathbf{\Sigma}^{(l-1)}\) are the mean and covariance matrix from the \((l-1)\)-th layer, and \(\nabla\tilde{h}_{ij}^{(l)}\) denotes the gradient of the loss with respect to \(\tilde{h}_{ij}^{(l)}\). If we write the weight as its norm \(\|\mathbf{w}_{j}^{(l)}\|\) multiplied by the unit vector \(\mathbf{u}_{j}^{(l)}\), there is a relationship between \(\nabla\mathbf{w}_{j}^{(l)}\) and \(\|\mathbf{w}_{j}^{(l)}\|\):
\[\|\nabla_{a}\mathbf{w}_{j}^{(l)}\|=\frac{1}{\|\mathbf{w}_{j}^{(l)}\|}\|\nabla_{a}\mathbf{u }_{j}^{(l)}\| \tag{7}\]
This relationship has been found in [2, 28] and extended to any scale-invariant layers such as layer normalization [3] by [41]. It means that there are two ways to increase the gradient magnitude or the effective learning rate: 1) find a steeper descent direction where \(\|\nabla_{a}\mathbf{u}_{j}^{(l)}\|\) is increased and 2) decrease the weight norm so that \(1/\|\mathbf{w}_{j}^{(l)}\|\) is increased. For a DNN with standard BN layers, the training will exploit this property to increase the gradient magnitude by reducing the weight norms with the help of weight decay regularization, which leads to a spurious increase in gradient magnitude and a larger variance of gradient estimation. In contrast, for a DNN with fixed BN layers such as the Frozen Net, the gradient norm is not correlated with weight norm,
\[\nabla_{f}\mathbf{w}_{j}^{(l)}=\frac{\partial\mathcal{L}_{f}(\tilde{\mathbf{x}}_{i})} {\partial\mathbf{w}_{j}^{(l)}}=\nabla\tilde{h}_{ij}^{(l)}\frac{\mathbf{h}_{i}^{(l-1)} }{\sigma_{j,pt}^{(l)}}. \tag{8}\]
In this gradient, the only way to increase the gradient magnitude is to find the actual steeper direction. The overall gradient for the weight is
\[\Delta\mathbf{w}_{j}^{(l)}=\nabla_{a}\mathbf{w}_{j}^{(l)}+\lambda_{twins}\nabla_{f} \mathbf{w}_{j}^{(l)}. \tag{9}\]
**Empirical Study** To see the difference between the gradients in AT and TWINS-AT, we record the gradients of all weight parameters at each step of the two training methods and compute the mean and STD of the gradient norm in each epoch. The model parameters and their gradients are treated as long vectors, and we compute the \(l_{2}\) norm of the weight and gradient vectors. Figure 3 shows the mean and normalized STD of gradient norms over 60 training epochs for three datasets. Here we show the normalized STD, i.e., the STD divided by the mean, to see the relative effect of the variance. The major finding is that the gradient magnitude of TWINS-AT is substantially larger than that of AT, while the variance of TWINS-AT is lower than that of AT in most epochs. Note that in theory, the ratio between STD and mean should remain the same after down-scaling the weight norm in standard BN, but in practice we do observe a high normalized variance for DNNs with standard BN, since we use one epoch's gradients to approximate the variance and mean.
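A sketch of how such statistics can be logged; as described above, the \(l_{2}\) norm treats all weight gradients as one long vector.

```python
import torch

def global_grad_norm(model):
    """l2 norm of all weight gradients, treated as one long vector."""
    grads = [p.grad.detach().flatten()
             for p in model.parameters() if p.grad is not None]
    return torch.cat(grads).norm(p=2).item()

# In the training loop, after loss.backward():
#     grad_norms.append(global_grad_norm(model))
# and per epoch, as plotted in Fig. 3:
#     mu = sum(grad_norms) / len(grad_norms)
#     std = (sum((g - mu) ** 2 for g in grad_norms) / len(grad_norms)) ** 0.5
#     normalized_std = std / mu
```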
One benefit of the larger gradient magnitude is that the model can escape from the initial sub-optimal point faster and find a better local optimum than optimization with small gradients [42]. We validate this hypothesis by recording the distance between the current model and the initial model during training. Figure 4 shows these weight distances on three datasets, where TWINS-AT moves away from the initial model much faster than the AT baseline. The robust overfitting effect of adversarial training [55] is partially a result of large gradient variance, especially at the final stage of training [9]. We compare the robust accuracy drop of AT and TWINS-AT in Table 1 and find that the smaller relative variance of TWINS-AT has the effect of reducing robust overfitting. See the experiment details in Sec. 5 and the supplemental.
## 5 Experiment
This section presents our experiment with TWINS. We first introduce our experiment setting and then show our main result and ablation study.
### Experiment Settings
**Dataset.** We use five datasets in our experiment. CIFAR10 and CIFAR100 [35] are low-resolution image datasets, where the training and validation sets have 50,000 and 10,000 images, and CIFAR10 has 10 classes, while CIFAR100 has 100 classes. Caltech-256 [23] is a high-resolution dataset with 30,607 images and 257 classes, which is split into training and validation set with a ratio of 9:1. Caltech-UCSD Birds-200-2011 (CUB200) [61] is a high-resolution bird image dataset for fine-grained image classification, which contains 200 classes of birds, 5,994 training images and 5,794 validation images. Stanford Dogs [34] has high-resolution dog images from 120 dog categories, where the training and validation set has 12,000 and 8,580 images. For both low-resolution image datasets (CIFAR10 and CIFAR100) and high-resolution datasets (Caltech-256,CUB200 and Stanford Dogs), we resize the image to 224\(\times\)224 so that the input sizes are the same for pre-training and fine-tuning. As with pre-training, the input image is normalized by the mean and STD of the pre-training set. Note that the resizing and normalization function is integrated into the model so we can attack the input image with the [0,1] bounds for pixel values as in standard adversarial attacks. We use the standard ImageNet data
augmentation for high-resolution datasets [27]. For CIFAR datasets, we use random cropping with padding=4 and random horizontal flipping.
**Adversarial Pre-Training.** Large-scale adversarial pre-training on ImageNet is time-consuming, and thus [56] has released adversarially pre-trained ResNet50 and WideResNet50-2, trained with \(l_{2}\) and \(l_{\infty}\) norm bounded attacks. In this paper, we adopt the pre-trained ResNet50 models, trained with \(l_{\infty}\) attack with bound \(\epsilon_{pt}=4/255\). We test other robust pre-trained models in our ablation study.
**Training Setting.** For the baselines and our method, we train all parameters of the pre-trained model, i.e., _full fine-tuning_ instead of linear probing [58], with PGD attacks of \(l_{\infty}\) norm. The number of PGD steps is 10, with \(\epsilon_{ft}=8/255\) and stepsize \(\alpha=2/255\). We set the batch size to 128, train the model for 60 epochs, and decay the learning rate by a factor of 10 at the 30th and 50th epochs. In TWINS with warmup, we initialize the means and STDs with their pre-trained values and update them using the target training set. The momentum for updating the statistics is 0.1, the batch size is 128, and the warmup lasts only one epoch. Note that in the warmup stage, the input samples carry adversarial perturbations generated by the PGD attack, which has the same setting as the attack in training, and the pre-trained classifier layer is used for crafting the attack. Our pilot experiment shows that using adversarial examples as input is more effective than using clean examples in the warmup. The optimizer is SGD with momentum in all of our experiments. The learning rate, weight decay and regularization hyperparameters are determined by grid search, which is described in detail in the supplemental.
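For reference, a standard \(l_{\infty}\) PGD sketch with the training-time settings (10 steps, \(\epsilon=8/255\), \(\alpha=2/255\)); pixel values are clipped to \([0,1]\) since, as noted above, the resizing and normalization are integrated into the model.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, **model_kwargs):
    """l_inf PGD: random start, then iterated signed-gradient ascent with
    projection onto the eps-ball around x and clipping to the valid pixel range."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, **model_kwargs), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```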
**Adversarial robustness evaluation.** Two standard adversarial attacks are used in our experiment, i.e., PGD and AutoAttack [12]. The adversarial perturbation is \(l_{\infty}\) norm bounded in our evaluation. The setting of the PGD attack on the validation set is the same as that of the PGD attack in training. AutoAttack (AA) is a more reliable adversarial attack and has been used for evaluation more often than PGD in recent years. We use the standard attacks of AA, i.e., untargeted APGD-CE, targeted APGD-DLR, targeted FAB [11] and Square Attack [1], with \(\epsilon=8/255\). The robust accuracy in our experimental results denotes the accuracy under AA.
### Experimental Result
On the three high-resolution datasets, we compare the performance of fine-tuning different initialization models, i.e., random initialization, standard pre-trained ResNet50 and robust pre-trained ResNet50. Table 3 shows that the
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|l|} \hline Metric & Method & CIFAR10 & CIFAR100 & Caltech256 & CUB200 & Stanford Dogs \\ \hline \multirow{4}{*}{Clean Acc.} & AT & 89.77 & 69.48 & 75.90 & 65.74 & 60.09 \\ & TWINS-AT & 91.24(+1.47) & 70.72(+1.24) & 76.86(+0.96) & 68.09(+2.35) & 64.98(+4.89) \\ & TWINS-AT+warmup & 91.95(+2.18) & 72.12(+2.64) & 77.35(+1.45) & 67.64(+1.90) & 66.12(+6.03) \\ \cline{2-7} & TRADES & 87.06 & 62.76 & 69.70 & 58.92 & 59.99 \\ & TWINS-TRADES & 86.61(-0.45) & 66.72(+3.96) & 71.12(+1.42) & 60.72(+1.80) & 60.58(+0.59) \\ & TWINS-TRADES+warmup & 86.60(-0.46) & 65.91(+3.15) & 73.39(+3.69) & 61.05(+2.13) & 63.96(+3.97) \\ \hline \multirow{4}{*}{PGD10} & AT & 52.24 & 28.52 & 48.37 & 26.60 & 19.80 \\ & TWINS-AT & 52.73(+0.49) & 31.08(+2.56) & 48.40(+0.03) & 29.24(+2.64) & 20.58(+0.78) \\ & TWINS-AT+warmup & 52.46(+0.22) & 29.12(+0.60) & 49.13(+0.76) & 27.67(+1.07) & 19.48(-0.32) \\ \cline{2-7} & TRADES & 54.04 & 32.20 & 47.28 & 27.87 & 21.36 \\ & TWINS-TRADES & 56.23(+2.19) & 33.51(+1.31) & 47.31(+0.03) & 27.05(-0.82) & 19.93(-1.43) \\ & TWINS-TRADES+warmup & 55.81(+1.77) & 33.48(+1.28) & 48.53(+1.25) & 26.68(-1.19) & 19.46(-1.90) \\ \hline \multirow{4}{*}{AA} & AT & 48.46 & 23.47 & 43.85 & 22.82 & 12.30 \\ & TWINS-AT & **49.81(+1.35)** & **26.73(+3.26)** & 43.69(-0.16) & 22.33(-0.49) & **14.37(+2.07)** \\ \cline{1-1} & TWINS-AT+warmup & 49.02(+0.56) & 25.72(+2.25) & **43.92(+0.07)** & **23.58(+0.75)** & 13.80(+1.50) \\ \cline{1-1} \cline{2-7} & TRADES & 50.31 & 26.40 & 43.39 & 22.21 & 12.05 \\ \cline{1-1} & TWINS-TRADES & **51.71(+1.40)** & 28.29(+1.89) & 41.77(-1.62) & **22.68(+0.47)** & **13.36(+1.31)** \\ \cline{1-1} & TWINS-TRADES+warmup & 51.10(+0.79) & **28.30(+1.90)** & **43.55(+0.16)** & 21.92(-0.29) & 10.94(-1.11) \\ \hline \end{tabular}
\end{table}
Table 2: The performance of our TWINS-AT and TWINS-TRADES on five image classification tasks compared with AT and TRADES. The clean accuracy means the accuracy when testing images are input without adversarial perturbations. PGD10 and AA denote the robust test accuracy under PGD10 and AutoAttack. The increase and decrease in performance are denoted with green and red numbers. The **bold** numbers denote the best robust accuracy under AA. The proposed TWINS achieves better robustness and clean accuracy compared with the baseline. Averaged over the datasets, the clean and robust accuracy of TWINS are increased by 2.18% and 1.21% compared with AT, and 1.46% and 0.69% compared with TRADES. The means and STDs of the performance are in the supplemental.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Metric & PT Model & Caltech256 & CUB200 & Dogs \\ \hline \multirow{4}{*}{Clean Acc} & Random & 48.99 & 12.38 & 7.27 \\ & Non-Robust & 64.66 & 53.30 & 41.99 \\ & Robust & **75.90** & **65.74** & **60.09** \\ \hline \multirow{4}{*}{PGD10} & Random & 31.78 & 3.728 & 3.59 \\ & Non-Robust & 39.86 & 19.68 & 13.32 \\ \cline{1-1} & Robust & **48.37** & **26.60** & **19.80** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of random initialization, non-robust and robust pre-trained model on three difficult classification tasks, when fine-tuned with AT. The robust pre-trained model is indispensable to downstream robustness.
pre-trained models are essential to the accuracy and robustness in challenging downstream tasks, since the random initialization is much worse than the two pre-trained models. The robust pre-trained model has a clear benefit over the standard pre-trained one, indicating that robust pre-training is indispensable to downstream robustness.
Table 2 shows the results of TWINS-AT and TWINS-TRADES compared with the baselines. Since AA is a more reliable attack, we highlight the best robust accuracy under AA on each dataset. TWINS fine-tuning learns more robust DNNs and achieves better clean accuracy on all five datasets, demonstrating the strong effectiveness of TWINS in the robust pre-training and robust fine-tuning setting. On CIFAR10 and CIFAR100, both TWINS-AT and TWINS-TRADES achieve better robustness and clean accuracy than their baselines, while the warmup only improves the clean accuracy and sometimes hurts the robustness. On Caltech-256, TWINS-AT improves upon vanilla AT in both robustness and accuracy, but TWINS-TRADES does not perform better than vanilla TRADES. However, the warmup boosts the performance of both TWINS-TRADES and TWINS-AT and makes TWINS with warmup perform better than the baselines.
On the two fine-grained image classification datasets, TWINS-AT and TWINS-TRADES generally perform better than the baselines in terms of accuracy and robustness under AA, where only TWINS-AT on CUB200 has slightly worse robust accuracy than its baseline. We find that the warmup improves the clean accuracy but hurts the robustness in most cases, except for CIFAR100 and Caltech-256. This can be a result of noisy adversarial perturbations, since we use the pre-trained classifier layer in the adversarial attack, or of an insufficient number of update steps. Nevertheless, we note that the warmup can be considered as an operation for achieving a trade-off between robustness and accuracy.
### Ablation Study
**TWINS initialization.** We use the pre-trained BN statistics in the Frozen Net to keep the robust information learned during pre-training. To show the importance of the pre-trained BN statistics, we use the standard initialization (mean=0 and STD=1) for BN statistics in the Frozen Net, denoted as TWINS-Init, and show the result on the five datasets in Figure 5. Both clean and robust accuracy drop when the (0,1) initialization is used in TWINS-Init, demonstrating the crucial role of pre-trained statistics in TWINS. The fact that TWINS-Init sometimes improves upon the AT baseline motivates us to investigate the effect of TWINS on gradient norms in Section 4.
**Effect of weight decay.** Weight decay is the reason for the decreasing weight norm in DNNs with BN layers, so increasing the weight decay hyperparameter \(\lambda_{WD}\) is also a way to increase the gradient magnitude. Table 4 shows the results of TWINS-AT and AT when different \(\lambda_{WD}\) are used for fine-tuning the robust pre-trained model on CIFAR10. The robust accuracy of TWINS-AT is consistently better than that of AT across different \(\lambda_{WD}\)'s. We draw the same conclusion on CIFAR100 (see supplemental). Note that the clean accuracy of TWINS-AT drops when a large \(\lambda_{WD}\) is used, suggesting that we should not use too large a \(\lambda_{WD}\) for TWINS-AT.
**Different robust pre-trained models.** The main experiment uses the robust pre-trained ResNet50 with \(\epsilon_{pt}=4/255\) as the initial model. We try different robust pre-trained models with different \(\epsilon_{pt}\) in Table 5, which shows that a larger \(\epsilon_{pt}\) is beneficial to both clean and robust accuracy in the downstream, and the proposed TWINS-AT is better than AT in both metrics with different pre-trained models.
## 6 Conclusion
This paper investigates the utility of robust pre-trained models in various downstream classification tasks. We first find that the commonly used data- and model-based approaches to maintaining pre-training information do not work in adversarially robust fine-tuning.
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Metric} & \multicolumn{4}{c|}{\(\lambda_{WD}\)} \\ \cline{3-6} & & 1e-5 & 1e-4 & 1e-3 & 1e-2 \\ \hline \multirow{2}{*}{AT} & Clean Acc. & 89.92 & 89.94 & 89.77 & **90.28** \\ & Robust Acc. & 46.34 & 44.67 & 48.46 & 47.57 \\ \hline \multirow{2}{*}{TWINS-AT} & Clean Acc. & **91.90** & **91.42** & **91.24** & 87.33 \\ & Robust Acc. & **47.19** & **49.07** & **49.81** & **51.65** \\ \hline \end{tabular}
\end{table}
Table 4: The performance of TWINS-AT and AT on CIFAR10 when the hyperparameter for weight decay \(\lambda_{WD}\) is changed. The robust accuracy is evaluated using AutoAttack. Our TWINS-AT achieves better adversarial robustness than AT for different \(\lambda_{WD}\).
Figure 5: Ablation study where the BN statistics in TWINS are initialized with (0,1) for means and STDs, denoted as TWINS-Init. The population mean and STD of pre-training are crucial to TWINS.
\begin{table}
\begin{tabular}{|c|l|l|l|l|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Metric} & \multicolumn{4}{c|}{\(\epsilon_{pt}\)} \\ \cline{3-5} & & 1/255 & 2/255 & 4/255 & 8/255 \\ \hline \multirow{2}{*}{AT} & Clean Acc. & 65.47 & 67.08 & 69.48 & 69.93 \\ & Robust Acc. & 24.76 & 25.79 & 23.47 & 27.71 \\ \hline \multirow{2}{*}{TWINS-AT} & Clean Acc. & **68.55** & **70.45** & **70.72** & **72.59** \\ & Robust Acc. & **25.97** & **26.62** & **26.73** & **28.62** \\ \hline \end{tabular}
\end{table}
Table 5: The performance of TWINS-AT and AT on CIFAR100 when robust pre-trained ResNet50 with different \(\epsilon_{pt}\) are used.
We then propose a subtle statistics-based method, TWINS, for retaining the pre-training robustness in the downstream task. In addition to the robustness-preserving effect, we find that TWINS increases the gradient magnitude without sacrificing training stability and improves the training dynamics of AT. Finally, the performance of TWINS is shown to be stronger than that of AT and TRADES on five datasets. One limitation of our work is that we only evaluate the robust supervised pre-trained ResNet50. Recently, robust pre-trained ViTs on ImageNet [4] have been released. Our statistics-based approach can be extended to layer normalization, for which the increased-gradient-magnitude argument also holds, and thus future work will extend TWINS to ViT.
**Acknowledgement** This work was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 11215820) and the Fundamental Research Funds for the Central University of China (DUT No. 82232031).
|